jinyangTF

Jin Yang

@jinyangTF · Protect main branch at all cost
GitHub Profile
casual and collaborative with humble undertones
A collaborative and thoughtful reviewer who combines technical precision with a conversational, approachable tone. Frequently acknowledges mistakes and asks clarifying questions, showing genuine engagement with the code and learning from discussions.
Comments: 1073
PRs: 532
Repos: 15
Avg Chars: 123
Harshness: 2

Personality

Humble and quick to acknowledge mistakes
Collaborative and team-oriented
Detail-oriented but not overly critical
Conversational and informal in communication
Appreciative and thanking teammates
Thoughtful about future implications
Self-aware about knowledge gaps
Practical and solution-focused

Greatest Hits

"my bad"
"Thanks Frank!"
"Right! I missed this"
"Got it! thanks"
"i think it is best that we discuss"
"let me know if theres a better place"
"Oh yea i can look into splitting it"
"imo i think they should be treated the same"

Focus Areas

Common Phrases

"I think" "thanks" "my bad" "let me know" "i will" "that means" "Right!" "Got it!" "Oh yea" "im curious" "imo" "looks good" "Thanks Frank" "As discussed" "i dont have enough context"

Sentiment Breakdown

positive: 32
questioning: 52
neutral: 665
constructive: 52
critical: 4
harsh_questioning: 5

Review Outcomes

APPROVED: 327
CHANGES_REQUESTED: 6
COMMENTED: 9

Most Reviewed Authors

jinyangTF: 512
ayc1: 153
SuveenE-TF: 99
mingyang-tinyfish: 55
KateZhang98: 39
jinxz01: 36
wjwjtf: 36
jayfish0: 27
lozzle: 21
paveldudka: 21

Spiciest Comments

friday/#817 [view]
im almost certain this breaks sth. langgraph v3 changes the way they package prebuilt components. pls dismiss if im wrong
friday/#736 [view]
okay those are just my 2 cents. i dont have context as to what the `PathFinder` team has in mind for `additional_tools`
friday/#746 · pyt/friday/learner/nodes/generator.py [view]
this is wrong. previously i was checking `script.result`, here u are checking `script.result.result`. Based on the way u have refactored the code, u can drop this guard clause
friday/#722 · pyt/friday/models/artifact_metadata.py [view]
i think it would be nice to add a `description` within the `Field` explaining why `xpaths` is a list, what the order is (exactly what u wrote in the PR description) + what the `message` is for
friday/#653 · pyt/app/graph.py [view]
this is wrong i think it shouldnt be here/used at all
friday/#318 · pyt/app/main.py [view]
1. if `not success`, what the llm outputs isnt a `goal` - it's a `question`. I think andrew's naming of it as `result` to unify the two attributes is fine, but naming it as `goal` is misleading.
2. You lose the `AIMessage` and `HumanMessage` context in your implementation, but not in andrew's/the current one in main branch. You can, however, let `call_llm` take in a `messages` array and i guess it will be effectively the same
3. In all 3 implementations (urs, andrew's, and the current one,
goldfish/#260 · app/response/prompts/gpt4_vision/generation/baseline.py [view]
This header is part of the system prompt. @zifanwTF can advise more
goldfish/#189 · app/response/prompts/claude/generation/scratchpad.py [view]
@zifanwTF @mingyang-tinyfish Our prompts now take in query processing configs and have individual test cases like what is on the main branch. I haven't got to implementing the `PromptManager` class that will deprecate these manual insertions. I can take over this PR and write the test cases if you would prefer that zifan?
goldfish/#420 [view]
> interesting that temp=0 gives lower consistency. do we know which exact test case it's failing and what the consistency difference is?

consistency is the same. i think u mean accuracy drops. that is possible when temp is lowered yea -> The correct token is the less probable one, and lowering the temp makes it such that they are less likely to get chosen, dropping accuracy
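The temperature effect described in that last comment can be sketched numerically. This is a minimal illustration (the two-token logits are made up, not from any real test case): when the correct token has a slightly lower logit, lowering the temperature sharpens the distribution toward the wrong token, so sampling accuracy drops.

```python
import math

def softmax_with_temperature(logits, temp):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = [l / temp for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-token case: index 0 is the wrong token, index 1 is
# the correct one, and the model slightly prefers the wrong one.
logits = [2.0, 1.5]

p_high = softmax_with_temperature(logits, temp=1.0)
p_low = softmax_with_temperature(logits, temp=0.2)

print(round(p_high[1], 3))  # → 0.378
print(round(p_low[1], 3))   # → 0.076
```

At temp=1.0 the correct token still gets sampled ~38% of the time; at temp=0.2 that falls to ~8%, matching the comment's point that lowering temperature makes the less probable (but correct) token even less likely to be chosen.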

AI Persona Prompt

You are jinyangTF, a collaborative and humble code reviewer. Your style is conversational and informal, using lowercase for 'i' and casual contractions. You frequently acknowledge your own mistakes with phrases like 'my bad' and 'Right! I missed this'. You're genuinely curious about implementation details and often ask clarifying questions starting with 'I think' or 'im curious'. You thank teammates often, especially 'Thanks Frank!' and other specific callouts. When you spot issues, you explain them thoughtfully rather than just pointing them out. You're not afraid to admit when you lack context ('i dont have enough context about'). You use 'imo' for opinions and 'yea' instead of 'yes'. You care about consistency, user experience, and proper error handling. You often suggest follow-up PRs rather than blocking current work. Your tone is supportive - you celebrate good work with 'looks good' and 'lgtm' while still catching important details. You reference offline discussions and cross-team coordination frequently. Keep responses conversational, humble, and technically insightful while maintaining your casual, collaborative voice.

Recent Comments (810 total)

friday/#637 Handle langgraph recursion limit reached error bugfix [view]
lgtm. lets get into the practice of reproducing the happy path of ur change at least once before we land any PRs. so in ur case, reproduce the fact that we force terminate at 97, and the run ends gracefully. u would have caught the mistake in ur last PR if u gave it a run
friday/#630 Gracefully terminate run with learnings and summary · pyt/app/components/summarizer.py [view]
huh why is this validation needed. `done` isnt optional in ur schema what
friday/#630 Gracefully terminate run with learnings and summary · pyt/app/components/summarizer.py [view]
remove this return statement and use the default one below
friday/#630 Gracefully terminate run with learnings and summary · pyt/app/components/summarizer.py [view]
i still think there should be an inheritance relationship between `SummarizerResponse` and `RecursionLimitSummarizerResponse`
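The inheritance relationship that comment argues for might look like the sketch below. The class names come from the comment; the fields are hypothetical stand-ins (the real models are presumably pydantic schemas, but plain dataclasses show the same idea):

```python
from dataclasses import dataclass

@dataclass
class SummarizerResponse:
    # Hypothetical base fields shared by every summarizer response.
    summary: str
    done: bool

@dataclass
class RecursionLimitSummarizerResponse(SummarizerResponse):
    # Extra field captured only when the recursion limit forces termination.
    learnings: str

# Any code that handles a SummarizerResponse also handles the subclass.
resp = RecursionLimitSummarizerResponse(
    summary="force-terminated at step 97", done=False, learnings="split the task"
)
print(isinstance(resp, SummarizerResponse))  # → True
```

The upside of the subclass relationship is that downstream consumers can treat both response types uniformly; the "Optional <> Compulsory" caveat conceded two comments later is the usual trade-off when a subclass would instead need to relax a required base field.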
friday/#630 Gracefully terminate run with learnings and summary · pyt/app/models/agent_graph.py [view]
oh is this attribute automatically updated by the langgraph module? Cool!
friday/#630 Gracefully terminate run with learnings and summary · pyt/app/components/summarizer.py [view]
I see! didnt think about the Optional <> Compulsory part. My bad and ur implementation is better
friday/#630 Gracefully terminate run with learnings and summary [view]
could u make a comment within the summarizer that we are using the same system prompt for both branches. We can consider splitting the system prompt in the future too if there is a need.
friday/#400 Log langsmith url for cancelled hotel tests · pyt/tests/integration/hotel_test.py [view]
Edit: i think u shouldnt name this `main`. @SuveenE-TF mentioned keeping the name distinct for each file
friday/#400 Log langsmith url for cancelled hotel tests · pyt/tests/integration/hotel_test.py [view]
```suggestion
if friday:
    await friday.browser.stop()
```
friday/#400 Log langsmith url for cancelled hotel tests · pyt/tests/integration/hotel_test.py [view]
sorry this is not the pythonic way of doing things. @SuveenE-TF is right
friday/#400 Log langsmith url for cancelled hotel tests [view]
only kate can stamp this
friday/#308 Implement open new tab tool · pyt/app/tools/open_new_tab.py [view]
i dont have enough context about whether url can be `None`, but if it can be, it is generally better to write the code in the form of a guard clause

```python
if not url:
    return
new_page.goto(url)
new_page.wait_for_page_ready_state()
```

So that the happy path is not nested
friday/#478 remove token metadata from message history [view]
wait this PR slipped through the cracks. I think it is a good change 😢
friday/#711 [learner] boilerplate for code gen + code validation for llm code · pyt/friday/learner/nodes/generator.py [view]
We should keep this yea. if `_validate_generated_code_format` throws an error, we dont need to go through the whole execution phase. This speeds up the process significantly for both us engineers and also the end-user next time
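The fail-fast check that comment defends could be sketched like this. The function name is taken from the comment; the `ast.parse` body is an assumption for illustration, not the repo's actual validator:

```python
import ast

def _validate_generated_code_format(code: str) -> None:
    """Hypothetical fail-fast check: reject LLM-generated code that will
    not even parse, before paying for the much slower execution phase."""
    try:
        ast.parse(code)
    except SyntaxError as exc:
        raise ValueError(f"generated code is not valid Python: {exc}") from exc

def run_generated(code: str) -> str:
    # Cheap validation first; only well-formed code reaches execution.
    _validate_generated_code_format(code)
    # ... expensive execution phase would go here ...
    return "executed"

print(run_generated("x = 1"))  # → executed
```

Catching the malformed output here, rather than deep inside the execution phase, is what "speeds up the process significantly" in the comment: the error surfaces immediately and can be fed back into the langgraph state for the next generation turn.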
friday/#711 [learner] boilerplate for code gen + code validation for llm code [view]
i think u should pull from main first. quite a lot has changed and i need to see how ur passing the error message back into the langgraph state for subsequent turns (if validation fails)