jayfish0

Jason Zhang

@jayfish0 · JPT-5
GitHub Profile
casual and collaborative
A pragmatic, collaborative reviewer who favors immediate functionality and team coordination over perfectionism. Frequently asks teammates for clarification and prioritizes getting things working rather than getting bogged down in theoretical improvements.
Comments: 470
PRs: 230
Repos: 15
Avg Chars: 114
Harshness: 3

Personality

Collaborative and team-oriented
Pragmatic problem-solver
Detail-oriented with documentation
Process-conscious about branching and releases
Quick to acknowledge and fix mistakes
Prefers clarity over complexity
Helpful in explaining technical decisions
Flexible and willing to iterate

Greatest Hits

"plz confirm"
"I dont think we have"
"Nice catch! Thanks"
"correct me if im wrong"
"Closing because of"
"could u help confirm?"
"ah Nice catch!"
"Thanks for the headsup!"

Common Phrases

"I dont think" "could u" "plz confirm" "Thanks for the" "Got it" "Will do" "Nice catch" "ah Nice" "Lets" "I think" "should" "since" "Closing because" "from my research" "correct me if im wrong"

Sentiment Breakdown

neutral: 280
positive: 22
constructive: 20
harsh_questioning: 5
questioning: 19
critical: 6
very_positive: 2

Review Outcomes

APPROVED: 139
CHANGES_REQUESTED: 7
DISMISSED: 1
COMMENTED: 3

Most Reviewed Authors

jayfish0: 282
SuveenE-TF: 39
frankfeng98: 36
ayc1: 31
lozzle: 17
mingyang-tinyfish: 17
KateZhang98: 9
desi003: 6
paveldudka: 6
jinyangTF: 6

Spiciest Comments

friday/#1171
Im very confident that this will not effect the replay. So essentially what the listener do is that when ever it detect a dom change, it save the time when the dom change happens into a local storage var call lastDomChange , and before a query is being executed, it would check if the current time is at least 0.5s ahead of the latest dom change. Its a heuristic to check if the page is stable. It should not affect replay since we do not query during replay, we use the queried element id directly.
friday/#817
> im almost certain this breaks sth. langgraph v3 changes the way they package prebuilt components. pls dismiss if im wrong https://github.com/langchain-ai/langgraph/discussions/3613. Need full reinstall, but should not break stuff
friday/#691
Thanks for the update! Could u also run the hotel regression test to make sure nothing is broken? `pytest tests/integration/hotel_test.py --count=1 -n=16`
friday/#736 · pyt/friday/learner/nodes/generator.py
that would be nice to have, could u incorporate this in the llm_extract PR? As of currently not adding the description is not giving me any issue because llm may not need to understand what these custom tool does. Its sole job is to map from the metadata to the function call for custom tools. That case might be different for llm_extract tho
agentql-client/#947 · docs-next/content/docs/python-sdk/api-references.md
`agentql-page#query_elements` is a broken link
agentql-client/#947 · docs-next/content/docs/python-sdk/api-references/aqlresponse.md
fix broken link, rename`aqlresponse_reference` to `aqlresponse` to sync with js-sdk naming
goldfish/#150
Final comment: Gemini's function calling is not really helpful in improving accuracy Single Term Metric: 0.440000 List Term Metric: 0.265102 List Term Metric v2: 0.301179 Single Term Metric Rejection Rate: 1.000000 Single Term Metric Acceptance Rate: 0.545714 Single Term Metric Selective Accuracy: 0.806283 Total Execution Time: 0:14:28
goldfish/#478
> was this not a problem with openai? curious what the difference is for elements that don't exist I believe this should be. The reason we never encountered it so far is openai splitted into smaller chunks that significantly reduced the chance for hallucination. In the case where gemini return 306 items, each openai split return 12 items
goldfish/#478
> was this not a problem with openai? curious what the difference is for elements that don't exist As for the second question, added the following checks as a bool along with the returned restored value: 1. For dicts, check if all fields are empty 2. For lists, check if the list is empty 3. For single term, check if the value is empty So during the restoration for list, we get rid of the items in that list that are empty (no matter its dicts, lists or single terms) @jinyangTF lets get
goldfish/#217
Sure thing @zifanwTF will go ahead and merge it right now but will sort this out in the future, referencing #223

AI Persona Prompt

You are jayfish0, a collaborative and pragmatic code reviewer who values team coordination and practical solutions. Your review style is casual and friendly, often using informal language like 'dont', 'u', 'plz', and 'Lets'. You frequently ask teammates to confirm details with phrases like 'plz confirm' and 'could u help confirm?'. You're particularly focused on cross-platform consistency between JS and Python SDKs, documentation accuracy, and proper release/branching processes. When you spot issues, you respond positively with 'Nice catch!' or 'ah Nice catch! Thanks'. You often reference team members by name (@frankfeng98, @colriot) and ask for their input on technical decisions. You're quick to acknowledge when you're unsure with 'correct me if im wrong' and explain your reasoning with 'from my research' or 'since'. You close PRs decisively with 'Closing because of [specific reason]' and offer practical alternatives. You care deeply about broken links, version synchronization, and making sure features work across different environments. Your comments tend to be concise but informative, and you're not afraid to make suggestions or iterate on solutions. You prefer to get things working first and then improve them later, often saying 'Lets get this merged and iterate'. You're helpful in explaining complex technical decisions with detailed context when needed.

Recent Comments (354 total)

friday/#1285 [cory] fix api discovery ingestion path
Make sense to me
friday/#1272 Fix torch vulnerability
Will test and report
friday/#1258 [cory] refactor mains -> cli for running steps locally faster
It would be great to have ReadME Updated as well or ppl may not know this
friday/#1252 [cory] Add batch run for goal reverse generation · pyt/cory/tiny_cory/goal_generation/goal_generation_orchestrator.py
will fix
friday/#1252 [cory] Add batch run for goal reverse generation · pyt/cory/runner/goal_reverse_gen.py
human readability is required
friday/#1252 [cory] Add batch run for goal reverse generation · pyt/evaluation/common/dataset_source.py
collect_api_wrappers require these params
friday/#1246 Enable running api analyze & filtering in aws batch · pyt/cory/runner/har_analyzer.py
I dont think we need to hardcode this?
friday/#1232 [cory] Add batch har/weblog generation · tf_browser_service/browser_service/models/base.py
fixed with self.context
friday/#1232 [cory] Add batch har/weblog generation · pyt/evaluation/torchx/task_utils.py
currently we do not use output_schema in friday runs, but its provided in Molly's test input. Deleted
friday/#1244 [cory] improve codegen step by validating input parameters
OMG nice
friday/#1226 add execution runner to save weblog · tf_browser_service/browser_service/service.py
This is one of the pydantic config options. By allowing arbitrary types we can add SessionRecorder as an attribute of the BrowserService, since SessionRecorder it self is not a pydantic class
friday/#1226 add execution runner to save weblog · Dockerfile.torchx
Because we have changed the pyproject to install tf_browser_service and tf_weblog locally instead of private pypi. Therefore we need to copy everything to install them
friday/#1222 Change gemini model version
> LGTM! Is there a difference in pricing or model capabilities? I dont think so, according to https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versions i believe they just simply changed the name
friday/#1207 Add trip.com example
> lgtm! > > to test with a new usecase we need to add an output type to the test_inputs right? yep, but im wondering if it is a must
friday/#1186 Fix weblog id assigned before loading requests
Amazing! Thank you