Manav

@manav-tf
diplomatic but defensive - explains thoroughly while justifying decisions
Manav is a practical, detail-oriented reviewer who focuses on explaining the 'why' behind changes and providing thorough context. He tends to be defensive when questioned but maintains a collaborative tone, often engaging in extended discussions to clarify implementation decisions.
Comments: 128
PRs: 90
Repos: 14
Avg Chars: 89
Harshness: 2

Personality

Explanatory and context-heavy
Defensive of their own work
Practical problem-solver
Detail-oriented
Collaborative but assertive
Process-focused
Security-conscious
Documentation-oriented

Greatest Hits

"My bad. The PR summary and title is misleading and I forgot to give some context"
"Apologies for mixing both of them"
"we want to send judgement/summary after completion event to make it async"
"This is most important. we need this to trace."
"How else can each dataset define their own goals?"
"It is still good to have the fall back because sometimes"

Common Phrases

"My bad" "Apologies for" "I will" "we want to" "This is" "because" "still" "that is why" "How else" "it is" "we need" "I can" "please let me know" "Thanks!" "understood"

Sentiment Breakdown

questioning: 5
neutral: 56
constructive: 3
positive: 1

Review Outcomes

APPROVED: 64

Most Reviewed Authors

manav-tf: 59
jinxz01: 39
ayc1: 14
paveldudka: 6
KateZhang98: 2
lozzle: 2
zackermax-tinyfish: 2
simantak-dabhade: 2
taha-tf: 1
frankfeng98: 1

AI Persona Prompt

You are @manav-tf, a thoughtful and context-driven code reviewer. Your reviews are characterized by detailed explanations and a strong focus on the 'why' behind implementation decisions. When reviewing code, you tend to:

1. Provide extensive context and background information, often starting with "My bad" or "Apologies for" when clarifying your own work
2. Ask practical questions like "How else can we..." when discussing alternative approaches
3. Focus heavily on infrastructure concerns like ports, deployments, error handling, and security vulnerabilities
4. Use phrases like "we want to", "we need", and "this is" when explaining requirements
5. Be defensive but diplomatic when your decisions are questioned, providing thorough justifications
6. Include links to external resources, test runs, and examples to support your points
7. Emphasize tracing, debugging, and maintaining consistency across codebases
8. End comments with collaborative phrases like "please let me know if any changes are needed. Thanks!"

You're not harsh, but you're thorough and sometimes verbose. You care deeply about proper abstraction, avoiding code duplication, and ensuring robust error handling. When suggesting changes, you explain the reasoning and often provide multiple options. You're security-conscious and always thinking about the broader system implications of changes. Your tone is professional but conversational, and you're not afraid to admit when you need clarification or made a mistake.

Recent Comments (65 total)

friday/#1323 Create a script to auto-populate .env file
@taha-tf how to deal with this vulnerability? It was ignored in the osv-scanner.toml file.
agentql-apps/#454 fix classpass api key variable name
Upgraded pip version in docker image to fix this vulnerability : https://github.com/tinyfish-io/agentql-apps/actions/runs/18854366449/job/53798459246
agentql-apps/#444 Fix CD errors · .github/workflows/container_CD.yml
My bad. The PR summary and title is misleading and I forgot to give some context The github permissions issue was solved in aws-control-customer-apps repo. a healthy container is running on ecs. The port was changed because I defined Port 8000 in my ecs as it is a more commonly used port so I just changed it here. As for the aws cli command, the ih-aws --verbose was copied from ux-labs but I wa
agentql-apps/#444 Fix CD errors · .github/workflows/container_CD.yml
I tested the CD on a test branch and it succeeded. Here is the run: https://github.com/tinyfish-io/agentql-apps/actions/runs/18700498262/job/53328005351
agentql-apps/#444 Fix CD errors · .github/workflows/container_CD.yml
I defined the port to be 8000 here because it is more commonly used: https://github.com/tinyfish-io/aws-control-customer-apps/blob/main/modules/customer-apps/locals.tf That is why I changed the port in my image. The port change has nothing to do with fixing the CD. Apologies for mixing both of them. I can change it to 8001 in aws-control-customer-apps
agentql-apps/#444 Fix CD errors · .github/workflows/container_CD.yml
okay thanks, I will switch to our company runner
agentql-apps/#438 Test CD trigger
Test works successfully
agentql-apps/#437 Push image to ECR using CD · .github/workflows/container_CD.yml
yes, i am just testing with sandbox. if it works, I will add prod too
ux-labs/#1565 Add 3 new endpoints to MCP · frontend/app/mcp/route.ts
yes, addressed and tested. Works well, going to merge now
ux-labs/#1486 Add sync API · frontend/app/v1/automation/run/route.ts
ok.
ux-labs/#1486 Add sync API · frontend/app/v1/automation/run/route.ts
my bad
ux-labs/#1486 Add sync API · frontend/app/lib/openapi/spec.ts
@paveldudka There are two ways we can get a 500 error: 1. Infrastructure errors: - Database failures - `createUserRun()` fails (can't write to DB) - Redis connection issues - Can't subscribe to run events - Queue failures - Can't enqueue the run for execution This returns a `apiErrorResponseSchema` 2. Run Fails: When EVA returns a 500 error. This returns the `automationRunErrorResponseSche
ux-labs/#1486 Add sync API · frontend/app/v1/schemas.ts
in sucesss schema, error is always null and in failed schema, run_result is always null. but you are right, i can just unify both of them
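Editor's note: the unification agreed to here (one schema where `error` is always null on success and `run_result` is always null on failure) can be sketched in plain TypeScript. Field names other than `error` and `run_result` are assumptions, not the real schema.

```typescript
// Sketch of merging the separate success/failed run schemas into one shape,
// per the comment above: `error` is null on success, `run_result` is null on failure.
// Only `error` and `run_result` are names from the thread; the rest is illustrative.

interface UnifiedRunResponse {
  status: "completed" | "failed";
  run_result: Record<string, unknown> | null; // null when the run failed
  error: { message: string } | null;          // null when the run succeeded
}

function fromSuccess(result: Record<string, unknown>): UnifiedRunResponse {
  return { status: "completed", run_result: result, error: null };
}

function fromFailure(message: string): UnifiedRunResponse {
  return { status: "failed", run_result: null, error: { message } };
}
```

A single response shape keeps the OpenAPI spec simpler: clients branch on `status` (or on which field is null) instead of parsing two structurally different documents.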
ux-labs/#1486 Add sync API · frontend/app/lib/openapi/spec.ts
@coderabbitai does this implementation make sense? give feedback on this new design implemented
ux-labs/#1486 Add sync API
@paveldudka I added the endpoint to openapi spec. As far as the error is concerned there are two main issues right now. 1. Catching errors from eva : Currently, "Max steps" and "Max duration" reached error messages returning from eva are not being caught by ux-labs. They are considered as "Completed Runs". That is a bug that needs to be fixed in other PR. 2. Categorising Failed Runs : For this e