`vitest` seems to be an issue now that we have code agents #7818
Clear and concise description of the problem
`vitest` on its own is `vitest watch`, which I have always thought was great. However, with so much AI agent support now, I have noticed it can be an issue if you are allowing the agent to run tests etc. In an agent flow a watch isn't great, and I have found it leads to issues for obvious reasons.
Obviously the answer is to make your `test` script in package.json use `vitest run` instead, but I feel like run may be a better default now. Like I said, I love that watch is the default, but now that I'm starting to dip a toe into AI assistance it feels wrong as a default, because it can block the AI flow.
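For reference, the workaround I mean is roughly this in package.json (the `test:watch` script name is just an example):

```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest watch"
  }
}
```

With that, `npm test` exits once the suite finishes, and watch mode is still available explicitly for humans.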
Suggested solution
Change the default to `vitest run`, so AI agents, especially in YOLO mode, actually get the signal that tests have completed.

Alternative
Leave as is.
This is just an observation from someone who isn't even using AI heavily right now, but it feels like the current default interrupts the AI flow.
Additional context
Sometimes, especially in YOLO mode, an agent will run the tests to check whether the code is working, and if the test command defaults to watch mode it can get hung up.
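A sketch of the failure mode, assuming a typical npm project:

```sh
# Starts the watcher and never exits on its own, so an agent waiting
# for the command to finish just hangs
npx vitest

# Runs the suite once and exits with a pass/fail status code the agent can act on
npx vitest run
```

I believe Vitest already falls back to a single run when the `CI` environment variable is set, but agent tooling doesn't always set it, so the interactive default still bites.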