Currently, using `clearml-agent daemon --stop` terminates the agent abruptly and aborts the running task.
I’d like a way to ensure the agent finishes its current task and then stops without picking up new tasks from the queue.
This would be useful for scenarios where I need to free up resources (e.g., a GPU) for manual work without disrupting ongoing jobs.
I tried one approach: setting `agent.reload_config: true` in the configuration file, hoping the agent would reload its config between tasks. Then, when I need the agent to stop, I'd define `agent.downtime` in the config to pause it. However, this doesn't seem to work: the config isn't reloaded dynamically between tasks as I expected.
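For reference, this is roughly what I had in mind for the config file (the downtime window value below is illustrative; the exact `"<hours> <days>"` syntax should be checked against the ClearML docs):

```
# ~/clearml.conf (excerpt; values are illustrative)
agent {
    # hope: the daemon re-reads this file between tasks
    reload_config: true

    # added when I want the worker to pause after its current task;
    # a window meant to cover every hour of every day
    downtime: ["0-24 SUN-SAT"]
}
```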
I also tried modifying the queues of a worker via the API. My idea was that if I could remove the default queue, or swap it for a dummy queue with no tasks, the worker would naturally stop after finishing its current task.
But that doesn't seem to be possible either.
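For completeness, this is a sketch of what I was attempting with the Python `APIClient`. I could only find read access to a worker's queue list; the `queues` attribute on each worker is my assumption based on the `workers.get_all` response, and I found no endpoint to rewrite it:

```python
# Sketch of the API approach (inspection only): there appears to be
# no endpoint for changing the set of queues a worker serves.
from clearml.backend_api.session.client import APIClient

client = APIClient()

# List each worker and the queues it is currently pulling from.
for worker in client.workers.get_all():
    queues = getattr(worker, "queues", None) or []
    print(worker.id, [q.name for q in queues])
```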