I have a blob-triggered Azure Function deployed on AKS, with KEDA scaling based on the number of blob entries.
I used Azure/azure-functions-host#10624 to make each function instance accept only one blob item. The problem is that all of the created pods read the same file, whereas with queue-based triggers and scaling, different queue elements are read by different function instances. As I understand it, the blob trigger internally uses queues to do its work, so why does it behave differently from a queue trigger?
P.S.: I move each file to a different folder after processing completes.
What I was able to find out is that each instance of the blob-triggered function creates its own queue and takes a lock there. Is it possible to have a single shared queue for all instances? I think that would solve my issue.
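For context, here is a minimal host.json sketch of the kind of settings I would expect to limit each instance to one work item at a time, assuming the v5.x Storage extension; the specific values for `blobs.maxDegreeOfParallelism` and `queues.batchSize` are illustrative, not necessarily my exact config:

```json
{
  "version": "2.0",
  "extensions": {
    "blobs": {
      "maxDegreeOfParallelism": 1
    },
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```

With `batchSize: 1` and `newBatchThreshold: 0`, a queue-triggered instance fetches one message at a time and does not prefetch a second one, which is the per-instance single-item behaviour I was aiming for with the blob trigger.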
Labels: host.json, keda