Basic LLM Chain node calling a model is extremely slow
Bug Description
We have noticed that using the Basic LLM Chain node with a model (in this case Gemini 2.0), passing down a binary image for vision analysis, is extremely slow, i.e. during execution a lot of time is spent in the LLM Chain node.

For comparison, we made a call to the Gemini REST API with the very same set of images, and the results are very unfavourable for the LLM Chain + model node combo.
Time to process 2 images with the LLM Chain + Gemini model node combo:
Basic LLM Chain node: 35181ms
Model node: 4790ms
Time spent in the HTTP Request node uploading the very same 2 images:
HTTP Request node: 5062ms
Something is really fishy here: both use the same system prompt, asking the LLM to extract markdown from a given image of a document.
To Reproduce
Use the included workflow to process a couple of invoice images and the attached test invoices. Obviously, I cannot share the documents we are processing, but the sample images show 3-7x more time spent in the LLM Chain + model node combo vs. the HTTP Request node for the same images.
Extract_markdown_from_an_image.json
Expected behavior
We expected the LLM Chain node to be as performant as a direct REST call to the LLM API. In our tests, it is multiple times slower.
Operating System
Ubuntu
n8n Version
1.80.4
Node.js Version
whatever the n8n Docker image comes with
Database
PostgreSQL
Execution mode
main (default)