Add support for per-request metadata/headers in GenerativeModel.generate_content #698
Labels: p3, status:triaged, type:feature request
Description of the feature request:
Currently, the Gemini Python SDK only allows setting metadata (headers) at the client configuration level through `default_metadata`. This makes it difficult to set different headers for different requests, particularly when integrating with services like Helicone that require user-specific headers.

Other LLM client libraries (OpenAI, Anthropic) support setting headers on a per-request basis through parameters like `extra_headers`. Adding similar functionality to the Gemini SDK would improve flexibility and make it easier to integrate with proxy services.
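For context, this is roughly what the client-level approach looks like today. The model name and header value are placeholders, and `default_metadata` is assumed here to be the configuration-time argument referred to above:

```python
import google.generativeai as genai

# Today, headers can only be attached globally at configuration time,
# so every request made through this process shares the same metadata.
genai.configure(
    api_key="GEMINI_API_KEY",
    default_metadata=[("Helicone-User-Id", "user-123")],
)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Hello")
print(response.text)
```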
What problem are you trying to solve with this feature?
When using the Gemini API through a proxy service like Helicone, it's important to be able to set user-specific headers (like `Helicone-User-Id`) on a per-request basis. Currently, this requires creating a new client instance for each user or reconfiguring the client before each request, which is inefficient and cumbersome.

Ideally, I would like to be able to do something like this:
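A minimal sketch of the desired call; note that `extra_headers` is the parameter being proposed and does not exist in the SDK today, and the model name and header value are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Proposed: attach headers to a single request instead of the whole client.
response = model.generate_content(
    "Summarize the attached report.",
    extra_headers={"Helicone-User-Id": "user-123"},
)
```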
This would allow setting different headers for different requests without having to reconfigure the client or create multiple client instances.
Any other information you'd like to share?
Proposed implementation:
The `GenerativeModel.generate_content` method could be modified to accept an additional parameter like `metadata` or `extra_headers` that would be passed to the underlying client method. This would allow users to set headers on a per-request basis while maintaining backward compatibility.

Example implementation:
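A rough sketch of what the change could look like. Internal helper and attribute names such as `_prepare_request` and `_client` are assumptions about the SDK's internals, not verified against the current source:

```python
from typing import Optional


class GenerativeModel:
    # ... existing attributes and methods of the SDK class ...

    def generate_content(
        self,
        contents,
        *,
        generation_config=None,
        safety_settings=None,
        extra_headers: Optional[dict[str, str]] = None,  # proposed new parameter
        **kwargs,
    ):
        # Build the request as before (assumed helper name).
        request = self._prepare_request(
            contents=contents,
            generation_config=generation_config,
            safety_settings=safety_settings,
        )
        # Convert the per-request headers into the (key, value) metadata tuples
        # that google-api-core client methods accept alongside each call.
        # (Merging with any globally configured default_metadata is left out
        # of this sketch.)
        per_request_metadata = list(extra_headers.items()) if extra_headers else []
        return self._client.generate_content(
            request,
            metadata=per_request_metadata,
            **kwargs,
        )
```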
This would provide a more consistent experience across different LLM client libraries and make it easier to integrate with proxy services.