Bug: ai-proxy does not correctly escape AWS Bedrock inference profile ARN in signature and request #14309
Comments
I have attempted a fix in PR #14310. While going through the source code I noticed that the plugin allows for setting and overriding the `upstream_path`. Another possible way would be to support the inference profile by recommending an `upstream_path` override, as sketched below.
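A hypothetical sketch of that alternative, assuming the plugin exposes a `model.options.upstream_path` field and a local Admin API; the route name, ARN, and other values are placeholders, not a verified configuration:

```python
# Hypothetical workaround sketch (assumed field names, not a verified config):
# point ai-proxy at a pre-escaped Converse path via model.options.upstream_path.
from urllib.parse import quote

import requests

# Placeholder inference-profile ARN.
arn = "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/abc123"
escaped_arn = quote(arn, safe=":")  # encode '/' as %2F, keep ':' intact

plugin = {
    "name": "ai-proxy",
    "config": {
        "route_type": "llm/v1/chat",
        "model": {
            "provider": "bedrock",
            "name": "placeholder-model-name",
            "options": {"upstream_path": f"/model/{escaped_arn}/converse"},
        },
    },
}

# Attach the plugin to an existing route via the Admin API.
resp = requests.post("http://localhost:8001/routes/my-route/plugins", json=plugin)
resp.raise_for_status()
```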
Hey @muscionig, thanks so much for this. I actually hadn't seen that you can just specify the model ARN directly for Bedrock Converse; I thought it was just for InvokeModel, so I didn't check it. But yeah, I had this same problem with `upstream_path`.
I will either ask if your fix can be merged directly, or I will bundle it into the next 3.10 main and tag you as a contributor (because we have a giant PR of fixes coming already, it's quicker).
Hi @tysoekong, thanks for the update and for considering my changes! I'm totally fine with the fix being integrated into a different PR and with the collaborator approach. Is the PR public? I'd love to test it in the meantime to ensure everything works as expected.
Hey @muscionig, I'm not sure what you've found is entirely the problem. When I have fixed URL escaping in just the profile ARN, I still get the "no candidates received" error. Adding a pre-function to log the output, Bedrock is returning:
`{"Output":{"__type":"com.amazon.coral.service#UnknownOperationException"},"Version":"1.0"}`
I think that this SHOULD work even without any of the escaping (i.e. setting `model.name` to the unescaped ARN).
Hi @ttyS0e, it's definitely possible there's more to this than what I've found. Here's how I debugged it:
I also found a useful example of how the ARN should be escaped in the request URL. My hypothesis is that Kong is trying to route after the unescaped `/` in the ARN.
EDIT: looking at your logs, I would try to escape the remaining `/`, as in the sketch below.
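A minimal sketch of that escaping in Python; the ARN is a placeholder, and `quote(..., safe=":")` encodes only the slash:

```python
from urllib.parse import quote

# Placeholder inference-profile ARN.
arn = "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/abc123"

# Encode the '/' so it is not treated as a path separator; keep the ':' intact.
model_id = quote(arn, safe=":")
# -> arn:aws:bedrock:us-east-1:123456789012:application-inference-profile%2Fabc123

# The Converse path would then be:
path = f"/model/{model_id}/converse"
print(path)
```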
Is there an existing issue for this?
Kong version (`$ kong version`)
Kong 3.9.0
Current Behavior
When using AWS Bedrock inference profiles, which are formatted as:

`arn:aws:bedrock:us-east-1:<account_id>:application-inference-profile/<profile_id>`

`kong` is unable to route the request. I have experienced two behaviors:

- When `ai-proxy` is configured with the unescaped ARN, Kong fails to route the request.
- When `ai-proxy` is configured with the escaped ARN (`arn%3Aaws%3Abedrock%3Aus-east-1%3A<account_id>%3Aapplication-inference-profile%2F<profile_id>`), Kong fails with a `sigv4` signature error.

Expected Behavior
Kong should correctly format and escape the ARN when generating the SigV4 signature and constructing the request URL to match AWS's expected format. The `Converse` API requires the URL to take the form `POST /model/<model_id>/converse`, with the `/` inside an ARN-based model ID percent-encoded as `%2F` so it is not treated as a path separator.
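As an illustration (not Kong's implementation), here is a botocore-based sketch of signing such a request, where the signed path and the sent path both use the escaped model ID. The region, account ID, profile ID, and request body are placeholders:

```python
# A sketch (not Kong's code) of SigV4-signing a Bedrock Converse request.
from urllib.parse import quote

import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"
arn = "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/abc123"
model_id = quote(arn, safe=":")  # '/' -> %2F, ':' left as-is

url = f"https://bedrock-runtime.{region}.amazonaws.com/model/{model_id}/converse"
request = AWSRequest(
    method="POST",
    url=url,
    data=b'{"messages": []}',
    headers={"Content-Type": "application/json"},
)

# The canonical URI used in the signature must match the escaped path above;
# signing one form and sending the other yields a signature mismatch.
credentials = botocore.session.Session().get_credentials()
SigV4Auth(credentials, "bedrock", region).add_auth(request)
print(request.headers["Authorization"])
```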
Steps To Reproduce

1. Create the plugin with the unescaped `model_name` ARN: the request is not routed.
2. Create the plugin with the escaped `model_name` ARN: the request fails with a `sigv4` error.

Anything else?
I tested the inference profile with the `aws cli` to rule out authentication errors, and the profile works using the converse endpoint. Kong's logs were collected from `/dev/stdout`.
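For reference, a rough Python (boto3) equivalent of that CLI check, hitting Bedrock directly rather than through Kong; the ARN and region are placeholders and valid AWS credentials are assumed:

```python
# Validate the inference profile against Bedrock's Converse API directly
# (the boto3 SDK escapes the ARN in the URL path itself).
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
arn = "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/abc123"

response = client.converse(
    modelId=arn,
    messages=[{"role": "user", "content": [{"text": "ping"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```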