403 errors when delivering logs to AWS OpenSearch Serverless after a successful period #228
Is there a more detailed error behind this 403? In logstash-output-opensearch 2.0.2 I fixed #217, which would have produced this kind of error. Let's start by making sure the plugin version is updated? |
We're getting the same errors -- this and the cluster UUID error in opensearch-project/opensearch-build#186 We ensured that we're using the latest (2.0.2) version of the plugin. Has anyone got logstash-output-opensearch working against OpenSearch Serverless? What I'm trying to determine is if the problem is on our end (misconfiguration of OSS data access or network policies, etc) or if there's a problem with the plugin. Thanks! |
@chadmyers, a 403 status code indicates that the user making the request (as identified by the access key/secret key provided) does not have permission to access the resource. Enabling logging as @dblock suggested may yield some additional information, but it appears that your user does not have permission to write documents to the AOSS collection. |
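For readers landing here: the "enable logging" suggestion above refers to raising Logstash's log level so the opensearch output plugin logs request/response detail for the failing calls. A minimal way to do that (assuming a standard Logstash install; the file location may differ on your system) is in logstash.yml:

```yaml
# logstash.yml: raise the global log level so the opensearch output
# plugin logs request/response detail for the failing bulk calls
log.level: debug
```

The same thing can be done for a single run with `bin/logstash --log.level=debug -f <pipeline.conf>`.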
@dlvenable This bug says that it works for a while, then stops with a 403. So the user has permissions. Similarly, in opensearch-project/opensearch-build#207 we also had a 403, but the problem was not permissions; it was incorrect signing in logstash-output-opensearch. So we need to know what the actual server-side error is that causes the first 403 and what data is actually being sent (to see if we can reproduce by inserting that one record). |
Example debug log of a 403:
|
This is another 403 error we get when logstash is starting up - it tries to create the index template.
|
I'm following this Workshop from AWS to try to create a repro scenario of sorts, or at least to isolate my environment out of things as much as possible. It uses a script to generate a fake httpd.log, which the logstash-output-opensearch plugin then spews into OpenSearch Serverless. I went through this and I'm still getting 403s from my OSS collection. I edited my Data Access Policy to grant […]. Here's my conf file:
(NOTE: when I set legacy_template => false, I get an error about […].) And here's the logstash log output:
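The conf file and log output above did not survive page extraction. As a rough sketch only (the endpoint, region, and index name are placeholders, and the option names should be verified against the logstash-output-opensearch documentation for your plugin version), a Serverless output block generally looks like:

```conf
output {
  opensearch {
    hosts => ["https://<collection-id>.us-east-1.aoss.amazonaws.com:443"]
    index => "httpd-logs"
    auth_type => {
      type => 'aws_iam'                # sign requests with SigV4
      region => 'us-east-1'
      service_name => 'aoss'           # Serverless requests sign as 'aoss', not 'es'
    }
    default_server_major_version => 2  # Serverless does not expose a version endpoint
    legacy_template => false
  }
}
```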
|
I added this inline policy to the IAM instance profile of my Cloud9 instance:
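The inline policy body was lost in extraction. For context, a typical IAM policy granting data-plane access to a Serverless collection (the account ID, region, and collection ID here are placeholders) grants `aoss:APIAccessAll` on the collection:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "aoss:APIAccessAll",
      "Resource": "arn:aws:aoss:us-east-1:123456789012:collection/<collection-id>"
    }
  ]
}
```

Note that an IAM policy alone is not sufficient for Serverless: the same principal must also be listed in the collection's data access policy.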
I waited a few minutes, then fired up logstash again, to no avail. I still get 403s on […]. |
This is what I have in my Data Access Policy:
I don't know how I can open that up any more. The only other thing I can think of is that maybe the plugin isn't using the AWS IAM instance credentials of the EC2 instance this is running on? |
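The policy document itself was lost in extraction. For comparison, a wide-open OpenSearch Serverless data access policy (the collection name and role ARN are placeholders) is shaped like this:

```json
[
  {
    "Description": "Allow the instance role full data access",
    "Rules": [
      {
        "ResourceType": "collection",
        "Resource": ["collection/<collection-name>"],
        "Permission": ["aoss:*"]
      },
      {
        "ResourceType": "index",
        "Resource": ["index/<collection-name>/*"],
        "Permission": ["aoss:*"]
      }
    ],
    "Principal": ["arn:aws:iam::123456789012:role/<instance-role>"]
  }
]
```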
@dblock @dlvenable Do either of you (or anyone) know someone who's gotten the logstash-output-opensearch plugin working with OSS? Because as far as I can tell, I've got the OSS data access policy open wide, but I can't get the output plugin to return anything other than 403. The only other thing I can think of is that it's a problem with Sigv4 somehow. Do you think it needs the IAM key and secret? |
I don't know of anyone who has tried ATM, but I will ask around. Since I'm familiar with the codebase, I will take a look, but I can't promise to do it fast. |
We definitely have gotten logstash working with OpenSearch Serverless. Here is a blog post from folks on the team: https://aws.amazon.com/blogs/big-data/migrate-your-indexes-to-amazon-opensearch-serverless-with-logstash/ |
@kkmr Thank you. There's an AWS Workshop that's very similar to that one, which I followed with no luck. The only things I can see that are different are:
Also, that was from January and I know they made changes to the permission scheme in OSS in May. I'm wondering if maybe the output plugin doesn't work on the latest OSS and/or there are additional requirements for IAM policies or data access policies on the OSS side now vs. January. Is your setup still working or was that a one-time migration that you did? |
Thank you! I also had some thoughts about some tests I can run to try to isolate whether there's an issue with the plugin vs. an issue with OSS data access policies. I'll see if I can try to isolate the problem. |
I'm ashamed/happy to admit that when doing the AWS workshop with Cloud9, the thing that was tripping me up was the "Use Temporary Credentials" default. If you uncheck that, Cloud9 uses the EC2 instance profile of the Cloud9 server, which is how I had configured the Data Access Policy for OSS. So I was able to get that workshop working, which makes me think that the problem was originally my Data Access Policy, compounded by the red herring of Cloud9 temporary credentials. I'm going to go back to my original/primary logstash environment and see if I can get it all working now that I've proven it CAN work and that both the plugin and OSS work. I'll report back with specifics. |
OK, it's now working in my primary logstash environment. I flailed around a lot yesterday, so I'm not sure which particular change fixed it, but I suspect it came down to two things:
FYI - I still get the error about the cluster UUID, but it seems harmless. My Data Access Policy had this rule:
FYI - Those "Cloud9" roles were for my Cloud9 experiment and aren't required. Also, I think only one of them is required, but I'm not sure which, so I added both and it worked. You can remove the Cloud9 entries if you're not using Cloud9. And then the […]
So I think if you're getting 403 errors, there's something wrong with, or missing from, your Data Access Policy. |
I'm glad you got it working @chadmyers. Thanks @kkmr for that link.
|
I will try again today and report back. Please hold off on closing until then. Thanks
I can't easily test this, as the […]
Would it be possible for someone on the project to publish an updated container image containing the latest version of the plugin on docker hub @dblock ? |
Steven Cherry above asked us to hold this open while he does some more testing, but I think at least we have proven that the logstash-output-opensearch plugin (at least v2.0.2) does/can work against OpenSearch Serverless as of 13-SEP-2023. I wanted to document my findings in this GH issue in case someone in the future hits the same 403 problems I was having (I saw a few other re:Post and Stack Overflow posts about this, so I don't think I'm the only one who struggled), so they know that it is possible and that, most likely, you just have to tweak your DAP.
I can help, yes. I'm thinking about what would be useful here -- maybe mentioning the cluster UUID known issue and the legacy_templates thing? And also a bit about "If you're getting 403 errors on calls to […]". |
Yes please |
I don't know how to do that, so I opened #230. I think you should be able to update it inside the docker container too to test, but I am not sure how it's packaged in there and what that takes. If you do figure it out, please do post it here. |
@dblock I managed to try using version 2.0.2
I also tried using static credentials,
But in both cases I still have the same problem I started with.
Still no further forward I'm afraid. |
@steven-cherry OK, can you start by locating the first error and extracting the log (hopefully at debug level) around it? |
@steven-cherry Did you give up on this, or ever make it work? I fixed the harmless uuid error in #237. |
@dblock no I gave up in the end |
Describe the bug
I'm attempting to deliver logs to AWS OpenSearch Serverless. I'm running logstash as a deployment on AWS EKS, and I'm attempting to use the IAM role attached to the EKS EC2 node that runs the associated pod to authenticate with OpenSearch Serverless.
When I start the deployment/pod, it successfully delivers messages into OpenSearch Serverless; however, after a short period (20 seconds to 5 minutes), logs fail to be delivered to OpenSearch Serverless with 403 errors, e.g.
[2023-08-31T11:18:47,585][ERROR][logstash.outputs.opensearch][main][43ac7955e25a1efb882bfe67309ff3cf447bfc3b85dc94a4119f84872473b07b] Encountered a retryable error (will retry with exponential backoff) {:code=>403, :url=>"https://REDACTED.eu-west-1.aoss.amazonaws.com:443/_bulk", :content_length=>52619}
If I stop the deployment/pod and start it again, the process repeats itself: logs are delivered for a short period, after which they are rejected with 403 errors.
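One way to narrow down this class of failure (an editor's suggestion, not something the reporter confirmed): since writes succeed at first and then start returning 403 after 20 seconds to 5 minutes, it's worth checking which principal the node's default credential chain actually resolves to, and whether that exact principal is listed in the collection's data access policy:

```sh
# Print the ARN of the principal the default AWS credential chain resolves to
# on this node; that ARN (or its underlying role) must appear as a Principal
# in the OpenSearch Serverless data access policy for bulk writes to succeed.
aws sts get-caller-identity --query Arn --output text
```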
My output config is as follows:
To Reproduce
See above.
Expected behavior
Logs should be delivered to OpenSearch Serverless consistently.
Plugins
none
Screenshots
none
Host/Environment (please complete the following information):
Additional context
none