
ECR deployment auth issues from Docker #5163

Closed
aradwyr opened this issue Apr 28, 2020 · 10 comments
Labels
needs-triage This issue or PR still needs to be triaged.

Comments

@aradwyr

aradwyr commented Apr 28, 2020

None of the solutions from #2875 are working for me; so far I've tried various iterations of the aws ecr get-login-password command in a Docker image.

For example:

 aws ecr get-login-password --region us-west-2 | docker login --password-stdin --username AWS "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.us-west-2.amazonaws.com"

Results in:

Unable to locate credentials. You can configure credentials by running "aws configure".
Error: Cannot perform an interactive login from a non TTY device

The vast majority of the time I'm encountering the error no basic auth credentials, but only at the last step, when trying to push the image:

bitbucket-pipelines.yml file:

image: node:8-alpine

pipelines:
  default:
    - step:
        name: Build and push ECR image
        services:
          - docker
        script:
          - npm install
          - docker pull amazon/aws-cli:latest
          - alias aws='docker run --rm amazon/aws-cli'
          - eval $(aws ecr get-login-password --region us-west-2 | docker login --password-stdin --username AWS "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.us-west-2.amazonaws.com")
          - docker build -t $IMAGE_NAME .
          - docker tag <ecr_repo>:<tag> <image_uri>
          - docker push <image_uri>

I've even included aws configure in the process above, but the no basic auth credentials error remains:

           - aws configure set aws_access_key_id "${AWS_ACCESS_KEY_ID}"
           - aws configure set aws_secret_access_key "${AWS_SECRET_ACCESS_KEY}"

Version:

aws --version
aws-cli/2.0.10 Python/3.7.3 Linux/4.19.95-flatcar botocore/2.0.0dev14
@aradwyr aradwyr added the needs-triage This issue or PR still needs to be triaged. label Apr 28, 2020
@rpnguyen
Contributor

Unable to locate credentials. You can configure credentials by running "aws configure".

alias aws='docker run --rm amazon/aws-cli'

Are credentials being passed to the aws-cli docker container? The CLI docs share a good way to do this via volume mounts (-v ~/.aws:/root/.aws). The credentials could also be passed through as environment variables (-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY).

@aradwyr
Author

aradwyr commented Apr 29, 2020

I've tried running with root access but always ran into TTY errors:

- alias aws='docker run --rm -ti -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli'
- docker run --rm -it amazon/aws-cli --version
#- aws --version

Both of the version checks return the following error: 
the input device is not a TTY

So I then removed the -ti flags to work around the TTY error and ran:

- alias aws='docker run --rm -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli'
- aws --version 

docker: Error response from daemon: authorization denied by plugin pipelines: -v only supports $BITBUCKET_CLONE_DIR and its subdirectories.
See 'docker run --help'.

Why wouldn't the aws configure set command work? Isn't that setting them as env vars?

@rpnguyen
Contributor

Why wouldn't the aws configure set command work? Isn't that setting them as env vars?

This method of using the AWS CLI runs it in a container. Because it is run in a container, by default the CLI can't access the host file system, which includes configuration and credentials.

Running docker run --rm amazon/aws-cli configure set (without volume mounts) modifies the config file in the container's file system but not the host file system and so those changes are lost after the container exits. You can try this out by running:

docker run --rm amazon/aws-cli configure set default.region ap-southeast-2
docker run --rm amazon/aws-cli configure list

-v only supports $BITBUCKET_CLONE_DIR and its subdirectories.

Bitbucket pipelines doesn't support mounting arbitrary volumes. One alternative is to pass environment variables through to the container:

docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY amazon/aws-cli ...
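Putting this advice together in the pipeline config might look something like the sketch below. This is an untested sketch, not a verified pipeline: the region, account placeholders, and the <ecr_repo>/<tag>/<image_uri> values are carried over from the earlier snippets and are assumptions, and it assumes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are defined as repository variables.

```yaml
image: node:8-alpine

pipelines:
  default:
    - step:
        name: Build and push ECR image
        services:
          - docker
        script:
          - npm install
          # Forward the repository variables by NAME (no $); docker resolves the values.
          - alias aws='docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY amazon/aws-cli'
          - aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com
          - docker build -t <ecr_repo>:<tag> .
          - docker tag <ecr_repo>:<tag> <image_uri>
          - docker push <image_uri>
```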

@aradwyr
Author

aradwyr commented Apr 29, 2020

So based on the docs, I should be able to run this command

aws ecr get-login-password \
    --region <region> \
| docker login \
    --username AWS \
    --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com

so long as I swap aws with docker run --rm amazon/aws-cli?

@rpnguyen
Contributor

Yes. For the most part, the CLI binary can be replaced with the containerized CLI so long as credentials and configuration are correctly passed through to the container via either volume mounts (-v) or environment variables (-e). More info in these docs.

Since Bitbucket Pipelines doesn't support volume mounts, you'll need to use environment variables to pass the credentials and configuration to the container. Assuming that the environment variables ${AWS_ACCESS_KEY_ID} and ${AWS_SECRET_ACCESS_KEY} are available, it might look something like this:

alias aws='docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY amazon/aws-cli'
aws ecr get-login-password \
    --region <region> \
| docker login \
    --username AWS \
    --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com

@aradwyr
Author

aradwyr commented May 5, 2020

- alias aws='docker run --rm -e $AWS_ACCESS_KEY_ID -e $AWS_SECRET_ACCESS_KEY amazon/aws-cli'

- aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <aws-acct>.dkr.ecr.<region>.amazonaws.com

After running the above, I got this error:

/opt/atlassian/pipelines/agent/tmp/shellScript3944771105370363657.sh: line 13: can't open region: no such file
Unable to locate credentials. You can configure credentials by running "aws configure".

@rpnguyen
Contributor

rpnguyen commented May 6, 2020

Remove the dollar signs from the docker run command. See docker run docs.

alias aws='docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY amazon/aws-cli'
aws ecr get-login-password \
    --region <region> \
| docker login \
    --username AWS \
    --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
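A quick way to see why the dollar signs break things, independent of docker: the shell expands $AWS_ACCESS_KEY_ID to the variable's value before docker ever sees it, so -e gets the secret value instead of the variable name. A minimal sketch (AKIAEXAMPLE is a made-up placeholder value):

```shell
# The shell expands '$AWS_ACCESS_KEY_ID' before docker runs, so docker would
# be told to forward a variable literally named after the secret's value.
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
wrong="-e $AWS_ACCESS_KEY_ID"   # shell substitutes the VALUE
right="-e AWS_ACCESS_KEY_ID"    # passes the NAME; docker resolves the value itself
echo "$wrong"   # -e AKIAEXAMPLE
echo "$right"   # -e AWS_ACCESS_KEY_ID
```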

@aradwyr
Author

aradwyr commented May 6, 2020

Oh those are repository variables set and encrypted in Bitbucket: https://confluence.atlassian.com/bitbucket/variables-in-pipelines-794502608.html

Maybe this is more of a limitation of Bitbucket Pipelines than an aws-cli issue (I'm not entirely sure). There's also this thread and this too, but no luck with any of those solutions.

Here's the source code for the pipe: https://bitbucket.org/atlassian/aws-ecr-push-image/src/master/

@rpnguyen
Contributor

I was unable to reproduce the issue: https://bitbucket.org/rpnguyen/aws-cli-5163/addon/pipelines/home#!/results/1. It's most likely a bug in the pipeline script or the CLI configuration.

I recommend that this issue be closed unless new debug logs turn up which indicate a CLI issue.

@aradwyr aradwyr closed this as completed May 13, 2020
@hamza-saqib

(quotes the original issue report in full)
One possible issue: the environment variables you've defined in GitLab/GitHub may be scoped to protected branches only, while your YAML file is running on a non-protected branch.
