Upload to S3-compatible storage (Cloudflare R2) fails with getUTCFullYear error #4501

Closed
stefanpoensgen opened this issue Apr 17, 2025 · 7 comments

@stefanpoensgen
stefanpoensgen commented Apr 17, 2025

Since upgrading to [email protected], I'm encountering an error when uploading the test results to Cloudflare R2 (S3-compatible). The performance tests complete successfully, but the upload phase fails with the following error:

TypeError: date.getUTCFullYear is not a function
    at dateToUtcString (/usr/src/app/node_modules/@smithy/smithy-client/dist-cjs/index.js:583:21)
    at headers (/usr/src/app/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:2843:129)
    ...
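The failure mode can be reproduced in isolation. Below is a minimal sketch, assuming the SDK was handed an integer where it expects a `Date`; `dateToUtcString` here is a simplified stand-in for the `@smithy/smithy-client` helper in the stack trace, not the library's actual code:

```javascript
// Simplified stand-in for the @smithy/smithy-client helper that
// formats a Date for an HTTP header such as Expires.
function dateToUtcString(date) {
  return date.getUTCFullYear().toString(); // throws if date is not a Date
}

try {
  // An Expires value passed as a plain integer (e.g. seconds of TTL)
  // has no getUTCFullYear method, so this throws the TypeError above.
  dateToUtcString(31536000);
} catch (e) {
  console.log(e instanceof TypeError); // true
  console.log(e.message);
}
```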

Any hints or ideas would be greatly appreciated — thanks a lot for your work and support!

@soulgalore
Member

Hi @stefanpoensgen, in that version we upgraded to @aws-sdk/client-s3 v3. I run MinIO myself and have seen people get it working with S3, so I wonder if there's a setting you use that triggers that error? Can you show me which S3 settings you use?

@stefanpoensgen
Author

Hi @soulgalore,

Thank you for your quick reply.
I'm calling sitespeed.io from within a GitHub Action like this:

docker run -v "$(pwd)/.github/workflows/sitespeed:/config" sitespeedio/sitespeed.io:latest \
  --config /config/config.json /config/urls.txt \
  --resultBaseURL ${{ env.CLOUDFLARE_R2_DOMAIN }} \
  --s3.key ${{ env.CLOUDFLARE_R2_KEY }} \
  --s3.secret ${{ env.CLOUDFLARE_R2_SECRET }} \
  --s3.bucketname ${{ env.CLOUDFLARE_R2_BUCKET_PUBLIC }} \
  --s3.endpoint ${{ env.CLOUDFLARE_R2_ENDPOINT }} \
  --s3.region auto \
  --browsertime.requestheader ${{ env.CF_ACCESS_CLIENT_ID }} \
  --graphite.host ${{ env.GRAPHITE_HOST }}

@soulgalore
Member

Ok thanks, that looks ok, except I haven't seen "auto" as a region before, but maybe it works? There were two breaking changes when we upgraded: one is that you need to set a region, and the other is that the endpoint needs to start with http/https. https://github.com/sitespeedio/sitespeed.io/blob/main/CHANGELOG.md#breaking-2

@stefanpoensgen
Author

Region auto is a Cloudflare thing: https://developers.cloudflare.com/r2/api/s3/api/#bucket-region
With us-east-1 the error is the same. The endpoint looks like https://<CF_ID>.r2.cloudflarestorage.com

@soulgalore
Member

Ok, it looks like "Expires" has changed between versions. In the old version you could set an int or a date; in the new version it only works with a Date: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/s3/command/PutObjectCommand/

However, we don't set Expires automatically in sitespeed.io — are you sure you don't pass that parameter?

I can make a fix later this week that checks if it's an integer and converts it (the old style looked like --s3.params.Expires=31536000 to let the objects live for one year).
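The conversion could be sketched roughly like this (a hypothetical illustration, not sitespeed.io's actual fix): treat a bare integer as "seconds from now" and turn it into the `Date` that @aws-sdk/client-s3 v3 expects.

```javascript
// Hypothetical helper: normalize an Expires value before handing it to
// PutObjectCommand. An integer (or all-digit string) is treated as a TTL
// in seconds; anything else is passed through unchanged.
function normalizeExpires(expires) {
  if (typeof expires === 'number' || /^\d+$/.test(expires)) {
    // e.g. --s3.params.Expires=31536000 means "expire one year from now"
    return new Date(Date.now() + Number(expires) * 1000);
  }
  return expires; // already a Date (or a parseable date string)
}
```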

@soulgalore
Member

I merged a fix to main for the case where you use s3.params.Expires; I will release it later this weekend.

@stefanpoensgen
Author

Thank you, it's working now!
