Another Memory Leak in next/image #44685

Open
jaunruh opened this issue Jan 7, 2023 · 32 comments
Labels
bug Issue was opened via the bug report template. linear: next Confirmed issue that is tracked by the Next.js team.

Comments

@jaunruh

jaunruh commented Jan 7, 2023

Verify canary release

  • I verified that the issue exists in the latest Next.js canary release

Provide environment information

Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 22.2.0: Fri Nov 11 02:03:51 PST 2022; root:xnu-8792.61.2~4/RELEASE_ARM64_T6000
Binaries:
Node: 19.0.1
npm: 9.2.0
Yarn: 1.22.19
pnpm: 7.21.0
Relevant packages:
next: 13.1.1
eslint-config-next: 13.1.1
react: 18.2.0
react-dom: 18.2.0

Which area(s) of Next.js are affected? (leave empty if unsure)

Image optimization (next/image, next/legacy/image)

Link to the code that reproduces this issue

https://github.com/jaunruh/next-image-test

To Reproduce

Set up the Docker image:

  • docker build --no-cache -t next-test .
  • docker run -p 3000:3000 --name next-test next-test

Surf the app:

  • open browser at localhost:3000
  • scroll the home page and the project pages randomly
  • use the Resource usage extension to monitor the memory usage
  • see the memory only increase and never decrease

Describe the Bug

The app's memory usage only ever increases and never really decreases. In Docker I have easily passed 1 GB of memory usage.
The app was run on DigitalOcean as a bare Node.js app and as a Docker container, both locally and on DigitalOcean.
The memory usage on DigitalOcean (1 CPU, 512 MB memory) looks as follows, but the behavior can also be reproduced with Docker:

[screenshot: memory usage graph on DigitalOcean]

In the provided repo there are also commented-out image tags. As soon as these are used, the problem disappears.

Expected Behavior

No continuous memory increase.

Which browser are you using? (if relevant)

All major browsers

How are you deploying your application? (if relevant)

docker run and build

NEXT-2023

@jaunruh jaunruh added the bug Issue was opened via the bug report template. label Jan 7, 2023
@remvn

remvn commented Jan 10, 2023

I tested your repo and had no memory leak when running it as a bare Node.js server. However, when I ran it in Docker (using WSL2 and Docker on Windows), I encountered a memory leak. It could be because of this issue: wsl 2 memory leak issue (I experienced a massive memory leak of around 11 GB).

Memory spiked a little (your images are quite big and Next.js optimizes them before sending them to the client):

[screenshot: memory spiking slightly]

Memory did decrease:

[screenshot: memory decreasing]

@jaunruh
Author

jaunruh commented Jan 11, 2023

So, as mentioned above, I was using DigitalOcean Apps or Docker for deployment. It should be a fair assumption that DigitalOcean Apps use Docker to deploy bare Node.js repos. Locally on macOS there does not seem to be any memory leak. I have not tried this on a Windows machine. I have updated the repo to use Ubuntu instead of Debian, but the issue persists, so it does not seem to be distro specific.

@Mikroser

Mikroser commented Feb 2, 2023

Try setting this in next.config.js. I hope it solves the problem:

module.exports = {
  images: {
    unoptimized: true,
  },
}

@jaunruh
Author

jaunruh commented Feb 2, 2023

I have seen this option and am going to try it in my repo. But even if it solves the memory leak, it creates the issue of unoptimized images; at that point I could just use a plain <img/> tag instead, with no need for next/image anymore.

@jaunruh
Author

jaunruh commented Feb 21, 2023

[screenshot: memory usage with unoptimized images]

Using unoptimized images does generally seem to fix the memory leak. But then the images are unoptimized... so I still think this is a bug that needs to be fixed.

@khoramshahy

khoramshahy commented Mar 7, 2023

I have the same issue when using the Next.js Image component. I am using Next.js 13.1.1 and the Docker container gets killed.
My question is: if the solution is setting images to unoptimized, is there still any benefit of using the Next.js Image component over a regular HTML img?

@jaunruh
Author

jaunruh commented Mar 27, 2023

Persists in next 13.2.4.

@caprica

caprica commented May 13, 2023

I am seeing similar behaviour in 13.4.1.

I deployed a small website to DigitalOcean that uses RSC and Image. There are maybe 50 mid-size images in the entire website.

In the screenshot, everything after the black line is with image optimisation disabled in the config. The other dips in the graph before that point are due to server restarts.

[screenshot: memory usage graph]

As per a warning in the Next.js documentation, I am using jemalloc in my deployment.

@swedpaul

swedpaul commented Jul 4, 2023

[screenshot: memory usage graph]

Persists in 13.4.7.

Three solutions worked:

  1. Add to next.config.js:

     images: {
       unoptimized: true
     },

  2. Change all Next.js <Image> to <img>
  3. Add the unoptimized prop to every Next.js <Image> (a minimal sketch of this option follows below)
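
For reference, a minimal sketch of option 3 (the per-image opt-out). The component name, file name, and dimensions below are placeholders, not taken from any repo in this thread:

import Image from "next/image";

// Hypothetical component illustrating the per-image opt-out.
export default function Hero() {
  return (
    <Image
      src="/hero.jpg"   // served as-is from /public
      alt="Hero"
      width={1200}
      height={630}
      unoptimized       // skips /_next/image, so the optimizer never processes this file
    />
  );
}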

@alan345

alan345 commented Nov 9, 2023

Same issue for us.
[screenshot: memory usage graph]

We are using AWS Fargate (ECR, ECS), and yes, it is using a Docker image.

This solves it, but the images are not optimized anymore:

images: {
  unoptimized: true
},

Reading the comments here, I really think the issue is with Docker and next/image.

@sergio-milu

Same issue here with Next.js 13.5.2; we are running out of memory in a Docker container with 1 GB of RAM (usual RAM usage is between 10% and 20%).

@styfle styfle added the linear: next Confirmed issue that is tracked by the Next.js team. label Jan 8, 2024
@rusakovic

One year has passed and this is still open. I have the same image memory leak, but for Open Graph images.

@styfle
Member

styfle commented Feb 7, 2024

The memory leak seems to only be present when running in ubuntu:jammy base image (glibc).

I don't see the memory leak when using the recommended node:18-alpine base image (musl).

This is likely related to sharp and documented here:

@rusakovic

The memory leak seems to only be present when running in ubuntu:jammy base image (glibc).

I don't see the memory leak when using the recommended node:18-alpine base image (musl).

This is likely related to sharp and documented here:

Thank you for your response!
I don't use a Docker preset in my project, but I use Coolify (coolify.io), which certainly uses Docker.
My project started on Node 20.
I have also followed this instruction: https://sharp.pixelplumbing.com/install#linux-memory-allocator

Maybe I should try the standalone output with Docker, since I self-host.

@CeamKrier

The memory leak seems to only be present when running in ubuntu:jammy base image (glibc).

I don't see the memory leak when using the recommended node:18-alpine base image (musl).

This is likely related to sharp and documented here:

Downgrading Node to 18-alpine resolved the issue for me. Thanks!

@Jee-vim

Jee-vim commented Mar 1, 2024

Same here. I tried downgrading from 20 to 18-alpine and it works.

gillwong added a commit to GTD-IT-XXIV/gtd-xxvi-website that referenced this issue Mar 12, 2024
Image optimization causes high memory usage, [source](vercel/next.js#44685).
Temporarily disabling this until a fix is found. Manually compress large
images using [Sharp CLI](https://github.com/GTD-IT-XXIV/gtd-xxvi-website?tab=readme-ov-file#sharp-cli).
@art-alexeyenko

The memory leak persists on Node 18 alpine. Here's the scenario for a new Next.js app:

  1. Create a new Next.js app (latest)
  2. Take a large image (~10 MB in my case) and put it into the public folder
  3. Render that image with next/image on the home page (a minimal sketch of such a page follows after this list)
  4. Deploy this to Docker with node:18.20-alpine
  5. Open the home page: this causes a memory spike to ~2 GB. After a bit it falls to ~1 GB and stays there
  6. Open the page in an incognito tab and keep refreshing both the main tab and the incognito tab: this causes a second memory spike, and memory usage remains that way.

A handy demo in case it's needed:
https://github.com/art-alexeyenko/next-image-oom
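
As a rough illustration of steps 2 and 3 (the sketch mentioned in the list), assuming the App Router; the file name and dimensions are placeholders, not taken from the demo repo:

// app/page.jsx - minimal sketch; "large-photo.jpg" stands in for any ~10 MB asset in /public.
import Image from "next/image";

export default function Home() {
  return (
    <main>
      {/* With the default loader, requesting this page hits /_next/image,
          which decodes and re-encodes the large source file on the server. */}
      <Image src="/large-photo.jpg" alt="Large test image" width={4000} height={3000} />
    </main>
  );
}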

And some memory usage trends from my local tests with Docker Desktop:
[screenshot: memory usage trends in Docker Desktop]

@styfle
Member

styfle commented Apr 18, 2024

That repo is missing sharp.

You need to run `npm install sharp` for production.

styfle added a commit that referenced this issue Apr 25, 2024
…ependency (#63321)

## History

Previously, we added support for `squoosh` because it was a wasm
implementation that "just worked" on all platforms when running `next
dev` for the first time. However, it was slow so we always recommended
manually installing `sharp` for production use cases running `next
build` and `next start`.

Now that [`sharp` supports
webassembly](https://sharp.pixelplumbing.com/install#webassembly), we no
longer need to maintain `squoosh`, so it can be removed. We also don't
need to make the user install sharp manually because it can be installed
under `optionalDependencies`. I left it optional in case there was some
platform that still needed to manually install the wasm variant with
`npm install --cpu=wasm32 sharp` such as codesandbox/stackblitz (I don't
believe sharp has any fallback built in yet).

Since we can guarantee `sharp`, we can also remove `get-orientation` dep
and upgrade `image-size` dep.

I also moved an [existing `sharp`
test](#56674) into its own fixture
since it was unrelated to image optimization.

## Related Issues
- Fixes #41417
- Related #54670
- Related #54708
- Related #44804
- Related #48820
- Related #61810
- Related #61696
- Related #44685
- Closes #64362

## Breaking Change

This is a breaking change because newer versions of `sharp` no longer
support `yarn@1`.

- lovell/sharp#3750

The workaround is to install with `yarn --ignore-engines` flag.

Also note that Vercel no longer defaults to yarn when no lockfile is
found

- vercel/vercel#11131
- vercel/vercel#11242

Closes NEXT-2823
@Innei

Innei commented May 7, 2024

@styfle Maybe it has nothing to do with sharp. I'm using a standalone build, I have sharp installed on my system, and I have also defined the path to sharp in an env variable.

But I observed that memory still rises continuously, and then, within a few seconds, the system loses responsiveness. At that point the cloud platform monitoring showed CPU at 100%, memory at 100%, and IO read at 100%. Strangely, the system didn't trigger the OOM killer.

node -v
v22.0.0

[screenshot: cloud platform monitoring graphs]

@Innei

Innei commented May 7, 2024

Hi there, I just captured a memory dump. Here's what the memory stack looks like after the app has been running for a while.

[screenshot: heap snapshot after running for a while]

This was shortly after launch.

[screenshot: heap snapshot shortly after launch]

As you can see from the following dump, it's the ImageResponse-related modules that are leaking memory.

[screenshots: heap snapshot details showing ImageResponse-related retainers]

@Innei

Innei commented May 7, 2024

ImageResponse and FigmaImageResponse... does that mean @vercel/og causes the memory leak?

@Six6pounder

The memory leak seems to only be present when running in ubuntu:jammy base image (glibc).
I don't see the memory leak when using the recommended node:18-alpine base image (musl).
This is likely related to sharp and documented here:

Thank you for your response! I don't use a Docker preset in my project, but I use Coolify (coolify.io), which certainly uses Docker. My project started on Node 20. I have also followed this instruction: https://sharp.pixelplumbing.com/install#linux-memory-allocator

Maybe I should try the standalone output with Docker, since I self-host.

Did you find a solution for Coolify? The issue is still present today.

@chipcop106

Here is our memory chart when using Next 14.2.3. I confirmed it has a memory leak issue with opengraph-image and twitter-image. From 27/6, when we disabled this feature, the chart became normal.
[screenshot: memory usage chart]

Solution

Try not to use dynamically generated metadata images written in .tsx/.js/.ts (a sketch of that pattern follows below).
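
For readers unfamiliar with the pattern being disabled here: a dynamically generated metadata image is a route file such as app/opengraph-image.tsx that builds the image at request time with ImageResponse. A minimal sketch (the markup and text are illustrative) looks roughly like this; replacing the file with a static opengraph-image.png avoids running this code on the server:

// app/opengraph-image.jsx - illustrative sketch of a dynamic metadata image
import { ImageResponse } from "next/og";

export const size = { width: 1200, height: 630 };
export const contentType = "image/png";

export default function OpengraphImage() {
  // Each request renders this JSX to a PNG on the server via @vercel/og.
  return new ImageResponse(
    (
      <div
        style={{
          width: "100%",
          height: "100%",
          display: "flex",
          alignItems: "center",
          justifyContent: "center",
          fontSize: 64,
        }}
      >
        My Site
      </div>
    ),
    size
  );
}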

ForsakenHarmony pushed a commit that referenced this issue Aug 16, 2024
…ependency (#63321)
@carlos-dubon

carlos-dubon commented Jan 28, 2025

It is indeed an issue with next/og/@vercel/og. Since we started using it our memory has been growing steadily...

"next": "15.1.6",

@carlos-dubon

can we re-open #65451?

cc: @leerob (idk who to tag 😅)

@Innei

Innei commented Jan 28, 2025

can we re-open #65451?

cc: @leerob (idk who to tag 😅)

cc @huozhi

@yekta

yekta commented Feb 13, 2025

It is indeed an issue with next/og/@vercel/og. Since we started using it our memory has been growing steadily...

"next": "15.1.6",

I'm having the same issue. Using next/og and Next 15.1.6 on an ARM machine.

@carlos-dubon

@huozhi 🙏

@carlos-dubon

do we need to open another issue? @huozhi

@yekta

yekta commented Feb 19, 2025

For reference, I downgraded to 15.1.0 and the next/og memory leak issue seems to be gone.

Can we get some help here? @leerob

EDIT: Nope, the problem is back even on 15.1.0.

@VadimOnix

Hello! The issue that most people consider to be a leak is actually an accumulation of the file system cache on Linux.

If you examine the Next.js caching algorithm in detail, you will see that it does not remove files that have already "expired" based on the expiredAt value. The Linux operating system, to optimize file access, tries to keep metadata (inodes) in RAM.

We observe this behavior because the application is most often deployed in an isolated container in a production environment on a cluster with large memory resources, and there are no parallel processes putting pressure on the RAM; as a result, memory keeps filling up until it starts affecting performance.

I have created a script that automatically cleans up some of the cache files; you can give it a try (a minimal sketch of the same idea follows below).
Link: https://github.com/VadimOnix/next-image-cache-cleaner

If this helps you, I would be grateful for your star on my repository ✌️
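
For anyone who prefers not to pull in another dependency, here is a minimal sketch of the same idea under a simplifying assumption of my own: instead of parsing the expiry encoded in the cache entries (as the linked script does), it prunes entries under .next/cache/images purely by file age, so it does not depend on Next.js internals. The path and age threshold are assumptions to adjust for your deployment.

// prune-image-cache.js - hedged sketch: remove image-optimizer cache entries
// that have not been modified for MAX_AGE_MS. Run it periodically (cron, a
// sidecar container, or a small supervisor process).
const fs = require("fs/promises");
const path = require("path");

const CACHE_DIR = path.join(process.cwd(), ".next", "cache", "images");
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // one week; tune to your traffic

async function prune() {
  let entries;
  try {
    entries = await fs.readdir(CACHE_DIR);
  } catch {
    return; // cache directory does not exist yet, nothing to do
  }
  const now = Date.now();
  for (const entry of entries) {
    const entryPath = path.join(CACHE_DIR, entry);
    const stat = await fs.stat(entryPath);
    if (now - stat.mtimeMs > MAX_AGE_MS) {
      // Removing a stale entry simply forces re-optimization on the next request.
      await fs.rm(entryPath, { recursive: true, force: true });
    }
  }
}

prune().catch((err) => {
  console.error("image cache prune failed:", err);
  process.exit(1);
});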

@yekta

yekta commented Feb 23, 2025

Hello! The issue that most people consider to be a leak is actually an accumulation of the file system cache in Linux.

If you examine the Nextjs caching algorithm in detail, you will see that it does not remove files that have already "expired" based on the expiredAt value. The Linux operating system, to optimize file access, tries to keep metadata (inodes) in RAM.

We observe this behavior because most often our application is deployed in an isolated container in a production environment on a cluster with large memory resources, and there are no parallel processes putting pressure on the RAM; as a result, the memory tries to fill up until it starts affecting performance.

I have created a script that automatically cleans up some of the cache files—you can give it a try. Link: https://github.com/VadimOnix/next-image-cache-cleaner

If this helps you, I would be grateful for your star on my repository ✌️

I'm running it with unoptimized: true and the RAM usage still keeps rising. My guess is that OG images created by next/og aren't being cleaned up either. I don't think the source of the issue matters much, though: Next.js could let us configure this cleanup, since most people hosting with Node don't want their RAM usage to rise constantly, regardless of how much RAM they have. I'm not sure whether this is already configurable; I looked and couldn't find anything.
