[Cuda][Rocm?] memory leak in Ipc cache handling #12849
The cudaIPC mechanism does not allocate memory but instead maps the memory of another GPU into the local process memory space (for GPU access). Thus, while the memory use seems to increase, it is mostly an accounting quirk: the amount of accessible memory increases, but not the physical memory used. For performance reasons it is not uncommon to cache the IPC handles so that no call to cuIpcOpenMemHandle is needed for subsequent transfers involving the same buffer.
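For readers not familiar with the mechanism under discussion, the open/map/close cycle looks roughly like the sketch below. This is a minimal, hypothetical two-rank program (not Open MPI's or UCX's actual implementation), assuming one GPU per rank on the same node; it only illustrates why a handle that is opened but never closed keeps a peer mapping alive.

#include <cuda_runtime.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaSetDevice(rank); // assumption: one GPU per rank on this node
    if (rank == 0) {
        // Exporting side: owns the allocation and shares a handle to it.
        void *buf = nullptr;
        cudaMalloc(&buf, 1 << 20);
        cudaIpcMemHandle_t handle;
        cudaIpcGetMemHandle(&handle, buf); // no copy, just an exportable token
        MPI_Send(&handle, sizeof(handle), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD); // wait until the peer has unmapped
        cudaFree(buf);
    } else if (rank == 1) {
        // Importing side: maps the peer allocation into this process.
        cudaIpcMemHandle_t handle;
        MPI_Recv(&handle, sizeof(handle), MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        void *mapped = nullptr;
        // Opening the handle allocates no new device memory, but the mapping
        // holds driver/address-space resources until it is closed.
        cudaIpcOpenMemHandle(&mapped, handle, cudaIpcMemLazyEnablePeerAccess);
        // ... use `mapped` as the source/target of device-to-device copies ...
        cudaIpcCloseMemHandle(mapped); // skipping this is what piles mappings up
        MPI_Barrier(MPI_COMM_WORLD);
    } else {
        MPI_Barrier(MPI_COMM_WORLD); // any extra ranks just join the barrier
    }
    MPI_Finalize();
    return 0;
}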
I agree that this shouldn't be an issue in principle; however, when I check with nvidia-smi, the actual memory usage grows by an amount similar to that of the active cuIpc handles, and, oddly enough, disabling IPC removes the leak. To be precise, the leak was first observed using nvidia-smi; it just happens to be correlated in amount and location with cuIpc, and disabling IPC in Open MPI resolves the issue. So while I agree it shouldn't be the case, for some odd reason cuIpc seems to be related to the issue here. It is also important to note that the memory used for the communication is systematically freed after the communication step, so the memory exposed through cuIpc should be freed and the handles released.
I managed to put together something of a reproducer (written in SYCL, though that is transparent to CUDA).
Here is the source code:
// ./.env/acpp-installdir/bin/acpp -O3 -isystem /usr/lib/x86_64-linux-gnu/openmpi/include -isystem /usr/lib/x86_64-linux-gnu/openmpi/include/openmpi -lmpi test.cpp && mpirun -n 4 ./a.out
#include <sycl/sycl.hpp>
#include <mpi.h>
#include <random>
#include <stdio.h>
#define WSIZE 4
std::vector<sycl::device> get_sycl_device_list() {
std::vector<sycl::device> devs;
const auto &Platforms = sycl::platform::get_platforms();
for (const auto &Platform : Platforms) {
const auto &Devices = Platform.get_devices();
for (const auto &Device : Devices) {
devs.push_back(Device);
}
}
return devs;
}
sycl::queue get_queue(int local_rank) {
auto d = get_sycl_device_list();
// note: the first enumerated device is skipped in this setup
return sycl::queue{d[local_rank + 1]};
}
void print_nvsmi(int world_rank) {
if (world_rank == 0) {
system("nvidia-smi");
}
}
void wait() { system("sleep 0.1"); }
void run_the_test(int world_rank, int world_size, sycl::queue &q) {
std::mt19937 gen(111);
std::uniform_int_distribution<> distrib(1, 500000000);
for (int tcount = 0; tcount < 1000; tcount++) {
std::vector<std::array<int, 3>> comm_map = {
// sender receiver size
{0, 1, distrib(gen)},
{0, 2, distrib(gen)},
{0, 3, distrib(gen)},
{1, 0, distrib(gen)},
{1, 2, distrib(gen)},
{1, 3, distrib(gen)},
{2, 0, distrib(gen)},
{2, 1, distrib(gen)},
{2, 3, distrib(gen)},
{3, 1, distrib(gen)},
{3, 2, distrib(gen)},
{3, 3, distrib(gen)},
};
std::vector<char *> send_msgs;
printf(" -- alloc\n");
for (int i = 0; i < comm_map.size(); i++) {
int len = comm_map[i][2];
send_msgs.push_back(sycl::malloc_device<char>(len, q));
}
std::vector<MPI_Request> rqs;
printf(" -- comm\n");
for (int i = 0; i < comm_map.size(); i++) {
int tag = i;
int sender = comm_map[i][0];
int receiver = comm_map[i][1];
int len = comm_map[i][2];
char *ptr = send_msgs[i];
if (sender == world_rank) {
rqs.push_back(MPI_Request{});
int rq_index = rqs.size() - 1;
auto &rq = rqs[rq_index];
MPI_Isend(ptr, len, MPI_BYTE, receiver, tag, MPI_COMM_WORLD, &rq);
}
}
for (int i = 0; i < comm_map.size(); i++) {
int tag = i;
int sender = comm_map[i][0];
int receiver = comm_map[i][1];
int len = comm_map[i][2];
char *ptr = send_msgs[i];
if (receiver == world_rank) {
rqs.push_back(MPI_Request{});
int rq_index = rqs.size() - 1;
auto &rq = rqs[rq_index];
MPI_Irecv(ptr, len, MPI_BYTE, sender, tag, MPI_COMM_WORLD, &rq);
}
}
std::vector<MPI_Status> st_lst(rqs.size());
MPI_Waitall(rqs.size(), rqs.data(), st_lst.data());
printf(" -- free\n");
for (int i = 0; i < comm_map.size(); i++) {
sycl::free(send_msgs[i], q);
}
wait();
print_nvsmi(world_rank);
wait();
}
}
int main(void) {
// Initialize the MPI environment
MPI_Init(NULL, NULL);
// Get the number of processes
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
if (WSIZE != world_size) {
printf("the world size should be 4");
return 1;
}
// Get the rank of the process
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
// Get the name of the processor
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);
// Print off a hello world message
printf(
"Hello world from processor %s, rank %d out of %d processors\n",
processor_name,
world_rank,
world_size);
sycl::queue q = get_queue(world_rank);
run_the_test(world_rank, world_size, q);
// Finalize the MPI environment.
MPI_Finalize();
}

It is important to note here that using a fixed size for the communications fixes the issue (the memory usage saturates around +600 MiB after a few loops), whereas using a random size instead results in continuous growth until the crash.
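For reference, the fixed-size variant mentioned above only changes how comm_map is filled; a sketch with an arbitrary placeholder constant (fixed_len is not taken from the original code):

const int fixed_len = 100000000; // arbitrary placeholder message size
std::vector<std::array<int, 3>> comm_map = {
    // sender receiver size
    {0, 1, fixed_len}, {0, 2, fixed_len}, {0, 3, fixed_len},
    {1, 0, fixed_len}, {1, 2, fixed_len}, {1, 3, fixed_len},
    {2, 0, fixed_len}, {2, 1, fixed_len}, {2, 3, fixed_len},
    {3, 1, fixed_len}, {3, 2, fixed_len}, {3, 3, fixed_len},
};
// With identical sizes the mapped IPC regions can be reused from one
// iteration to the next, and the reported usage plateaus instead of
// growing on every loop.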
@tdavidcl thanks for reporting. I'm seeing the exact same problem in my own application, which uses the same kind of communication pattern. Using NVIDIA's compute-sanitizer (leak check), the following is reported:

========= Leaked 31,850,624 bytes at 0x14a930000000
========= Saved host backtrace up to driver entry point at allocation time
========= Host Frame: [0x292640]
========= in /lib64/libcuda.so.1
========= Host Frame:uct_cuda_ipc_map_memhandle in cuda_ipc/cuda_ipc_cache.c:278 [0xd1fd]
========= in /ccc/products/ucx-1.15.0/system/default/lib/ucx/libuct_cuda.so.0
========= Host Frame:uct_cuda_ipc_ep_get_zcopy in cuda_ipc/cuda_ipc_ep.c:146 [0xbaeb]
========= in /ccc/products/ucx-1.15.0/system/default/lib/ucx/libuct_cuda.so.0
========= Host Frame:ucp_rndv_progress_rma_zcopy_common in rndv/rndv.c:586 [0x2bd5ed]
========= in
========= Host Frame:ucp_rndv_req_send_rma_get in rndv/rndv.c:954 [0x2be4bb]
========= in
========= Host Frame:ucp_rndv_receive in rndv/rndv.c:1684 [0x2c09ac]
========= in
========= Host Frame:ucp_tag_rndv_process_rts in tag/tag_rndv.c:46 [0x2d4e6c]
========= in
========= Host Frame:uct_mm_iface_progress in sm/mm/base/mm_iface.c:379 [0x184bb]
========= in /ccc/products/ucx-1.15.0/system/default/lib/libuct.so.0
========= Host Frame:ucp_worker_progress in core/ucp_worker.c:2889 [0xffffffffbb1ff709]
========= in
========= Host Frame:opal_progress in runtime/opal_progress.c:231 [0x2b3cb]
========= in /ccc/products/openmpi-4.1.6.15/gcc--11.1.0/default/lib/libopen-pal.so.40
========= Host Frame:ompi_request_default_wait_all in request/req_wait.c:234 [0x53e94]
========= in /ccc/products/openmpi-4.1.6.15/gcc--11.1.0/default/lib/libmpi.so.40
========= Host Frame:PMPI_Waitall in /tmp/openmpi/4.1.6.15/GCCcore-11.1.0cuda-12.2/openmpi-4.1.6.15.0/ompi/mpi/c/profile/pwaitall.c:80 [0x9553e]
========= in /ccc/products/openmpi-4.1.6.15/gcc--11.1.0/default/lib/libmpi.so.40
========= Host Frame:kanop::MeshGhostsExchanger<3, double, Kokkos::Device<Kokkos::Cuda, Kokkos::CudaSpace> >::do_mpi_send_recv_inplace(kanop::DataArrayGhostedBlock<3, double, Kokkos::Device<Kokkos::Cuda, Kokkos::CudaSpace> >) in /ccc/dsku/nfs-server/user/cont001/ocre/kestenerp/WORKDIR/install/github/kanop/src/shared/MeshGhostsExchanger.h:746 [0x1ac912]

Disabling cuda ipc, as suggested above, makes the leak go away.
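For reproducibility, a report like the one above can be obtained by running the application under NVIDIA's compute-sanitizer with leak checking enabled; an example invocation (the binary name and rank count are placeholders):

mpirun -n 4 compute-sanitizer --tool memcheck --leak-check full ./a.out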
Any update on this issue? It is affecting production quite significantly.
I suggest trying the hpcx-mpi shipped with the NVIDIA compilers; I don't see the memory issue when using that one.
Background information
Using Open MPI direct GPU communication with CUDA, a memory growth is observed throughout the duration of the run. After tracing the issue using the ompi logs (--mca mpi_common_cuda_verbose 10), there appears to be a mismatch between the calls to cuIpcOpenMemHandle and cuIpcCloseMemHandle (cuIpcCloseMemHandle is rarely called). This behavior seems largely unaffected by the various mca options, as if opal were losing track of some allocations.

A similar memory growth pattern is observed on HPE Cray EX235a nodes.

What version of Open MPI are you using? (e.g., v4.1.6, v5.0.1, git branch name and hash, etc.)
Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

apt package on Debian 6.1.99-1.

Please describe the system on which you are running

Linux dgx 6.1.0-23-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.99-1 (2024-07-15) x86_64 GNU/Linux
Details of the problem
The memory growth is observed during a section of the code using non-blocking direct GPU communications (Isend, Irecv) on CUDA memory.
Throughout a run, the GPU memory usage keeps growing until the application crashes with CUDA_OUT_OF_MEMORY. Typically, the evolution of the memory usage of the code, when graphed, looks like this:


When tracing the issue I stumbled on this old post on the NVIDIA forum, https://forums.developer.nvidia.com/t/memory-increase-in-gpu-aware-non-blocking-mpi-communications/275634/4, which pointed toward the cuIpc handling within Open MPI.
Running the same test with --mca mpi_common_cuda_verbose 10, I traced the instances of cuIpcOpenMemHandle and cuIpcCloseMemHandle to follow the evolution of the memory usage, which matches the observed memory growth. I tried running the following test cases:
If we plot the memory evolution traced from the calls to cuIpcOpenMemHandle and cuIpcCloseMemHandle, we get the following:

Large communications at the beginning of the run are indeed freed correctly; however, smaller communications do not appear to be freed until the call to MPI_Finalize.

Lastly, if we set --mca btl_smcuda_use_cuda_ipc 0, no memory leak is observed, confirming the issue. So far this behavior has been reproduced with:
I'm also looking for hotfixes, since this issue is likely to impact us on many supercomputers.
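Until a proper fix lands, the workaround quoted above can be passed directly on the mpirun command line; a usage sketch (the binary name is a placeholder), with the caveat that disabling CUDA IPC typically slows down intra-node GPU-to-GPU transfers:

mpirun --mca btl_smcuda_use_cuda_ipc 0 -n 4 ./a.out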