Running parallel MPI RCall #204
Comments
Not sure I can help, but look at ... Are you only using R to read in data? If so, what format is it in?
Thank you. I just use R to read phylogenetic trees through the ... I was wondering whether you have any insight into why this might be, and whether there is a solution. (I think I read somewhere else that in parallel you open just one R session instead of one per process; might that be it?)
Sorry, I don't really know. The only thing I can suggest is manually ...
I want to second the suspicion that R isn't being set up correctly on all the nodes -- it might be worthwhile to set ... I'm going to go ahead and close this as stale, since this issue has been inactive for a long time and a lot has changed in the meantime. If you managed to get this working and want to drop a breadcrumb here for posterity, that would be great.
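Following up on the setup suspicion above, here is a minimal sketch of how one might check which R installation each worker actually sees. It assumes RCall is installed on every node, and the path `/usr/lib/R` is purely hypothetical -- substitute the cluster's real R location:

```julia
using Distributed

# Hypothetical path: point every worker at the same R installation.
# RCall reads R_HOME when the package loads, so this must run *before*
# `using RCall` executes on the workers.
@everywhere ENV["R_HOME"] = "/usr/lib/R"  # adjust to your cluster's R

@everywhere using RCall

# Ask each worker which R installation it actually embedded.
for p in workers()
    home = remotecall_fetch(() -> rcopy(String, R"R.home()"), p)
    println("worker $p -> $home")
end
```

If the printed paths differ across workers (or the `@everywhere using RCall` step itself fails on some of them), that points to a per-node R setup problem rather than an RCall bug.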
Hi, I'm trying to run an MPI parallel job in which RCall is used to read some data. When I run the code on one node with multiple cores (using `julia -p <n>`), it works, but when I use cores on different nodes (using `julia --machinefile`) I get the following error:

(I deleted some of the directory paths and changed them to `...`)

And so forth for each node. Following this thread, I tried `import RCall; @everywhere using RCall`, but I get the same error. I'm trying to run this on a cluster that uses Slurm.