Download the appropriate code below for your choice of language.
#### C/C++
Each MPI program must include the `mpi.h` header file. If the MPI distribution was installed correctly, the `mpicc`, `mpicxx`, or equivalent wrapper will know the appropriate path for the header and will also link to the correct library.
#### Fortran

All new Fortran programs should use the `mpi` module provided by the MPI software. If the MPI distribution was installed correctly, the `mpif90` or equivalent wrapper will find the module and link to the correct library.
Any recent MPI will also provide an `mpi_f08` module. Its use is recommended, but we will wait until [later](courses/parallel-computing-introduction/distributed_mpi_nonblocking_exchange) to introduce it.
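Regardless of language, most wrapper compilers can print the underlying compile command, which is a quick way to confirm which compiler and MPI installation they will use. This is an optional check, and the exact flag depends on the MPI: `--showme` below is specific to OpenMPI, while MPICH-derived distributions use `-show` instead.

```bash
# OpenMPI: print the full compile/link line the wrapper would run (nothing is compiled)
mpicc --showme
# MPICH and its derivatives use -show instead:
# mpicc -show
```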
If using an HPC system, log in to the appropriate frontend, such as `login.hpc.virginia.edu`. If the system uses a software module system, run
```bash
module load gcc openmpi
```
For Python, add
```bash
module load <python distribution>
```
This will also load the correct MPI libraries. You must have already installed mpi4py. Activate the conda environment if appropriate.
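As an optional sanity check before submitting anything, you can confirm that mpi4py imports and reports the MPI library you expect. The one-liner below is only a sketch; it assumes the modules above are loaded and any conda environment is already activated.

```bash
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"
```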
Use `mpiexec` with `-np` on the frontends **only**, and only for short tests!
Compiling C
```bash
mpicc -o mpihello mpi1.c
```
Compiling C++
```bash
mpicxx -o mpihello mpi1.cxx
```
Compiling Fortran
```bash
mpif90 -o mpihello mpi1.f90
```
### Execute It
C/C++/Fortran
```bash
mpiexec -np 4 ./mpihello
```
Python
```bash
mpiexec -np 4 python mpi1.py
```
### Submit It
For HPC users, write a Slurm script to run your program. Request 1 node and 10 cores on the standard partition. The process manager will know how many cores were requested from Slurm; a sketch of such a script follows the `srun` examples below.
```bash
srun ./mpihello
```
Or
```bash
srun python mpi1.py
```
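The following is a minimal sketch of such a Slurm script. It uses the `gcc` and `openmpi` modules and the `standard` partition from the examples above; the job name, time limit, and any required account or allocation option are placeholders you must adjust for your site.

```bash
#!/bin/bash
#SBATCH --job-name=mpihello
#SBATCH --nodes=1
#SBATCH --ntasks=10
#SBATCH --partition=standard
#SBATCH --time=00:10:00
# Some sites also require an allocation, e.g. #SBATCH --account=<your_allocation>

module load gcc openmpi

# srun launches one MPI process per requested task
srun ./mpihello
```

Submit it with `sbatch`, for example `sbatch mpihello.slurm` if that is what you named the script.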
## Using the Intel Compilers and MPI
The Intel compilers, MPI, and math libraries (MKL) are widely used for high-performance applications, including MPI codes, especially on Intel-architecture systems. The appropriate MPI wrapper compilers are
```bash
# C
mpiicc -o mpihello mpi1.c
# C++
mpiicpc -o mpihello mpi1.cxx
# Fortran
mpiifort -o mpihello mpi1.f90
```
Do not use `mpicc`, `mpicxx`, or `mpif90` with the Intel compilers. Those wrappers are also provided by Intel but invoke the gcc suite and can result in conflicts.
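If you are unsure which backend compiler a wrapper invokes, Intel MPI's wrapper scripts accept `-show`, which prints the underlying command without compiling anything. A quick check might look like the following; it assumes Intel MPI is loaded in your environment.

```bash
# Print the compile line mpiicc would use; confirm it calls the Intel compiler, not gcc
mpiicc -show
```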
content/courses/parallel-computing-introduction/distributed_mpi_project_set3.md
Python programmers: once your code is working, try to use NumPy array operations.
Fortran programmers: when writing double `do` loops over array indices, whenever possible arrange the loops in order of the indices from _right_ to _left_, for cache efficiency. Correct loop ordering can be approximately a factor of 50 faster than incorrect ordering for a double loop.
Use whatever plotting package you know to make a contour plot of the result. If you do not have a preference, you may use `contour.py` below. It reads files starting with a specified base name followed by any number of digits, including none. It will assemble them into one image and show a contour plot; if given the command-line option `-f` it will transpose each subimage for Fortran.
When plotting, the top of an array (row 0) is the bottom of the plot.
Refer to the instructions from your site, for example [UVA Research Computing](https://www.rc.virginia.edu/userinfo/howtos/rivanna/mpi-howto/) for our local environment. Nearly always, you will be required to prepare your code and run it through a _resource manager_ such as [Slurm](https://www.rc.virginia.edu/userinfo/rivanna/slurm/).
If you have access to a multicore computer, you can run MPI programs on it.
If using a compiled language, before you can build MPI codes you must install a compiler, and possibly some kind of IDE. See our guides for [C++](/courses/cpp_introduction/setting_up) or [Fortran](/courses/fortran_introduction/setting_up).
For Python, on all operating systems install [mpi4py](https://mpi4py.readthedocs.io/en/stable/index.html). To install mpi4py you must have a working `mpicc` compiler. If you use `conda` or `mamba` from a distribution like [miniforge](https://github.com/conda-forge/miniforge), the required compiler will be installed as a dependency. For `pip` installations you must provide your own compiler setup.
The author of mpi4py [recommends](https://mpi4py.readthedocs.io/en/stable/install.html) using pip even with a conda environment. The command on a local system will be similar to the one used for installation on a multiuser system.
```no-highlight
python -m pip install mpi4py
```
This may avoid some issues that occasionally arise in prebuilt mpi4py packages. Be sure that an appropriate `mpicc` executable is in the path. Alternatively, use the `conda-forge` channel (recommended in general for most scientific software).
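If you take the conda-forge route instead, a typical install command is sketched below. It assumes a conda or mamba setup with access to the `conda-forge` channel, and it installs OpenMPI as the MPI dependency (an MPICH build is also available).

```bash
conda install -c conda-forge mpi4py openmpi
```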
#### Linux
Installing the HPC Toolkit will also install IntelMPI.
_NVIDIA HPC SDK_
The NVIDIA software ships with a precompiled version of OpenMPI.
The headers and libraries for MPI _must_ match. Using a header from one MPI and libraries from another, or using headers from a version from one compiler and libraries from a different compiler, usually results in some difficult-to-interpret bugs. Moreover, the process manager must be compatible with the MPI used to compile the code. Because of this, if more than one compiler and especially more than one MPI version is installed, the use of _modules_ ([environment modules](http://modules.sourceforge.net/) or [lmod](https://lmod.readthedocs.io/en/latest/)) becomes particularly beneficial. Both Intel and NVIDIA provide scripts for the environment modules package (lmod can also read these), with possibly some setup required. If you plan to use mpi4py as well as compiled-language versions, creating a module for your Python distribution would also be advisable.
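As an illustration, one simple habit that keeps the headers, libraries, and process manager consistent is to load the compiler and MPI modules together and then confirm that the tools resolve to the same installation. The commands below are a sketch using the module names from the examples above; the names will vary by site.

```bash
module purge                # start from a clean environment
module load gcc openmpi     # load a matched compiler/MPI pair
which mpicc mpiexec         # both should point into the same MPI installation
```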
#### Mac OS
#### Windows

MPI codes must generally be compiled and run through a command line on Windows.
The Intel oneAPI Basic Toolkit includes a customized command prompt in its folder in the Apps menu.