Commit a9d2d19

Fixed some problems in Jacobi MPI code, added nonblocking
1 parent 71dc8a3 commit a9d2d19

13 files changed: +819 −281 lines

content/courses/parallel-computing-introduction/codes/contour.py

This file was deleted (−21 lines).
@@ -0,0 +1,107 @@
---
title: "Building and Running MPI Programs"
toc: true
type: docs
weight: 29
date: "2020-11-17T00:00:00"
menu:
    parallel_programming:
        parent: Distributed-Memory Programming
---

We are now ready to write our first MPI program.

## A Hello World example.

Download the appropriate code below for your choice of language.

#### C++

Each MPI program must include the `mpi.h` header file. If the MPI distribution was installed correctly, the `mpicc` or `mpicxx` or equivalent wrapper will know the appropriate path for the header and will also link to the correct library.

{{< spoiler text="C++" >}}
{{< code-download file="/courses/parallel-computing-introduction/codes/mpi1.cxx" lang="c++" >}}
{{< /spoiler >}}

#### Fortran

All new Fortran programs should use the `mpi` module provided by the MPI software. If the MPI distribution was installed correctly, the `mpif90` or equivalent wrapper will find the module and link to the correct library.

Any recent MPI will also provide an `mpi_f08` module. Its use is recommended, but we will wait until [later](/courses/parallel-computing-introduction/distributed_mpi_nonblocking_exchange) to introduce it.

{{< spoiler text="Fortran" >}}
{{< code-download file="/courses/parallel-computing-introduction/codes/mpi1.f90" lang="fortran" >}}
{{< /spoiler >}}

#### Python

The `mpi4py` package consists of several objects. Many codes will need only to import the `MPI` object.

{{< spoiler text="Python" >}}
{{< code-download file="/courses/parallel-computing-introduction/codes/mpi1.py" lang="python" >}}
{{< /spoiler >}}
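
For reference, a minimal mpi4py program along those lines looks roughly like the sketch below; the downloadable `mpi1.py` may differ in detail.

```python
# Minimal mpi4py "hello world" sketch; the actual mpi1.py may differ in detail.
from mpi4py import MPI

comm = MPI.COMM_WORLD          # communicator containing all launched processes
rank = comm.Get_rank()         # this process's ID, 0 .. nprocs-1
nprocs = comm.Get_size()       # total number of processes

print(f"Hello from process {rank} of {nprocs}")
```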

### Build It

If using an HPC system, log in to the appropriate frontend, such as `login.hpc.virginia.edu`. If the system uses a software module system, run
```bash
module load gcc openmpi
```

For Python add
```bash
module load <python distribution>
```
This will also load the correct MPI libraries. You must have already installed mpi4py. Activate the conda environment if appropriate.

Use `mpiexec` and `-np` **only** on the frontends, and only for short tests!

Compiling C
```bash
mpicc -o mpihello mpi1.c
```

Compiling C++
```bash
mpicxx -o mpihello mpi1.cxx
```

Compiling Fortran
```bash
mpif90 -o mpihello mpi1.f90
```

### Execute It
C/C++/Fortran
```bash
mpiexec -np 4 ./mpihello
```

Python
```bash
mpiexec -np 4 python mpi1.py
```

### Submit It

For HPC users, write a Slurm script to run your program. Request 1 node and 10 cores on the standard partition. The process manager will know how many cores were requested from Slurm.
```bash
srun ./mpihello
```
Or
```bash
srun python mpi1.py
```
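
A minimal sketch of such a script, assuming the `standard` partition mentioned above and the `gcc openmpi` modules from the Build It step; the time limit is a placeholder, and some sites also require an account or allocation directive:

```bash
#!/bin/bash
#SBATCH --nodes=1              # one node, as requested above
#SBATCH --ntasks=10            # ten MPI processes (cores)
#SBATCH --partition=standard   # partition name is site-specific
#SBATCH --time=00:10:00        # placeholder time limit for a short test

module load gcc openmpi        # same toolchain used to build the executable

srun ./mpihello
```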

## Using the Intel Compilers and MPI

Intel compilers, MPI, and math libraries (the MKL) are widely used for high-performance applications such as MPI codes, especially on Intel-architecture systems. The appropriate MPI wrapper compilers are
```bash
# C
mpiicc -o mpihello mpi1.c
# C++
mpiicpc -o mpihello mpi1.cxx
# Fortran
mpiifort -o mpihello mpi1.f90
```
Do not use `mpicc`, `mpicxx`, or `mpif90` with Intel compilers. Those wrappers are provided by Intel but use the gcc suite and can result in conflicts.

content/courses/parallel-computing-introduction/distributed_mpi_project_set3.md

+1 −1 lines changed

@@ -30,7 +30,7 @@ Python programmers: once your code is working, try to use NumPy array operations

Fortran programmers: when writing double `do` loops over array indices, whenever possible arrange the loops in order of the indices from _right_ to _left_, for cache efficiency. Correct loop ordering can be approximately a factor of 50 faster than incorrect for a double loop.

-Use whatever plotting package you know to make a contour plot of the result. If you do not have a preference, you may use `contour.py` below. It assumes a single output file and will transpose the result for Fortran, if output is written by column rather than row, so the orientation is the same.
+Use whatever plotting package you know to make a contour plot of the result. If you do not have a preference, you may use `contour.py` below. It reads files starting with a specified base name and followed by any number of digits, including none. It will assemble them into one image and show a contour plot; if given a command-line option of `-f` it will transpose each subimage for Fortran.

When plotting, the top of an array (row 0) is the bottom of the plot.
@@ -1,5 +1,5 @@
---
-title: "Setting Up and Running MPI"
+title: "Setting Up MPI"
toc: true
type: docs
weight: 28
@@ -9,24 +9,6 @@ menu:
parent: Distributed-Memory Programming
---

-We are now ready to write our first MPI program. Select your choice of language below for this example.
-
-## A Hello World example.
-
-{{< spoiler text="C++" >}}
-{{< code-download file="/courses/parallel-computing-introduction/codes/mpi1.cxx" lang="c++" >}}
-{{< /spoiler >}}
-
-{{< spoiler text="Fortran" >}}
-{{< code-download file="/courses/parallel-computing-introduction/codes/mpi1.f90" lang="fortran" >}}
-{{< /spoiler >}}
-
-{{< spoiler text="Python" >}}
-{{< code-download file="/courses/parallel-computing-introduction/codes/mpi1.py" lang="python" >}}
-{{< /spoiler >}}
-
-## Preparing and Running MPI Codes
-
### On a Remote Cluster

Refer to the instructions from your site, for example [UVA Research Computing](https://www.rc.virginia.edu/userinfo/howtos/rivanna/mpi-howto/) for our local environment. Nearly always, you will be required to prepare your code and run it through a _resource manager_ such as [Slurm](https://www.rc.virginia.edu/userinfo/rivanna/slurm/).
@@ -44,13 +26,13 @@ If you have access to a multicore computer, you can run MPI programs on it.

If using a compiled language, before you can build MPI codes you must install a compiler, and possibly some kind of IDE. See our guides for [C++](/courses/cpp_introduction/setting_up) or [Fortran](/courses/fortran_introduction/setting_up).

-For Python, on all operating systems install [mpi4py](https://mpi4py.readthedocs.io/en/stable/index.html). To install mpi4py you must have a working `mpicc` compiler. For Anaconda this generally requires installing the `gcc_linux-64` package using `conda`. Instructions are [here](https://conda.io/projects/conda-build/en/latest/resources/compiler-tools.html) for Linux and command-line macOS. For Windows use the Anaconda [package manager](/courses/python-introduction/package_managers) interface and install `m2w64-gcc`.
+For Python, on all operating systems install [mpi4py](https://mpi4py.readthedocs.io/en/stable/index.html). To install mpi4py you must have a working `mpicc` compiler. If you use `conda` or `mamba` from a distribution like [miniforge](https://github.com/conda-forge/miniforge), the required compiler will be installed as a dependency. For `pip` installations you must provide your own compiler setup.

-The author of mpi4py [recommends](https://mpi4py.readthedocs.io/en/stable/install.html) using pip even with an Anaconda environment. This command will be similar on a local system to that used for installation on a multiuser system.
+The author of mpi4py [recommends](https://mpi4py.readthedocs.io/en/stable/install.html) using pip even with a conda environment. This command will be similar on a local system to that used for installation on a multiuser system.
```no-highlight
python -m pip install mpi4py
```
-This may avoid some issues that occasionally arise in prebuilt mpi4py packages. Be sure that an appropriate `mpicc` executable is in the path.
+This may avoid some issues that occasionally arise in prebuilt mpi4py packages. Be sure that an appropriate `mpicc` executable is in the path. Alternatively, use the `conda-forge` channel (recommended in general for most scientific software).
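
Once mpi4py is installed, a quick sanity check is to print which MPI implementation it was built against; `MPI.Get_library_version()` is part of the mpi4py API, and the file name below is only a placeholder:

```python
# check_mpi.py -- hypothetical file name; run with: mpiexec -np 2 python check_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    # Report which MPI implementation mpi4py was built against
    print(MPI.Get_library_version())
print(f"Rank {comm.Get_rank()} of {comm.Get_size()} is alive")
```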

#### Linux

@@ -65,7 +47,7 @@ Installing the HPC Toolkit will also install IntelMPI.
_NVIDIA HPC SDK_
The NVIDIA software ships with a precompiled version of OpenMPI.

-The headers and libraries for MPI _must_ match. Using a header from one MPI and libraries from another, or using headers from a version from one compiler and libraries from a different compiler, usually results in some difficult-to-interpret bugs. Moreover, the process manager must be compatible with the MPI used to compile the code. Because of this, if more than one compiler and especially more than one MPI version is installed, the use of _modules_ ([environment modules](http://modules.sourceforge.net/) or [lmod](https://lmod.readthedocs.io/en/latest/)) becomes particularly beneficial. Both Intel and NVIDIA provide scripts for the environment modules package (lmod can also read these), with possibly some setup required. If you plan to use mpi4py as well as compiled-language versions, creating a module for Anaconda would also be advisable.
+The headers and libraries for MPI _must_ match. Using a header from one MPI and libraries from another, or using headers from a version from one compiler and libraries from a different compiler, usually results in some difficult-to-interpret bugs. Moreover, the process manager must be compatible with the MPI used to compile the code. Because of this, if more than one compiler and especially more than one MPI version is installed, the use of _modules_ ([environment modules](http://modules.sourceforge.net/) or [lmod](https://lmod.readthedocs.io/en/latest/)) becomes particularly beneficial. Both Intel and NVIDIA provide scripts for the environment modules package (lmod can also read these), with possibly some setup required. If you plan to use mpi4py as well as compiled-language versions, creating a module for your Python distribution would also be advisable.

#### Mac OS

@@ -93,67 +75,3 @@ MPI codes must generally be compiled and run through a command line on Windows.

The Intel oneAPI Basic Toolkit includes a customized command prompt in its folder in the Apps menu.

-### Build It
-
-In a terminal window on the frontend `rivanna.hpc.virginia.edu` run
-```bash
-module load gcc openmpi
-```
-
-For Python add
-```bash
-module load anaconda
-```
-This will also load the correct MPI libraries. You must have already installed mpi4py. Activate the conda environment if appropriate.
-
-Use mpiexec and –np **only** on the frontends! Use for short tests only!
-
-Compiling C
-```bash
-mpicc –o mpihello mpi1.c
-```
-
-Compiling C++
-```bash
-mpicxx –o mpihello mpi1.cxx
-```
-
-Compiling Fortran
-```bash
-mpif90 –o mpihello mpi1.f90
-```
-
-### Execute it
-C/C++/Fortran
-```bash
-mpiexec –np 4 ./mpihello
-```
-
-Python
-```python
-mpiexec –np 4 python mpi1.py
-```
-
-### Submit It
-
-Write a SLURM script to run your program. Request 1 node and 10 cores on the standard partition. The process manager will know how many cores were requested from SLURM.
-```bash
-srun ./mpihello
-```
-Or
-```bash
-srun python mpihello\.py
-```
-
-## Using the Intel Compilers and MPI
-
-Intel compilers, MPI, and math libraries (the MKL) are widely used for high-performance applications such as MPI codes, especially on Intel-architecture systems. The appropriate MPI wrapper compilers are
-```bash
-#C
-mpiicc -o mpihello mpi1.c
-# C++
-mpiicpc -o mpihello mpi1.cxx
-# Fortran
-mpiifort -o mpihello mpi1.f90
-```
-Do not use mpicc, mpicxx, or mpif90 with Intel compilers. Those are provided by Intel but use the gcc suite and can result in conflicts.
@@ -1,23 +1,35 @@
import sys
+import os
import argparse
-import glob
+import re
import numpy as np
import pylab as plt

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--fortran", help="Fortran ordering", action="store_true")
-parser.add_argument("filename", help="output file name")
+parser.add_argument("filename", help="Base name for data files")
args = parser.parse_args()
base = args.filename

-data=np.loadtxt(args.filename,unpack=False)
+files = [f for f in os.listdir('.') if re.match(base+"\d*",f)]
+files.sort()

-if args.fortran:
-    data.T
+subdomains=[]
+for file in files:
+    data=np.loadtxt(file,unpack=False)
+
+    if args.fortran:
+        data.T
+    else:
+        pass
+    subdomains.append(data)

-print(data.size, data.shape)
+if args.fortran:
+    image=np.hstack(subdomains)
+else:
+    image=np.vstack(subdomains)

fig=plt.figure()
-plt.contourf(data)
+plt.contourf(image)
plt.colorbar()
plt.show()
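
One caveat about the new loop: `data.T` returns the transpose without modifying `data` in place, and the result is never assigned, so the `-f`/`--fortran` option has no effect on the assembled image as committed. A minimal sketch of the presumably intended handling, reusing the same names:

```python
# Sketch of the presumably intended Fortran handling: assign the transpose back
# before collecting each subdomain (names match the script above).
for file in files:
    data = np.loadtxt(file, unpack=False)
    if args.fortran:
        data = data.T   # transpose each subimage written column-by-column from Fortran
    subdomains.append(data)
```

The script would then be invoked as, for example, `python contour.py -f solution`, where `solution` is a hypothetical base name for the per-rank output files.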
