Commit 2982169
Last few tweaks for miniforge
1 parent 6821c5e · commit 2982169

27 files changed: +188 −48 lines

content/courses/fortran-introduction/array_intrinsics.md (+3)

````diff
@@ -25,6 +25,7 @@ These create new arrays from old. `PACK` and `UNPACK` can be used to "flatten" m
 
 ```fortran
 ! Convert an array from one shape to another (total size must match)
+! SHAPE must be a rank-one array whose elements are sizes in each dimension
 RESHAPE(SOURCE,SHAPE[,PAD][,ORDER])
 ! Combine two arrays of same shape and size according to MASK
 ! Take from ARR1 where MASK is .true., ARR2 where it is .false.
@@ -37,8 +38,10 @@ SPREAD(SOURCE,DIM,NCOPIES)
 ```
 **Example**
 ```fortran
+!Array and mask are of size NxM
 mask=A<0
 merge(A,0,mask)
+B=reshape(A,(/M,N/))
 ! for C=1, D=[1,2]
 print *, spread(C, 1, 2) ! "1 1"
 print *, spread(D, 1, 2) ! "1 1 2 2"
````
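A rough NumPy analogue may help readers check the shapes in the Fortran example above. This is an illustrative sketch (the array values are invented, and the correspondence is conceptual, not exact): `np.where` plays the role of `MERGE`, `reshape(..., order='F')` mimics Fortran's column-major `RESHAPE`, and `np.repeat` stands in for `SPREAD`.

```python
import numpy as np

# Hypothetical 2x3 example array (N=2, M=3), values invented for illustration
A = np.array([[1, -2, 3],
              [-4, 5, -6]])

# MERGE(A, 0, mask): take A where mask is .true., 0 where .false.
mask = A < 0
merged = np.where(mask, A, 0)

# RESHAPE(A, (/M, N/)): Fortran fills column-major, so order='F' matches
B = A.reshape((3, 2), order='F')

# SPREAD(D, 1, 2): two copies of D along a new first dimension
D = np.array([1, 2])
spread = np.repeat(D[np.newaxis, :], 2, axis=0)

print(merged)                     # elementwise selection
print(B)                          # 3x2 column-major reshape
print(spread.flatten(order='F'))  # column-major print order: [1 1 2 2]
```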
Binary file not shown.

content/courses/parallel-computing-introduction/codes/mpi_vector_type.f90 (+24 −5)

```diff
@@ -7,9 +7,10 @@ program sendrows
 
   integer :: nr, nc
   integer :: rank, nprocs, tag=0
-  integer :: errcode
-  type(MPI_Status) :: recv_status
+  integer :: err, errcode
   integer :: ncount, blocklength, stride
+  type(MPI_Status), dimension(:), allocatable :: mpi_status_arr
+  type(MPI_Request), dimension(:), allocatable :: mpi_requests
   type(MPI_Datatype) :: rows
 
   integer, parameter :: root=0
@@ -25,13 +26,19 @@ program sendrows
   nc=nprocs
 
   allocate(u(nr,nc),w(nr,nc))
+  allocate(mpi_requests(2*nprocs),mpi_status_arr(2*nprocs))
   u=0.0d0
   w=0.0d0
 
 !Cyclic sending
-  if (rank /= nprocs-1) then
+  if (rank == nprocs-1) then
+     src=rank-1
      dest=0
+  else if (rank==0) then
+     src=nprocs-1
+     dest=rank+1
   else
+     src=rank-1
      dest=rank+1
   endif
 
@@ -46,11 +53,23 @@ program sendrows
   do i=0,nprocs-1
      if (rank==i) then
         tag=i
-        call MPI_Recv(w(i+2,1),1,rows,source,tag,MPI_COMM_WORLD,recv_status)
-        call MPI_Send(u(i+1,1),1,rows,dest,tag,MPI_COMM_WORLD)
+        print *, i,i+1,i+nprocs+1
+        if (i==0) then
+           call MPI_Irecv(w(nprocs,1),1,rows,src,tag,MPI_COMM_WORLD,mpi_requests(i+1))
+           call MPI_Isend(u(i+1,1),1,rows,dest,tag,MPI_COMM_WORLD,mpi_requests(i+nprocs+1))
+        else if (i==nprocs-1) then
+           call MPI_Irecv(w(1,1),1,rows,src,tag,MPI_COMM_WORLD,mpi_requests(i+1))
+           call MPI_Isend(u(nprocs,1),1,rows,dest,tag,MPI_COMM_WORLD,mpi_requests(i+nprocs+1))
+        else
+           call MPI_Irecv(w(i+2,1),1,rows,src,tag,MPI_COMM_WORLD,mpi_requests(i+1))
+           call MPI_Isend(u(i+1,1),1,rows,dest,tag,MPI_COMM_WORLD,mpi_requests(i+nprocs+1))
+        endif
      endif
   enddo
 
+  call MPI_Waitall(size(mpi_requests),mpi_requests,mpi_status_arr)
+
+
   call MPI_TYPE_FREE(rows)
 
 !Print neatly
```
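The rewritten neighbor selection in this diff implements a periodic ring. A pure-Python sketch of the same branching (illustrative only, not part of the course code; assumes at least two ranks) shows that the three cases collapse to modular arithmetic:

```python
def ring_neighbors(rank, nprocs):
    """Source and destination ranks for cyclic shifting, mirroring the
    three-way if/else in the Fortran diff. Assumes nprocs >= 2."""
    if rank == nprocs - 1:       # last rank wraps its destination to 0
        src, dest = rank - 1, 0
    elif rank == 0:              # first rank wraps its source to the end
        src, dest = nprocs - 1, rank + 1
    else:                        # interior ranks shift by one
        src, dest = rank - 1, rank + 1
    # The branches are equivalent to a single modular form:
    assert (src, dest) == ((rank - 1) % nprocs, (rank + 1) % nprocs)
    return src, dest

for r in range(4):
    print(r, ring_neighbors(r, 4))
```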

content/courses/parallel-computing-introduction/distributed_mpi_setup.md (+12 −8)

````diff
@@ -9,11 +9,18 @@ menu:
 parent: Distributed-Memory Programming
 ---
 
+Using MPI requires access to a computer with at least one node with multiple cores. The Message Passing Interface is a standard and there are multiple implementations of it, so a choice of distribution must be made. Popular implementations include [MPICH](https://www.mpich.org/), [OpenMPI](https://www.open-mpi.org/), [MVAPICH2](https://mvapich.cse.ohio-state.edu/), and [IntelMPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.gdkhva). MPICH, OpenMPI, and MVAPICH2 must be built for a system, so a compiler must be chosen as well. IntelMPI is typically used with the Intel compiler and is provided by the vendor as part of their [HPC Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/hpc-toolkit.html#gs.gdkm8y). MVAPICH2 is a version of MPICH that is specialized for high-speed [Infiniband](https://en.wikipedia.org/wiki/InfiniBand) networks on high-performance clusters, so would generally not be appropriate for installation on individual computers.
+
 ### On a Remote Cluster
 
-Refer to the instructions from your site, for example [UVA Research Computing](https://www.rc.virginia.edu/userinfo/howtos/rivanna/mpi-howto/) for our local environment. Nearly always, you will be required to prepare your code and run it through a _resource manager_ such as [Slurm](https://www.rc.virginia.edu/userinfo/rivanna/slurm/).
+Refer to the instructions from your site, for example [UVA Research Computing](https://www.rc.virginia.edu/userinfo/howtos/rivanna/mpi-howto/) for our local environment. Nearly always, you will be required to prepare your code and run it through a _resource manager_ such as [Slurm](https://www.rc.virginia.edu/userinfo/rivanna/slurm/). Most HPC sites use a _modules_ system, so generally you will need to load modules for an MPI version and usually the corresponding compiler. It is important to be sure that you use a version of MPI that can communicate correctly with your resource manager.
+```bash
+module load gcc
+module load openmpi
+```
+is an example setup for compiled-language users.
 
-It is generally preferable, and may be required, that mpi4py be installed from the conda-forge repository. On a cluster, mpi4py will need to link to a locally-built version of MPI that can communicate with the resource manager. The conda-forge maintainers provide instructions for this [here]((https://conda-forge.org/docs/user/tipsandtricks/#using-external-message-passing-interface-mpi-libraries). In our example, we will use openmpi. First we must load the modules for the compiler and MPI version:
+For Python, the mpi4py package is most widely available. It is generally preferable, and may be required, that mpi4py be installed from the conda-forge repository. On a cluster, mpi4py will need to link to a locally-built version of MPI that can communicate with the resource manager. The conda-forge maintainers provide instructions for this [here](https://conda-forge.org/docs/user/tipsandtricks/#using-external-message-passing-interface-mpi-libraries). In our example, we will use openmpi. First we must load the modules for the compiler and MPI version:
 
 ```bash
 module load gcc openmpi
@@ -42,8 +49,6 @@ After this completes, we can install mpi4py
 conda install -c conda-forge mpi4py
 ```
 
-
-
 ### On a Local Computer
 
 If you have access to a multicore computer, you can run MPI programs on it.
@@ -56,7 +61,7 @@ The author of mpi4py [recommends](https://mpi4py.readthedocs.io/en/stable/instal
 ```no-highlight
 python -m pip install mpi4py
 ```
-This may avoid some issues that occasionally arise in prebuilt mpi4py packages. Be sure that an appropriate `mpicc` executable is in the path. Alternatively, use the `conda-forge` channel (recommended in general for most scientific software).
+This may avoid some issues that occasionally arise in prebuilt mpi4py packages. Be sure that an appropriate `mpicc` executable is in the path. Alternatively, use the `conda-forge` channel (recommended in general for most scientific software). Most of the time, if you are installing mpi4py from conda-forge, you can simply install the package. MPICH is the default when installed as a prerequisite for conda-forge.
 
 #### Linux
 
@@ -71,7 +76,7 @@ Installing the HPC Toolkit will also install IntelMPI.
 _NVIDIA HPC SDK_
 The NVIDIA software ships with a precompiled version of OpenMPI.
 
-The headers and libraries for MPI _must_ match. Using a header from one MPI and libraries from another, or using headers from a version from one compiler and libraries from a different compiler, usually results in some difficult-to-interpret bugs. Moreover, the process manager must be compatible with the MPI used to compile the code. Because of this, if more than one compiler and especially more than one MPI version is installed, the use of _modules_ ([environment modules](http://modules.sourceforge.net/) or [lmod](https://lmod.readthedocs.io/en/latest/)) becomes particularly beneficial. Both Intel and NVIDIA provide scripts for the environment modules package (lmod can also read these), with possibly some setup required. If you plan to use mpi4py as well as compiled-language versions, creating a module for your Python distribution would also be advisable.
+The headers and libraries for MPI _must_ match. Using a header from one MPI and libraries from another, or using headers from a version from one compiler and libraries from a different compiler, usually results in some difficult-to-interpret bugs. Moreover, the process manager must be compatible with the MPI used to compile the code. Because of this, if more than one compiler and especially more than one MPI version is installed, the use of _modules_ ([environment modules](http://modules.sourceforge.net/) or [lmod](https://lmod.readthedocs.io/en/latest/)) becomes particularly beneficial. Both Intel and NVIDIA provide scripts for the environment modules package (lmod can also read these), with possibly some setup required. If you plan to use mpi4py as well as compiled-language versions, creating a module for your Python distribution would also be advisable. Installation of a module system on an individual Linux system is straightforward for an administrator with some experience.
 
 #### Mac OS
 
@@ -87,7 +92,7 @@ The NVIDIA suite is not available for Mac OS.
 #### Windows
 
 _GCC_
-The simplest way to use OpenMPI on Windows is through [Cygwin](https://www.cygwin.com/). In this case, the gcc compiler suite would first be installed, with g++ and/or gfortran added. Then the openmpi package could also be installed through the cygwin package manager.
+The easiest way to use OpenMPI on Windows is through [Cygwin](https://www.cygwin.com/). In this case, the gcc compiler suite would first be installed, with g++ and/or gfortran added. Then the openmpi package could also be installed through the cygwin package manager.
 
 _Intel oneAPI_
 Install the HPC Toolkit.
@@ -98,4 +103,3 @@ Download the package when it is available.
 MPI codes must generally be compiled and run through a command line on Windows. Cygwin users can find a variety of tutorials online, for example [here](https://www.youtube.com/watch?v=ENH70zSaztM).
 
 The Intel oneAPI Basic Toolkit includes a customized command prompt in its folder in the Apps menu.
-
````
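For the cluster case, the conda-forge tips-and-tricks page linked in this diff describes pinning the MPI package to an `external_*` build so that mpi4py links against the module-provided, scheduler-aware MPI rather than a conda-packaged one. A hedged sketch of that pattern (the version pin is illustrative; match it to your cluster's module):

```shell
# Load the cluster's compiler and MPI modules first
module load gcc openmpi
# Pin openmpi to the external build string so conda-forge's mpi4py uses
# the cluster's own OpenMPI (version string is illustrative)
conda install -c conda-forge "openmpi=4.1.*=external_*" mpi4py
```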
```diff
@@ -0,0 +1,6 @@
+{
+ "cells": [],
+ "metadata": {},
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
```
```diff
@@ -0,0 +1,3 @@
+def hello():
+    print("Hello")
+    return None
```

content/courses/python-introduction/.ipynb_checkpoints/untitled-checkpoint.txt

Whitespace-only changes.
```diff
@@ -0,0 +1,61 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "6b7fb079-d472-4e57-aa65-ca9363bfc563",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import hello_func"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "319ee56c-7d85-47ef-adf8-6927cebf3b5d",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Hello\n"
+     ]
+    }
+   ],
+   "source": [
+    "hello_func.hello()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "624b0850-81eb-4035-bc8a-1dc6bfdb9d4d",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.6"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
```
Binary file not shown.
```diff
@@ -0,0 +1,3 @@
+def hello():
+    print("Hello")
+    return None
```
