
Commit e3f0496

Added Jupyterlab and Spyder standalones to python-introduction

1 parent ca0a574 commit e3f0496

23 files changed: +279 −49 lines changed
@@ -0,0 +1,67 @@
```fortran
program sendrows
   use mpi_f08
   implicit none

   double precision, allocatable, dimension(:,:) :: u,w
   integer :: i
   integer :: nr, nc
   integer :: rank, nprocs, tag
   type(MPI_Status)   :: recv_status
   type(MPI_Datatype) :: rows
   integer :: ncount, blocklength, stride

   integer, parameter :: root=0
   integer :: src, dest

   !Initialize MPI, get the number of processes and the local rank
   call MPI_Init()
   call MPI_Comm_size(MPI_COMM_WORLD,nprocs)
   call MPI_Comm_rank(MPI_COMM_WORLD,rank)

   !We will make the matrix scale with the number of processes for simplicity
   nr=nprocs
   nc=nprocs

   allocate(u(nr,nc),w(nr,nc))
   !Fill u with a rank-dependent value so the exchange is visible in the output
   u=dble(rank+1)
   w=0.0d0

   !Cyclic pattern: send to the next rank, receive from the previous one
   if (rank == nprocs-1) then
      dest=0
   else
      dest=rank+1
   endif
   if (rank == 0) then
      src=nprocs-1
   else
      src=rank-1
   endif

   !A row of a column-major array is nc single elements separated by the
   !column length nr
   ncount=nc
   blocklength=1
   stride=nr

   call MPI_Type_vector(ncount,blocklength,stride,MPI_DOUBLE_PRECISION,rows)
   call MPI_Type_commit(rows)

   !Each rank sends row rank+1 of u to dest and receives row src+1 of w from
   !src. Sendrecv avoids the deadlock that would occur if every rank posted a
   !blocking receive first.
   tag=0
   call MPI_Sendrecv(u(rank+1,1),1,rows,dest,tag,                            &
                     w(src+1,1),1,rows,src,tag,MPI_COMM_WORLD,recv_status)

   call MPI_Type_free(rows)

   !Print neatly
   do i=1,nr
      write(*,*) "|",u(i,:),"|"," |",w(i,:),"|"
   enddo

   call MPI_Finalize()

end program sendrows
```

content/courses/parallel-computing-introduction/distributed_mpi_setup.md

+27 −3
@@ -13,12 +13,36 @@ menu:

Refer to the instructions from your site, for example [UVA Research Computing](https://www.rc.virginia.edu/userinfo/howtos/rivanna/mpi-howto/) for our local environment. Nearly always, you will be required to prepare your code and run it through a _resource manager_ such as [Slurm](https://www.rc.virginia.edu/userinfo/rivanna/slurm/).

It is generally preferable, and may be required, that mpi4py be installed from the conda-forge repository. On a cluster, mpi4py will need to link to a locally built version of MPI that can communicate with the resource manager. The conda-forge maintainers provide instructions for this [here](https://conda-forge.org/docs/user/tipsandtricks/#using-external-message-passing-interface-mpi-libraries). In our example, we will use OpenMPI. First we must load the modules for the compiler and MPI version:
```bash
module load gcc openmpi
```
We must not install OpenMPI directly from conda-forge; rather, we make use of the "hooks" they have provided.
```bash
module list openmpi
```
In our example, the module list returns
```bash
Currently Loaded Modules Matching: openmpi
  1) openmpi/4.1.4
```
Now we check that our version of OpenMPI is available:
```bash
conda search -f openmpi -c conda-forge
```
Most versions are there, so we can install the one we need:
```bash
conda install -c conda-forge "openmpi=4.1.4=external_*"
```
Be sure to include the `external_*` string.

After this completes, we can install mpi4py:
```bash
conda install -c conda-forge mpi4py
```
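To verify that mpi4py was built against the external OpenMPI rather than a conda-packaged MPI, we can print the library version that MPI reports and run a short test under the resource manager. The script name below is only a suggestion.
```python
# check_mpi4py.py : minimal sanity check of the mpi4py installation.
# Run with, for example: srun -n 2 python check_mpi4py.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    # Should report an Open MPI version that matches the loaded module
    print(MPI.Get_library_version())
print("Hello from rank", comm.Get_rank(), "of", comm.Get_size())
```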

### On a Local Computer

content/courses/parallel-computing-introduction/distributed_mpi_types.md

+41 −9
@@ -13,28 +13,46 @@ Modern programming languages provide data structures that may be called "structs
MPI also provides a general type that enables programmer-defined datatypes. Unlike arrays, which must be adjacent in memory, MPI derived datatypes may consist of elements in non-contiguous locations in memory.

## Example: MPI_TYPE_VECTOR

While more general derived MPI datatypes are available, one of the most commonly used is the `MPI_TYPE_VECTOR`. This creates a group of elements of size _blocklength_ separated by a constant interval, called the _stride_, in memory. Examples would be generating a type for columns in a row-major-oriented language, or rows in a column-major-oriented language.

{{< figure src="/courses/parallel-computing-introduction/img/mpi_vector_type.png" caption="Layout in memory for vector type. In this example, the blocklength is 4, the stride is 6, and the count is 3." >}}

C++
```c++
int ncount, blocklength, stride;
MPI_Datatype newtype;
// Note that oldtype is not passed by reference but newtype is
MPI_Type_vector(ncount, blocklength, stride, oldtype, &newtype);
```

Fortran
```fortran
integer :: ierr
integer :: ncount, blocklength, stride
integer :: newtype
!code
call MPI_TYPE_VECTOR(ncount, blocklength, stride, oldtype, newtype, ierr)
```
Fortran 2008
```fortran
use mpi_f08
integer :: ierr
integer :: ncount, blocklength, stride
type(MPI_Datatype) :: newtype
!code
call MPI_Type_vector(ncount, blocklength, stride, oldtype, newtype, ierr)
```
For both C++ and Fortran, `ncount`, `blocklength`, and `stride` must be integers. The `oldtype` is a pre-existing type, usually a built-in MPI type such as MPI_FLOAT or MPI_REAL. For C++ the new type is declared as an `MPI_Datatype`, unless it corresponds to an existing built-in type. For Fortran the types are integers if they are not built-in types, whereas for Fortran 2008 they are a `type(MPI_Datatype)`, declared similarly to the C++ equivalent. The `newtype` is a name chosen by the programmer.

Python
```python
newtype = oldtype.Create_vector(ncount, blocklength, stride)
```

## Committing the Type

A derived type must be _committed_ before it can be used.

```c++
@@ -44,25 +62,39 @@ Fortran
```fortran
call MPI_TYPE_COMMIT(newtype,ierr)
```

Python
```
newtype.Commit()
```

## Using a Type

To use our newly committed type in an MPI communication function, we must pass it the starting position of the data to be placed into the type. Notice that the item count is the _number of instances_ of the type. In our examples this will usually be 1.

C++
```c++
//We need to pass the first element by reference because an array element
//is not a pointer
MPI_Send(&u[0][i],1,newtype,dest,tag,MPI_COMM_WORLD);
MPI_Recv(&w[0][j],1,newtype,source,tag,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
```

Fortran

Note that we do not use array slicing in this example. It is often best to avoid that, especially when using nonblocking communications, because passing a slice may involve a temporary copy.
```fortran
call MPI_Send(u(i,1),1,newtype,dest,tag,MPI_COMM_WORLD,ierr)
call MPI_Recv(w(j,1),1,newtype,source,tag,MPI_COMM_WORLD,MPI_STATUS_IGNORE,ierr)
```

Python

Python NumPy arrays do not generally expose the underlying pointers to the values. We could extract the row or column as a slice, but a non-contiguous slice would have to be copied into a buffer, which is what we want to avoid. Therefore, we create the MPI vector type as above, then use a function such as `np.frombuffer` to give MPI access to the raw values. We also need to provide `frombuffer` with an offset _in bytes_ to tell it where in memory to start reading the values.
```python
#define i and j somewhere
sendcol = i
recvcol = j
itemsize = np.dtype('float64').itemsize
comm.Send([np.frombuffer(u.data, np.float64, offset=sendcol*itemsize), 1, newtype], dest)
comm.Recv([np.frombuffer(w.data, np.float64, offset=recvcol*itemsize), 1, newtype], source)
```

content/courses/python-high-performance/_index.md

+14 −7
@@ -28,7 +28,7 @@ For this tutorial, it is assumed that you have experience with programming in Py

To follow along for the [Serial Optimization](#serial-optimization-strategies) and [Multiprocessing](#multiprocessing) examples, you can execute the code examples on your own computer or on UVA's high-performance computing cluster. Examples described in the last section, [Distributed Parallelization](#distributed-parallelization), are best executed on UVA's high-performance computing platform.

If you are using your local computer for personal applications not related to work, you can install the Anaconda distribution (<a href="https://www.anaconda.com/distribution/" target="_blank">download</a>) to run the code examples. Anaconda provides multiple Python versions, an integrated development environment (IDE) with editor and profiler, Jupyter notebooks, and an easy-to-use package and environment manager. If you will or might use the installation for work, or simply prefer a more minimal setup that is easier to customize, we suggest [Miniforge](https://github.com/conda-forge/miniforge).

**If you are using UVA HPC, follow these steps to verify that your account is active:**

@@ -40,26 +40,33 @@ If you are using your local computer, we recommend the Anaconda distribution (<a
* **User name:** Your UVA computing id (e.g. mst3k; don't enter your entire email address)
* **Password:** Your UVA Netbadge password

3. Starting Spyder: You must first activate an environment and install Spyder into it. Open a terminal window and type
```
module load miniforge
python -V
```
You will obtain a response like
```
Python 3.11.3
```
If your environment does not include it, install the package
```
conda install spyder
```
Now type
```
spyder &
```

For Jupyterlab you can use [Open OnDemand](https://ood.hpc.virginia.edu). Jupyterlab is one of the Interactive Apps. Note that these apps submit jobs to compute nodes. If you need to use Jupyterlab outside of the OOD interactive app, you should install it into your environment similarly to installing Spyder.
```
conda install jupyterlab nbconvert
```
The `nbconvert` package allows Jupyter to export your cells to various formats, including Python scripts. You can then invoke Jupyterlab with
```
jupyter-lab &
```
It will open in the default Web browser.

Please note that parallelization methods may not work well or at all in Jupyter.
<br>

content/courses/python-high-performance/compiled_code.md

+4 −4
@@ -19,11 +19,11 @@ In order to wrap code written in a compiled language, you must have a compiler f

#### Windows

If you do not use Fortran, you can install MS Visual Studio. A community edition is available free for personal use and includes C and C++ compilers. If you might use Fortran, a good option is [MinGW-64](https://www.mingw-w64.org/). This may also provide good compatibility with a Python installation such as Miniforge, even if you do not expect to use Fortran. MinGW-64 provides several options for builds of `gcc` (the GNU Compiler Collection). The `ucrt` build is recommended but may be a little rough around the edges, at least for Fortran users. The older `mingw64` build may be more suitable. Either or both can be installed on the same system; the path will select the compiler used by Python or the IDE. A nice tutorial on installing MinGW-64 and using it with the free [VSCode IDE](https://code.visualstudio.com/) is [here](https://code.visualstudio.com/docs/cpp/config-mingw). You must install VSCode extensions for C/C++ and, if appropriate, Fortran. To install the mingw64 version, simply substitute that name for ucrt in the `pacman` instructions. For Fortran, after the basic toolchain is installed, run
```no-highlight
pacman -S mingw-w64-x86_64-gcc-fortran
```
Now go to Settings and edit your system environment variables to add `C:\msys2\mingw64\bin` to `path`. Once that is done, you can use a command line or a local PowerShell, such as the Miniforge shell, to run f2py as shown below for Linux. After that, move the resulting library to an appropriate location in your PYTHONPATH.

#### Mac OS

@@ -68,7 +68,7 @@ It is also possible to wrap the Fortran code in C by various means, such as the

The [CFFI](https://cffi.readthedocs.io/en/latest/overview.html) package can be used to wrap C code. CFFI (C Foreign Function Interface) wraps C _libraries_ into Python code. To use it, prepare a shared (dynamic) library of functions. This requires a C compiler, and the exact steps vary depending on your operating system. Windows compilers produce a file called a _DLL_, Unix/Linux shared libraries end in `.so`, and macOS shared libraries end in `.dylib`.

CFFI is not a base package, but is often included in Python distributions such as Miniforge. It may also be included as an add-on for other installations such as system Pythons, since some other package, such as a cryptography library, may require it. Before installing CFFI, first attempt to import it
```python
import cffi
```
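As a sketch of the calling side, suppose we have built a shared library `libmymath.so` containing a C function `double square(double x)`; the library and function names here are only placeholders.
```python
import cffi

ffi = cffi.FFI()
# Declare the C signature of the function we want to call
ffi.cdef("double square(double x);")
# Load the shared library built earlier (.dll on Windows, .dylib on macOS)
lib = ffi.dlopen("./libmymath.so")

print(lib.square(3.0))   # prints 9.0
```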
@@ -175,7 +175,7 @@ More detailed information describing the use of Cython can be found <a href="htt

## Numba

Numba is available through the Miniforge Python distribution. It compiles selected functions using the LLVM compiler. Numba is accessed through a decorator. Decorators in Python are wrappers that modify functions without the need to change their code.

**Exercise:**
A well-known but slow way to compute pi is by a Monte Carlo method. Given a circle of unit radius inside a square with side length 2, we can estimate the area inside and outside the circle by throwing “darts” (random locations). Since the area of the circle is pi and the area of the square is 4, the ratio of hits inside the circle to the total thrown is pi/4.
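As an illustration of the decorator syntax, one possible (unoptimized) Numba version of the dart-throwing loop is sketched below; the function name and the number of darts are arbitrary.
```python
import numpy as np
from numba import njit

@njit
def pi_monte_carlo(ndarts):
    hits = 0
    for _ in range(ndarts):
        # Random point in the square [-1,1] x [-1,1]
        x = 2.0*np.random.random() - 1.0
        y = 2.0*np.random.random() - 1.0
        if x*x + y*y <= 1.0:
            hits += 1
    return 4.0*hits/ndarts

print(pi_monte_carlo(10_000_000))
```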

content/courses/python-high-performance/gpu_acceleration.md

+1 −1
@@ -21,7 +21,7 @@ In most cases, it may be advisable to set up a separate environment for differen

[CuPy](https://cupy.dev/) is an implementation of many of the features of NumPy and SciPy that takes advantage of the GPU. It is one of the simplest introductions to GPU programming.
It can be [installed](https://docs.cupy.dev/en/stable/install.html) with conda through the `conda-forge` channel.
To install from the command line, use a terminal in macOS or Linux, or a Miniforge or Anaconda shell on Windows:
```bash
conda install -c conda-forge cupy
```
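Once it is installed on a system with a GPU, CuPy mirrors much of the NumPy interface. A quick check that the installation can see a device might look like this (the array contents are arbitrary):
```python
import cupy as cp

print(cp.cuda.runtime.getDeviceCount())   # number of visible GPUs

x = cp.arange(10, dtype=cp.float64)       # allocated on the GPU
y = cp.sqrt(x)                            # computed on the GPU
print(float(y.sum()))                     # copied back to the host for printing
```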
