content/courses/cpp-introduction/setting_up.md (+5 -4)
@@ -61,13 +61,14 @@ Recently, Microsoft has released the Windows Subsystem for Linux ([WSL](https://
A drawback to both Cygwin and the WSL is portability of executables. Cygwin executables must be able to find the Cygwin DLL and are therefore not standalone.
WSL executables only run on the WSL. For standalone, native binaries a good choice is _MinGW_. MinGW is derived from Cygwin.

-MinGW provides a free distribution of gcc/g++/gfortran. The standard MinGW distribution is updated fairly rarely and generates only 32-bit executables. We will describe [MinGW-w64](http://mingw-w64.org/doku.php), a fork of the original project.
-MinGW-w64 can be installed beginning from the [MSYS2](https://www.msys2.org/) project. MSYS2 provides a significant subset of the Cygwin tools.
-Download and install it.
-Once it has been installed, follow the [instructions](https://www.msys2.org/) to open a command-line tool, update the distribution, then install the compilers and tools.
+MinGW provides a free distribution of gcc/g++/gfortran. The standard MinGW distribution is updated fairly rarely and generates only 32-bit executables. We will describe [MinGW-w64](https://www.mingw-w64.org/), a fork of the original project.
+MinGW-w64 can be installed beginning from the [MSYS2](https://www.msys2.org/) project. MSYS2 provides a significant subset of the Cygwin tools. Download and install it.
+Once it has been installed, follow the [instructions](https://www.msys2.org/) to open a command-line tool, update the distribution, then install the compilers and tools.
+
+A discussion of installing MinGW-w64 compilers for use with VSCode has been posted by Microsoft [here](https://code.visualstudio.com/docs/cpp/config-mingw).
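For reference, a minimal sketch of the update-and-install step (illustrative, not part of the committed diff), assuming the default `ucrt64` environment; package names follow MSYS2's documented naming scheme:

```
# Update the MSYS2 base distribution; rerun if the shell asks to restart
pacman -Syu
# Install the C/C++ compilers and related tools for the UCRT64 environment
pacman -S mingw-w64-ucrt-x86_64-toolchain
```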
_Intel oneAPI_
First install [Visual Studio](https://visualstudio.microsoft.com/vs/community/).
content/courses/fortran-introduction/setting_up.md (+5 -4)
@@ -54,13 +54,14 @@ Recently, Microsoft has released the Windows Subsystem for Linux ([WSL](https://
A drawback to both Cygwin and the WSL is portability of executables. Cygwin executables must be able to find the Cygwin DLL and are therefore not standalone.
WSL executables only run on the WSL. For standalone, native binaries a good choice is _MinGW_. MinGW is derived from Cygwin.

-MinGW provides a free distribution of gcc/g++/gfortran. The standard MinGW distribution is updated fairly rarely and generates only 32-bit executables. We will describe [MinGW-w64](http://mingw-w64.org/doku.php), a fork of the original project.
-MinGW-w64 can be installed beginning from the [MSYS2](https://www.msys2.org/) project. MSYS2 provides a significant subset of the Cygwin tools.
-Download and install it.
-Once it has been installed, follow the [instructions](https://www.msys2.org/) to open a command-line tool, update the distribution, then install the compilers and tools.
+MinGW provides a free distribution of gcc/g++/gfortran. The standard MinGW distribution is updated fairly rarely and generates only 32-bit executables. We will describe [MinGW-w64](https://www.mingw-w64.org/), a fork of the original project.
+MinGW-w64 can be installed beginning from the [MSYS2](https://www.msys2.org/) project. MSYS2 provides a significant subset of the Cygwin tools. Download and install it.
+Once it has been installed, follow the [instructions](https://www.msys2.org/) to open a command-line tool, update the distribution, then install the compilers and tools. For Fortran users, the `mingw64` repository may be preferable to the `ucrt64` repo. To find packages, visit their [repository](https://packages.msys2.org/package/).
+
+A discussion of installing MinGW-w64 compilers for use with VSCode has been posted by Microsoft [here](https://code.visualstudio.com/docs/cpp/config-mingw). To use mingw64 rather than ucrt64, simply substitute the text string. Fortran users should install both the C/C++ and Fortran extensions for VSCode.
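A hedged sketch of the Fortran-oriented variant (illustrative, not part of the committed diff); the `mingw-w64-x86_64-` prefix selects the `mingw64` environment, and the package names are taken from the MSYS2 repositories:

```
# Install the toolchain and gfortran from the mingw64 repository
pacman -S mingw-w64-x86_64-toolchain mingw-w64-x86_64-gcc-fortran
```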
_Intel oneAPI_
Download and install the basic toolkit and, for Fortran, the HPC toolkit.
content/courses/parallel-computing-introduction/distributed_mpi_global3.md (+25 -5)
@@ -12,7 +12,7 @@ In many-to-many collective communications, all processes in the communicator gro
## Barrier
-When `MPI_Barrier` is invoked, each process pauses until all processes in the communicator group have called this function. The `MPI_BARRIER` is used to synchronize processes. It should be used sparingly, since it "serializes" a parallel program. Most of the global communication routines contain an implicit barrier so an explicit `MPI_Barrier` is not required.
+When `MPI_Barrier` is invoked, each process pauses until all processes in the communicator group have called this function. `MPI_Barrier` is used to synchronize processes. It should be used sparingly, since it "serializes" a parallel program. Most of the collective communication routines synchronize the processes implicitly, so an explicit `MPI_Barrier` is usually not required.
### C++
```c++
@@ -65,11 +65,11 @@ As the examples in the previous chapter demonstrated, when MPI_Reduce is called,
The syntax of `MPI_Allreduce` is identical to that of `MPI_Reduce` but with the root argument omitted.
```c
-intMPI_Allreduce(void *operand, void *result, int count, MPI_Datatype type, MPI_Op operator, MPI_Comm comm );
+int MPI_Allreduce(void *operand, void *result, int ncount, MPI_Datatype type, MPI_Op operator, MPI_Comm comm );
```
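As a brief, hedged illustration (not part of this page's diff): a complete program in which every rank contributes a local value and all ranks receive the sum; the variable names are illustrative.

```c++
#include <mpi.h>
#include <iostream>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double partial = rank + 1.0;   // stand-in for a locally computed result
    double total;
    // Like MPI_Reduce, but the result arrives at every rank (no root argument)
    MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    std::cout << "Rank " << rank << " sees total " << total << std::endl;
    MPI_Finalize();
    return 0;
}
```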
@@ -137,7 +137,7 @@ Modify the example gather code in your language of choice to perform an Allgathe
In MPI_Alltoall, each process sends data to every other process. Let us consider the simplest case, when each process sends one item to every other process. Suppose there are three processes and rank 0 has an array containing the values \[0,1,2\], rank 1 has \[10,11,12\], and rank 2 has \[20,21,22\]. Rank 0 keeps (or sends to itself) the 0 value, sends 1 to rank 1, and 2 to rank 2. Rank 1 sends 10 to rank 0, keeps 11, and sends 12 to rank 2. Rank 2 sends 20 to rank 0, 21 to rank 1, and keeps 22.
-distributed_mpi_global2.md:{{< figure src="/courses/parallel-computing-introduction/img/alltoall.png" caption="Alltoall. Note that as depicted, the values in the columns are transposed to values as rows." >}}
+{{< figure src="/courses/parallel-computing-introduction/img/alltoall.png" caption="Alltoall. Note that as depicted, the values in the columns are transposed to values as rows." >}}
### C++
{{< spoiler text="alltoall.cxx" >}}
@@ -158,4 +158,24 @@ Two more general forms of alltoall exist; `MPI_Alltoallv`, which is similar to `
## MPI_IN_PLACE
-We often do not need the send buffer once the message has been communicated, and allocating two buffers wastes memory and requires some amount of unneeded communication. Several MPI procedures allow the special receive buffer `MPI_IN_PLACE`. When used, the send buffer variable is overwritten with the transmitted data. The expected send and receive buffers must be the same size for this to be valid.
+We often do not need one buffer once the message has been communicated, and allocating two buffers wastes memory and requires some amount of unneeded communication. MPI collective procedures allow the special buffer `MPI_IN_PLACE`. This special value can be used instead of the receive buffer in `Scatter` and `Scatterv`; in the other collective functions it takes the place of the send buffer. The expected send and receive buffers must be the same size for this to be valid. As usual for mpi4py, the Python name of the variable is `MPI.IN_PLACE`.
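A hedged sketch (illustrative, not from the course files) of an in-place reduction: the array serves as both send and receive buffer, so no second buffer is allocated.

```c++
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double values[3] = {rank + 0.0, rank + 1.0, rank + 2.0};
    // MPI_IN_PLACE replaces the send buffer; `values` is overwritten
    // with the elementwise sum across all ranks.
    MPI_Allreduce(MPI_IN_PLACE, values, 3, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```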
content/courses/parallel-computing-introduction/distributed_mpi_types.md (+57 -1)
@@ -1,5 +1,5 @@
---
-title: "MPI Types"
+title: "MPI Derived Types"
toc: true
type: docs
weight: 220
@@ -9,3 +9,59 @@ menu:
---
Modern programming languages provide data structures that may be called "structs," or "classes," or "types." These data structures permit grouping of different quantities under a single variable name.
+
+MPI also provides a general type that enables programmer-defined datatypes. Unlike arrays, which must be adjacent in memory, MPI derived datatypes may consist of elements in noncontiguous locations in memory.
+
+While more general derived MPI datatypes are available, one of the most commonly used is the `MPI_TYPE_VECTOR`. This creates a group of elements of size _blocklength_ separated by a constant interval, called the _stride_, in memory. Examples would be generating a type for columns in a row-major-oriented language, or rows in a column-major-oriented language.
+
+{{< figure src="/courses/parallel-computing-introduction/img/mpi_vector_type.png" caption="Layout in memory for vector type. In this example, the blocklength is 4, the stride is 6, and the count is 3." >}}
+For both C++ and Fortran, `ncount`, `blocklength`, and `stride` must be integers. The `oldtype` is a pre-existing type, usually a built-in MPI type such as MPI_FLOAT or MPI_REAL. For C++ the new type would be declared as an `MPI_Datatype`, unless it corresponds to an existing built-in type. For Fortran `oldtype` would be an integer if not a built-in type. The `newtype` is a name chosen by the programmer.
+Our example will construct an $N \times M$ array of floating-point numbers. In C++ and Python we will exchange the "halo" columns using the MPI type, and the rows in the usual way. In Fortran we will exchange "halo" rows with the MPI type and columns with ordinary Sendrecv.
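As a hedged supplement (names and dimensions are illustrative, not the course's actual halo-exchange code), defining a vector type for one column of a row-major array:

```c++
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    const int N = 4, M = 6;        // illustrative array dimensions
    float a[N][M] = {};            // row-major storage in C++
    (void)a;                       // silence unused-variable warning in this sketch
    MPI_Datatype coltype;
    // N blocks of 1 element, with successive blocks M elements apart:
    // the memory layout of a single column of a[N][M].
    MPI_Type_vector(N, 1, M, MPI_FLOAT, &coltype);
    MPI_Type_commit(&coltype);
    // coltype can now be passed as the datatype in point-to-point calls,
    // e.g. MPI_Sendrecv with &a[0][0] and count 1 to exchange a halo column.
    MPI_Type_free(&coltype);
    MPI_Finalize();
    return 0;
}
```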
content/courses/python-high-performance/_index.md (+7 -7)
@@ -23,17 +23,17 @@ For this tutorial, it is assumed that you have experience with programming in Py
## Setup
-To follow along for the [Serial Optimization](#serial-optimization-strategies) and [Multiprocessing](#multiprocessing) examples, you can execute the code examples on your own computer or on UVA's high-performance computing cluster, Rivanna. Examples described in the last section, [Distributed Parallelization](#distributed-parallelization), are best executed on UVA's high-performance computing platform, Rivanna.
+To follow along for the [Serial Optimization](#serial-optimization-strategies) and [Multiprocessing](#multiprocessing) examples, you can execute the code examples on your own computer or on UVA's high-performance computing cluster. Examples described in the last section, [Distributed Parallelization](#distributed-parallelization), are best executed on UVA's high-performance computing platform.
If you are using your local computer, we recommend the Anaconda distribution (<a href="https://www.anaconda.com/distribution/" target="_blank">download</a>) to run the code examples. Anaconda provides multiple Python versions, an integrated development environment (IDE) with editor and profiler, Jupyter notebooks, and an easy-to-use package and environment manager.
-**If you are using Rivanna, follow these steps to verify that your account is active:**
+**If you are using UVA HPC, follow these steps to verify that your account is active:**
-### Check your Access to Rivanna
+### Check your Access to UVA HPC
-1. In your web browser, got to <a href="https://rivanna-desktop.hpc.virginia.edu" target="_blank">rivanna-desktop.hpc.virginia.edu</a>. This takes you to our FastX web portal that lets you launch a remote desktop environment on Rivanna. If you are off Grounds, you must be connected through the UVA Anywhere VPN client.
+1. In your web browser, go to <a href="https://fastx.hpc.virginia.edu" target="_blank">fastx.hpc.virginia.edu</a>. This takes you to our FastX web portal that lets you launch a remote desktop environment on a frontend. If you are off Grounds, you must be connected through the UVA Anywhere VPN client.
-2. Log in with your UVA credentials and start a MATE session. You can find a more detailed description of the Rivanna login procedure <a href="https://www.rc.virginia.edu/userinfo/rivanna/logintools/fastx/" target="_blank">here</a>.
+2. Log in with your UVA credentials and start a MATE session. You can find a more detailed description of the FastX login procedure <a href="https://www.rc.virginia.edu/userinfo/rivanna/logintools/fastx/" target="_blank">here</a>.
* **User name:** Your UVA computing id (e.g. mst3k; don't enter your entire email address)
* **Password:** Your UVA Netbadge password
@@ -44,14 +44,14 @@ python -V
```
You will obtain a response like
```
-Python 3.6.6
+Python 3.11.3
```
Now type
```
spyder &
```
-For Jupyterlab you can use [Open OnDemand](https://rivanna-portal.hpc.virginia.edu). Jupyterlab is one of the Interactive Apps. Note that these apps submit a job to the compute nodes. If you are working on quick development and testing and you wish to use the frontend, to run Jupyter or Jupyterlab on the FastX portal you can run
+For Jupyterlab you can use [Open OnDemand](https://ood.hpc.virginia.edu). Jupyterlab is one of the Interactive Apps. Note that these apps submit a job to the compute nodes. For quick development and testing on the frontend, you can run Jupyter or Jupyterlab from the FastX portal with