
Commit d588991

updated table and slurm scripts for Using Slurm from a Terminal
1 parent eab0001 commit d588991

8 files changed: +19, -18 lines


content/notes/slurm-from-cli/scripts/array.slurm

+2-2
@@ -1,7 +1,7 @@
 #!/bin/bash
 #SBATCH --ntasks=1
-#SBATCH --partition=standard
-#SBATCH -A myalloc
+#SBATCH --partition=interactive
+#SBATCH -A hpc_training
 #SBATCH --time=3:00:00
 #SBATCH -o out%A.%a
 #SBATCH -e err%A.%a
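
For readers trying these templates out: `%A` in the `-o`/`-e` patterns expands to the array's master job ID and `%a` to the index of each array task. A minimal submission sketch, assuming an arbitrary index range of 1-5 supplied on the command line:

```bash
# Submit array.slurm as a job array with (hypothetical) indices 1-5;
# each task then writes out<masterID>.<index> and err<masterID>.<index>.
sbatch --array=1-5 array.slurm

# Show all tasks of the array that belong to the current user.
squeue -u "$USER"
```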

content/notes/slurm-from-cli/scripts/hello.slurm

+2-2
@@ -3,8 +3,8 @@
 #SBATCH --ntasks=1
 #SBATCH --mem=32000 # mb total memory
 #SBATCH --time=1-12:00:00
-#SBATCH --partition=standard
-#SBATCH --account=myalloc
+#SBATCH --partition=interactive
+#SBATCH --account=hpc_training
 
 module purge
 module load anaconda
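
A quick way to confirm that the new partition and account are actually picked up is a sketch like the following, using standard Slurm query tools:

```bash
# Submit the updated script; --parsable makes sbatch print only the numeric job ID.
jobid=$(sbatch --parsable hello.slurm)

# While the job is pending or running, check the partition and account it was given.
scontrol show job "$jobid" | grep -E 'Partition|Account'

# After completion, the same fields are available from the accounting records.
sacct -j "$jobid" --format=JobID,Partition,Account,State
```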

content/notes/slurm-from-cli/scripts/hybrid.slurm

+2-1
@@ -1,8 +1,9 @@
+#!/bin/bash
 #SBATCH -N 3
 #SBATCH --ntasks-per-node=4
 #SBATCH -c 10
 #SBATCH -p parallel
-#SBATCH -A myalloc
+#SBATCH -A hpc_training
 #SBATCH -t 05:00:00
 
 #SBATCH --mail-type=END
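
This preamble describes a hybrid MPI/OpenMP layout: 3 nodes × 4 tasks per node × 10 cores per task, or 120 cores in total. The script body is outside the diff; a typical launch section, sketched here with a hypothetical `./hybrid_app` executable, would be:

```bash
# Give each MPI rank as many OpenMP threads as the cores Slurm assigned to it (10 here).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# srun starts 12 MPI ranks (3 nodes x 4 tasks/node), each running 10 threads.
srun ./hybrid_app
```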

content/notes/slurm-from-cli/scripts/multicore.slurm

+2-2
@@ -1,7 +1,7 @@
 #SBATCH -n 1
 #SBATCH -c 25
-#SBATCH -p standard
-#SBATCH -A myalloc
+#SBATCH -p interactive
+#SBATCH -A hpc_training
 #SBATCH -t 05:00:00
 
 #SBATCH --mail-type=END
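
The multicore template is the shared-memory counterpart: one task with 25 cores, all on a single node. A minimal body for such a script, assuming a hypothetical threaded program `./threaded_app`, might look like:

```bash
# All 25 cores belong to the single task; tell the threading runtime how many to use.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Run the (hypothetical) multithreaded program on the allocated cores.
./threaded_app
```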

content/notes/slurm-from-cli/scripts/multinode.slurm

+1-1
@@ -1,7 +1,7 @@
 #!/bin/bash
 #SBATCH --nodes=3
 #SBATCH --ntasks-per-node=40
-#SBATCH --account=myalloc
+#SBATCH --account=hpc_training
 #SBATCH -p parallel
 #SBATCH -t 10:00:00
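
This template asks for 120 MPI ranks (3 nodes × 40 tasks per node). What usually follows such a preamble, sketched with site-specific module names and a hypothetical `./mpi_app` executable as placeholders, is:

```bash
# Start from a clean environment and load an MPI stack (module names vary by site).
module purge
module load gcc openmpi

# srun launches one copy of the program per task, 120 ranks in total.
srun ./mpi_app
```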

content/notes/slurm-from-cli/scripts/slow.slurm

+2-2
@@ -2,8 +2,8 @@
 #SBATCH --nodes=1
 #SBATCH --ntasks=1
 #SBATCH --time=00:30:00
-#SBATCH --partition=standard
-#SBATCH --account=myalloc
+#SBATCH --partition=interactive
+#SBATCH --account=hpc_training
 
 module purge

content/notes/slurm-from-cli/section1.md

+6-6
@@ -47,13 +47,13 @@ Slurm refers to the root process as a **task**. By default, each task is assigned
 SLURM refers to queues as __partitions__ . We do not have a default partition; each job must request one explicitly.
 
 {{< table >}}
-| Queue Name | Purpose | Job Time Limit | Memory / Node | Cores / Node |
+| Queue Name | Purpose | Job Time Limit | Max Memory / Node / Job | Max Cores / Node |
 | :-: | :-: | :-: | :-: | :-: |
-| standard | For jobs on a single compute node | 7 days | 384 GB | 40 |
-| gpu | For jobs that can use general purpose GPU’s<br /> (P100,V100,A100) | 3 days | 256 GB<br />384 GB<br />1 TB | 28<br />40<br />128 |
-| parallel | For large parallel jobs on up to 50 nodes (<= 1500 CPU cores) | 3 days | 384 GB | 40<br /> |
-| largemem | For memory intensive jobs | 4 days | 768 GB<br />1 TB | 16 |
-| dev | To run jobs that are quick tests of code | 1 hour | 128 GB | 40 |
+| standard | For jobs on a single compute node | 7 days | 375 GB | 37 |
+| gpu | For jobs that can use general purpose GPU’s<br /> (A40,A100,A6000,V100,RTX3090) | 3 days | 1953 GB | 125 |
+| parallel | For large parallel jobs on up to 50 nodes (<= 1500 CPU cores) | 3 days | 375 GB | 40<br /> |
+| largemem | For memory intensive jobs | 4 days | 768 GB<br />1 TB | 45 |
+| interactive | For quick interactive sessions (up to two RTX2080 GPUs) | 12 hours | 216 GB | 37 |
 {{< /table >}}
 
 To see an online list of available partitions, from a command line type
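
The command referred to in that last context line falls outside the hunk; for reference, stock Slurm reports the same partition information with `sinfo` (a site may also provide its own wrapper):

```bash
# List all partitions with their time limits and node states (standard Slurm).
sinfo

# A compact one-line-per-partition summary.
sinfo --summarize
```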

content/notes/slurm-from-cli/section2.md

+2-2
@@ -50,8 +50,8 @@ The lines starting with `#SBATCH` are the resource requests. They are called "p
 #SBATCH --ntasks=1
 #SBATCH --mem=32000 # mb total memory
 #SBATCH --time=1-12:00:00
-#SBATCH --partition=standard
-#SBATCH --account=myalloc
+#SBATCH --partition=interactive
+#SBATCH --account=hpc_training
 ```
 Here we are requesting
 * 1 node, 1 task
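
Two details in that preamble are worth spelling out: `--time=1-12:00:00` uses Slurm's `days-hours:minutes:seconds` format (1 day 12 hours here), and `--mem=32000` is taken as megabytes, matching the script's own "mb total memory" comment. Any `#SBATCH` directive can also be overridden at submission time, which is a convenient way to try the new partition/account pair without editing the file; a sketch:

```bash
# Command-line options take precedence over the matching #SBATCH lines in the script.
sbatch --partition=interactive --account=hpc_training --time=0-02:00:00 hello.slurm

# The memory request can carry an explicit unit instead of relying on the MB default.
sbatch --mem=32G hello.slurm
```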
