Using the GPU nodes on Snowy
Snowy has a moderate number of nodes, each with a single Nvidia T4 GPU installed. This page contains instructions on how to use them.
Who has access?
How do we access them?
#!/bin/bash
#SBATCH -J jobname
#SBATCH -A snicxxxx-x-xx
#SBATCH -t 03-00:00:00
#SBATCH --exclusive
#SBATCH -p node
#SBATCH -N 1
#SBATCH -M snowy
#SBATCH --gres=gpu:1
#SBATCH --gpus-per-node=1
##for jobs shorter than 15 min (max 4 nodes):
#SBATCH --qos=short
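The script is submitted like any other batch job. A minimal sketch, assuming you saved it as jobscript.sh (a placeholder name):

sbatch jobscript.sh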
salloc -A staff -N 1 -M snowy --gres=gpu:1 --gpus-per-node=1
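To verify that the GPU is actually available inside an interactive allocation, you can run nvidia-smi on the allocated node. This is a minimal sketch; the exact output (driver version, memory use) will differ:

srun nvidia-smi
# a single Tesla T4 should be listed if the GPU was granted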
Note 1. The time limit for jobs using GPUs is currently 3 days.
Note 2. The gres=mps option is mainly intended for students in courses whose jobs do not use the GPUs all the time. Most other users will probably not need this option.
#!/bin/bash
#SBATCH -J jobname
#SBATCH -A snicxxxx-x-xx
#SBATCH -t 03-00:00:00
#SBATCH -p core
#SBATCH -n 2
#SBATCH -M snowy
#SBATCH --gres=gpu:1
#SBATCH --gres=mps:50
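The number after mps: is a share of the GPU's compute capacity, so several such jobs can run on one T4 at the same time (here roughly half the GPU, assuming the common configuration of 100 MPS units per GPU). As a rough check from inside the job, Slurm's MPS support normally exports the assigned share in an environment variable; this is a sketch and may depend on the local Slurm configuration:

echo $CUDA_MPS_ACTIVE_THREAD_PERCENTAGE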
How do we use CUDA and related software?
module use /sw/EasyBuild/snowy/modules/all/
module load intelcuda/2019b
module use /sw/EasyBuild/snowy/modules/all
module load fosscuda/2019b
module load fosscuda/2018b
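Once one of these modules is loaded, the CUDA compiler nvcc is available. A minimal sketch of compiling and running your own code (saxpy.cu is just a placeholder for your source file):

nvcc --version              # confirm the compiler from the module is picked up
nvcc -O2 -o saxpy saxpy.cu
# run the binary inside a job or allocation that requested --gres=gpu:1
./saxpy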
Where can I read more?
The GPU nodes are handled through Slurm's standard support for configuring GPUs. Technically the GPUs are configured as what is known as a "gres" (generic resource), which is then tracked using "tres" (trackable resources). This means that most GPU-related options you can find in Slurm's documentation are expected to work.
Slurm's GPU-related documentation is here: https://slurm.schedmd.com/gres.html
You can also search for "GPU" in the sbatch man-page on Snowy.
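For example, depending on the Slurm version installed, generic GPU options from Slurm's documentation can be combined with the Snowy-specific settings above. A sketch, not an exhaustive list:

#SBATCH --gpus-per-node=1   # equivalent to --gres=gpu:1 on these single-GPU nodes
#SBATCH --gpus-per-task=1   # request one GPU per task (relevant when a job spans several nodes)
#SBATCH --gpu-bind=closest  # bind each task to the GPU closest to its CPUs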