Slurm Getting Started / How-Tos

Quick demos for submitting jobs to Slurm under various circumstances.

This document provides some quick reference material for getting started with the basics of writing and submitting scripts to a Slurm-based computational cluster.

The five scripts below should all be created or downloaded into the same folder, and the submit commands should be run from that base directory.
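As a rough sketch of how the pieces fit together (here submit.sh and 03nodeinfo.sh are placeholder names for the driver script and the node-info script; use whatever names you actually saved them under):

```bash
# the driver script and the python helper are executed directly, so they need the exec bit
chmod +x submit.sh pyenvprint.py

# queue the dependency demo: job 1 runs first, job 2 waits on it
./submit.sh

# submit the node-info script on its own...
sbatch 03nodeinfo.sh
# ...or as a job array (see the array example further down)
sbatch --array=0-3 03nodeinfo.sh
```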

The first script is the driver you run from the command line; it submits the two jobs and wires up the dependency between them:

```bash
#!/usr/bin/env bash
# This will submit a script via sbatch and capture the job id as a variable for use in bash
job1taskid=$(sbatch --parsable ./01job.sh)
# This then submits a job that waits for the first job to finish before running
sbatch --dependency=$job1taskid ./02job.sh
```
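If you want the second job to run only when the first one succeeds, rather than whenever it finishes, sbatch also accepts an explicit dependency type. A minimal variant of the same driver script using afterok:

```bash
#!/usr/bin/env bash
# only start job 2 if job 1 exits with a zero (success) status
jid=$(sbatch --parsable ./01job.sh)
sbatch --dependency=afterok:$jid ./02job.sh
```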
The first job script, 01job.sh, just prints a line and sleeps so there is time to watch the dependency take effect:

```bash
#!/usr/bin/env bash
## This is just a small test job to print some output and sleep so we can demonstrate dependencies
#SBATCH -J 'initialjob'
#SBATCH -o slurm-%j.out
#SBATCH -p all
#SBATCH -t 5
echo "INITIAL TASK STARTS"
sleep 60
```
The second job script, 02job.sh, is scheduled at the same time as 01job.sh but waits for it to finish before running:

```bash
#!/usr/bin/env bash
## This job will be scheduled at the same time as 01job.sh but will wait for that to finish
## before running.
#SBATCH -J 'secondjob'
#SBATCH -o slurm-%j.out
#SBATCH -p all
#SBATCH -t 5
echo "second task ran"
```
The node-info script dumps some basic information about where and how the job ran, and is also set up to behave sensibly when submitted as a job array:

```bash
#!/usr/bin/env bash
# This script dumps some basic information to an output file when submitted via sbatch.
# Name the job nodeinfo and place its output in a file named after the job/array ids.
# Set the partition to 'all'; this isn't strictly necessary but it's good practice.
# Set time to 5 minutes so jobs get killed if something weird happens.
#SBATCH -J 'nodeinfo'
#SBATCH -o slurm-%A_%a.out
#SBATCH -p all
#SBATCH -t 5
#SBATCH -c 2
# Clean up the environment so the submission environment doesn't impact the job
module purge
module load suite2p/0.10.2
# Dump some info about the specific host the job is being run on
echo "In the directory: $(pwd)"
echo "As the user: $(whoami)"
echo "on host: $(hostname)"
# Dump the job array information if it's available
echo "Array ID: $SLURM_ARRAY_JOB_ID"
echo "Array Index: $SLURM_ARRAY_TASK_ID"
# Test to make sure you are getting all the cpu cores you asked for
echo "With access to cpu id(s): "
grep Cpus_allowed_list /proc/$$/status
# Run the small python demo script (it must be executable), which demonstrates
# accessing environment variables inside a python script
./pyenvprint.py
```
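Because the output file pattern (%A_%a) and the echoed SLURM_ARRAY_* variables are array-oriented, this script is most interesting when submitted as a job array. Assuming it was saved as 03nodeinfo.sh (a placeholder name):

```bash
# run the script as four array tasks, indices 0 through 3; each task gets its own
# SLURM_ARRAY_TASK_ID and its own slurm-<arrayjobid>_<index>.out file
sbatch --array=0-3 03nodeinfo.sh
```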
Finally, pyenvprint.py is the small python script called at the end of the node-info job; it shows how to read Slurm environment variables from inside python:

```python
#!/usr/bin/env python
# this brings in tools to navigate the OS directly
import os

# this captures the array task id if it is set as an environment variable
taskid = os.getenv('SLURM_ARRAY_TASK_ID')
# check if we're running an array job or not to decide what to print into the slurm output
if taskid:
    print('array task id: ', taskid)
else:
    print(os.environ)
```