Before we submit jobs to the scheduler, we are going to run them interactively.
First we allocate one node exclusively from the head node.
salloc -N1 --exclusive
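Optionally, once the allocation is granted, you can check which node was assigned, for example with standard SLURM commands:
squeue -u $USER
scontrol show hostnames "$SLURM_NODELIST"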
Once the node is ready, we open an interactive shell on it. To carry over the environment variables that SLURM provides, we use a small helper script.
cat > slurm-ssh.sh << \EOF
#!/bin/bash
# Usage: salloc <options> slurm-ssh
# Pick the first node of the allocation
first=$(scontrol show hostname $SLURM_NODELIST | head -n1)
# Collect the SLURM_* variables so they can be passed along to the remote shell
env=$(printenv | grep SLURM | sed -rn "s/=(.*)/='\1'/p" | paste -d' ' -s)
# Open an interactive shell on that node with the SLURM environment set
exec ssh $first -t "$env zsh"
EOF
chmod +x slurm-ssh.sh
./slurm-ssh.sh
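The new shell should now be running on the first compute node of the allocation with the SLURM variables carried over; a quick way to verify this, for example:
hostname
printenv | grep SLURM | head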
To run GROMACS we load the Spack modules for Intel MPI and GROMACS.
module load intelmpi $(module avail gromacs 2>&1 | tail -n1)
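Optionally, confirm that the MPI-enabled GROMACS binary is now on the PATH, for example:
which gmx_mpi
gmx_mpi --version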
time mpirun -ppn 6 gmx_mpi mdrun -ntomp 6 -s /fsx/input/gromacs/benchRIB.tpr
With 6 MPI ranks of 6 OpenMP threads each, the run uses the 36 physical cores of a c5n.18xlarge and should take around 15 minutes.
$ time mpirun -ppn 6 gmx_mpi mdrun -ntomp 6 -s /fsx/input/gromacs/benchRIB.tpr
               Core t (s)   Wall t (s)        (%)
       Time:    29931.822      831.441     3600.0
                 (ns/day)    (hour/ns)
Performance:        4.157        5.773
mpirun -ppn 6 gmx_mpi mdrun -ntomp 6 -s /fsx/input/gromacs/benchRIB.tpr 0.02s user 0.01s system 0% cpu 14:37.98 total
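When you are done, exiting the shell on the compute node and then the salloc shell on the head node should release the allocation:
exit   # leave the compute node
exit   # end the salloc session and free the node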