
Cirrus migration to E1000 system

A full service maintenance took place on Tuesday 12th March 2024 from 0900 to 1700 GMT, during which some major changes were made to the Cirrus service.

Tip

If you need help or have questions about the Cirrus E1000 migration, please contact the Cirrus service desk.

Change of authentication protocol

We changed the authentication protocol on Cirrus from LDAP to FreeIPA.

This change was transparent to users, but you may have noticed a change from username@cirrus to username@eidf within your SAFE account.

You should be able to connect using your existing Cirrus authentication factors, i.e. your SSH key pair and your TOTP token.

If you do experience issues, please reset your tokens and try to reconnect. If problems persist, please contact the service desk.
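
For reference, a typical connection looks like the following; the key file path here is illustrative, so substitute your own username and key:

ssh -i ~/.ssh/id_rsa_cirrus username@login.cirrus.ac.uk

You will then be prompted for your TOTP code as usual.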

Further details can be found in Connecting to Cirrus.

New /work file system

We replaced the existing Lustre /work file system with a new, more performant Lustre file system, the E1000.

The old /work file system will be available read-only, and we ask you to copy any files you require onto the new /work file system.

The old read-only file system will be removed on 15th May 2024, so please ensure all data is retrieved by then.

For user username in project x01, to copy data from
/mnt/lustre/indy2lfs/work/x01/x01/username/directory_to_copy
to
/work/x01/x01/username/destination_directory
you would run:

cp -r /mnt/lustre/indy2lfs/work/x01/x01/username/directory_to_copy \
/work/x01/x01/username/destination_directory
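
For large transfers, rsync is an alternative to cp that reports progress and can resume if interrupted; this is a generic sketch rather than a Cirrus-specific recommendation (the trailing slashes copy the directory contents into the destination):

rsync -a --progress /mnt/lustre/indy2lfs/work/x01/x01/username/directory_to_copy/ \
    /work/x01/x01/username/destination_directory/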

Further details can be found in Data Management and Transfer on Cirrus.

Note

Slurm Pending Jobs
As the underlying pathname for /work will change with the addition of the new file system, all pending jobs in the Slurm queue will be removed during the migration. When the service is returned, please resubmit your Slurm jobs to Cirrus.
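
When the service returns, resubmission is a standard sbatch call; for example (the script name here is illustrative):

sbatch myjob.slurm    # resubmit the job script
squeue -u $USER       # confirm the job is back in the queue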

CSE Module Updates

Our Computational Science and Engineering (CSE) team have taken the opportunity of the new file system's arrival to update modules and to remove older versions. A full list of the changes to the modules can be found below.

Please contact the service desk if you have concerns about the removal of any of the older modules.
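
To check which versions of a package are now available, and to load one, the standard module commands apply; the fftw version shown is one of the replacements listed below:

module avail fftw
module load fftw/3.3.10-gcc10.2-mpt2.25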

Removed modules

Each entry below lists the removed package/module(s), followed by advice for users.

altair-hwsolvers/13.0.213

altair-hwsolvers/14.0.210

Please contact the service desk if you wish to use Altair HyperWorks.

ansys/18.0

ansys/19.0

Please contact the service desk if you wish to use ANSYS Fluent.

autoconf/2.69

Please use autoconf/2.71

bison/3.6.4

Please use bison/3.8.2

boost/1.67.0

Please use boost/1.84.0

boost/1.73.0

Please use boost/1.84.0

cmake/3.17.3

cmake/3.22.1

Please use cmake/3.25.2

CUnit/2.1.3

Please contact the service desk if you wish to use CUnit.

dolfin/2019.1.0-intel-mpi

dolfin/2019.1.0-mpt

DOLFIN is no longer supported and will not be replaced.

eclipse/2020-09

Please contact the service desk if you wish to use Eclipse.

expat/2.2.9

Please use expat/2.6.0

fenics/2019.1.0-intel-mpi

fenics/2019.1.0-mpt

FEniCS is no longer supported and will not be replaced.

fftw/3.3.8-gcc8-ompi4

fftw/3.3.8-intel19

fftw/3.3.9-ompi4-cuda11-gcc8 

fftw/3.3.8-intel18   

fftw/3.3.9-impi19-gcc8  

fftw/3.3.10-intel19-mpt225   

fftw/3.3.10-ompi4-cuda116-gcc8

Please use one of the following:

fftw/3.3.10-gcc10.2-mpt2.25

fftw/3.3.10-gcc10.2-impi20.4

fftw/3.3.10-gcc10.2-ompi4-cuda11.8

fftw/3.3.10-gcc12.3-impi20.4

fftw/3.3.10-intel20.4-impi20.4

flacs/10.9.1

flacs-cfd/20.1

flacs-cfd/20.2

flacs-cfd/21.1

flacs-cfd/21.2

flacs-cfd/22.1


Please contact the service desk if you wish to use FLACS.

forge/24.0

Please use forge/24.0

gcc/6.2.0

Please use gcc/8.2.0 or later

gcc/6.3.0

Please use gcc/8.2.0 or later

gcc/12.2.0-offload

Please use gcc/12.3.0-offload

gdal/2.1.2-gcc

gdal/2.1.2-intel 

gdal/2.4.4-gcc

Please use gdal/3.6.2-gcc

git/2.21.0

Please use git/2.37.3

gmp/6.2.0-intel 

gmp/6.2.1-mpt

gmp/6.3.0-mpt

Please use gmp/6.3.0-gcc or gmp/6.3.0-intel 

gnu-parallel/20200522-gcc6

Please use gnu-parallel/20240122-gcc10

gromacs/2022.1
gromacs/2022.1-gpu
gromacs/2022.3-gpu

Please use one of:
gromacs/2023.4
gromacs/2023.4-gpu

hdf5parallel/1.10.4-intel18-impi18

Please use hdf5parallel/1.14.3-intel20-impi20

hdf5parallel/1.10.6-gcc6-mpt225

Please use hdf5parallel/1.14.3-gcc10-mpt225

hdf5parallel/1.10.6-intel18-mpt225

Please use hdf5parallel/1.14.3-intel20-mpt225

hdf5parallel/1.10.6-intel19-mpt225

Please use hdf5parallel/1.14.3-intel20-mpt225

hdf5serial/1.10.6-intel18

Please use hdf5serial/1.14.3-intel20

horovod/0.25.0

horovod/0.25.0-gpu

horovod/0.26.1-gpu

Please use one of the pytorch or tensorflow modules

htop/3.1.2 

Please use htop/3.2.1 

Intel 18.0 compilers, etc.

Please use Intel 19.5 or later, or oneAPI

Intel 19.0 compilers, etc.

Please use Intel 19.5 or later

lammps/23Jun2022_intel19_mpt
lammps/8Feb2023-gcc8-impi
lammps/23Sep2023-gcc8-impi
lammps/8Feb2023-gcc8-impi-cuda118
lammps/23Sep2023-gcc8-impi-cuda118

Please use one of:

lammps/15Dec2023-gcc10.2-impi20.4
lammps-gpu/15Dec2023-gcc10.2-impi20.4-cuda11.8

libxkbcommon/1.0.1

Please contact the service desk if you wish to use libxkbcommon.

libnsl/1.3.0 

Please contact the service desk if you wish to use libnsl.

libpng/1.6.30

This is no longer supported as the central module.

libtirpc/1.2.6

Please contact the service desk if you wish to use libtirpc.

libtool/2.4.6

Please use libtool/2.4.7

nco/4.9.3

nco/4.9.7

Please use nco/5.1.9

ncview/2.1.7

Please use ncview/2.1.10

netcdf-parallel/4.6.2-intel18-impi18

Please use netcdf-parallel/4.9.2-intel20-impi20

netcdf-parallel/4.6.2-intel19-mpt225

Please use netcdf-parallel/4.9.2-intel20-mpt225

nvidia/cudnn/8.2.1-cuda-11.6

nvidia/cudnn/8.9.4-cuda-11.6

nvidia/cudnn/8.9.7-cuda-11.6

Please use nvidia/cudnn/8.6.0-cuda-11.6

nvidia/nvhpc/22.11-no-gcc

Please use nvidia/nvhpc/22.11

nvidia/tensorrt/7.2.3.4

Please use nvidia/tensorrt/8.4.3.1-u2

openfoam/v8.0

Please consider a later version, e.g., v10.0

openfoam/v9.0

Please consider a later version, e.g., v11.0

openfoam/v2006

Please consider a later version, e.g., v2306

openmpi/4.1.2-cuda-11.6

openmpi/4.1.4

openmpi/4.1.4-cuda-11.6

openmpi/4.1.4-cuda-11.6-nvfortran

openmpi/4.1.4-cuda-11.8

openmpi/4.1.4-cuda-11.8-nvfortran

openmpi/4.1.5

openmpi/4.1.5-cuda-11.6


Please use one of the following:

openmpi/4.1.6

openmpi/4.1.6-cuda-11.6

openmpi/4.1.6-cuda-11.6-nvfortran

openmpi/4.1.6-cuda-11.8

openmpi/4.1.6-cuda-11.8-nvfortran


petsc/3.13.2-intel-mpi-18

petsc/3.13.2-mpt

Please contact the service desk if you require a more recent version of PETSc.

pyfr/1.14.0-gpu

Please use pyfr/1.15.0-gpu

pytorch/1.12.1

pytorch/1.12.1-gpu

Please use one of the following:

pytorch/1.13.1

pytorch/1.13.1-gpu

quantum-espresso/6.5-intel-19

Please use quantum-espresso/6.5-intel-20.4

specfem3d

Please contact the service desk if you wish to use SPECFEM3D.

starccm+/14.04.013-R8

starccm+/14.06.013-R8 → 2019.3.1-R8

starccm+/15.02.009-R8 → 2020.1.1-R8 

starccm+/15.04.010-R8 → 2020.2.1-R8 

starccm+/15.06.008-R8 → 2020.3.1-R8

starccm+/16.02.009 → 2021.1.1

Please contact the service desk if you wish to use STAR-CCM+.

tensorflow/2.9.1-gpu

tensorflow/2.10.0

tensorflow/2.11.0-gpu

Please use one of the following:

tensorflow/2.13.0

tensorflow/2.15.0

tensorflow/2.13.0-gpu

ucx/1.9.0

ucx/1.9.0-cuda-11.6

ucx/1.9.0-cuda-11.8

Please use one of the following:

ucx/1.15.0

ucx/1.15.0-cuda-11.6

ucx/1.15.0-cuda-11.8

vasp-5.4.4-intel19-mpt220


zlib/1.2.11

Please use zlib/1.3.1