News - 2026-01-12
Happy New Year, and welcome to the first HPC newsletter of 2026.
In this newsletter:
Access to External GPU-Accelerated HPC Resources
In addition to the GPU nodes in Stanage, University of Sheffield researchers can also access a range of external HPC systems with GPU capability for suitable workloads. Available options include:
N8 Bede GPU cluster (V100 & Grace Hopper) – Sheffield is a partner organisation, and project resource requests can be made via the University’s shared facilities route.
National and European facilities (AIRR / Isambard-AI, Dawn, EuroHPC, and others) – Access is available through competitive calls, each with specific eligibility and workload requirements; see Current opportunities for details.
If you are interested in applying for a specific external resource call, please contact IT Services’ Research and Innovation team for support with your application.
Note
Applications must be submitted by University staff. PGRs may access these resources via their PI once an award has been granted.
Scheduled Maintenance
Routine Stanage Maintenance (proposed)
We intend to introduce periodic, pre-announced maintenance windows on Stanage to allow for security updates, firmware upgrades, and general system housekeeping.
A two-day maintenance window is currently planned: from 08:00 on 9th February 2026 until 17:00 on 10th February 2026. Further details have been shared via email.
Network maintenance – 17th January 2026
There will be network infrastructure maintenance on Saturday 17th January, during which connectivity to Stanage may be briefly disrupted. This may affect SSH access and file transfers for up to one hour at some point during the day. Further details have been shared via email.
Upcoming change: salloc behaviour for interactive jobs
The behaviour of salloc will be updated to align with Slurm’s recommended usage, so that salloc behaves much as srun is currently used for interactive jobs.
The main benefit of using salloc for interactive work is that it allows you to launch srun commands within an interactive allocation, which is not possible when using srun alone. This enables, for example, interactive testing of MPI workloads.
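As a rough illustration of what this will look like once the change is in place (the flag values, module name, and executable below are illustrative assumptions, not Stanage-specific guidance):

    # Request an interactive allocation (resource values are examples only)
    salloc --nodes=2 --ntasks-per-node=4 --time=01:00:00

    # Inside the allocation, launch job steps with srun, e.g. to test an MPI program
    module load OpenMPI        # illustrative module name
    srun ./my_mpi_program      # hypothetical executable, runs across the allocated tasks

    # Leave the allocation when finished
    exit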
The change is planned for the coming months, and users will be notified once it has been applied.
More H100 NVL GPU Nodes on Stanage
In addition to the two H100 GPU nodes added last month, a further two new H100 GPU nodes are now available on Stanage in the gpu-h100-nvl partition. Each node is equipped with 4x NVIDIA H100 NVL GPUs, bringing the total to 16x H100 NVL GPUs.
These GPUs are designed for high-performance computing and AI workloads, delivering significant performance improvements over the older NVIDIA H100 PCIe GPUs. Please see New H100 NVL GPU Nodes on Stanage for more details, including earlier benchmarking results.
We’d love your feedback
If you’ve tried the new H100 NVL nodes, we’d be keen to hear how they perform for your workloads. Any comparisons, timings, or benchmark figures you’re happy to share would be very welcome.
Full hardware specifications are available at GPU nodes, with usage guidance in Using GPUs on Stanage.
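As a minimal sketch of requesting one of these GPUs in a batch script (the partition name is taken from above; other flags such as project account or QoS are assumptions, so please check Using GPUs on Stanage for the exact requirements):

    #!/bin/bash
    #SBATCH --partition=gpu-h100-nvl   # the new H100 NVL partition
    #SBATCH --gres=gpu:1               # request a single GPU (up to 4 per node)
    #SBATCH --time=01:00:00            # illustrative walltime

    nvidia-smi                         # confirm which GPU has been allocated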
To support the new NVIDIA H100 NVL GPUs hosted on Intel-based nodes, we have updated the icelake software stack to include the same GPU-related packages as the znver3 stack.
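If you want to confirm what is now visible, a quick (illustrative) check from a session on one of the Intel-based GPU nodes is:

    # Module names below are examples; the exact list depends on the stack
    module avail CUDA
    module avail NCCL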
A new Blackwell-based GPU node (1× server with 8× RTX 6000 Pro GPUs) has been ordered. It will be some time before this is operational, and support under EL7 is unlikely.
H100 NVL GPU node gpu31 returned to service
Sub-NUMA clustering, an Intel feature on modern processors, was inadvertently enabled on Stanage node gpu31. The node was taken out of service on 23rd December 2025 and the feature disabled.
While the node was drained, we re-ran a small set of NCCL benchmarks. The configuration changes made during maintenance resulted in an improvement of ~18% in basic NCCL tests.
Further testing also showed that moving from CUDA 11.x to CUDA 12.1 / NCCL 2.18.3 provides additional gains, with average NCCL bus bandwidth increasing by ~31–37% and peak bandwidth by ~44–47% in these tests.
The node is now configured consistently with the other H100-NVL nodes and shows a clear performance improvement.
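NCCL bus bandwidth of this kind can be measured with the nccl-tests suite; a hedged sketch follows (the CUDA module version matches the text above, but the build path, partition, and flags are assumptions):

    # Load a CUDA 12.1-era toolchain (illustrative module name)
    module load CUDA/12.1.1

    # Run the all-reduce bandwidth test across the 4 GPUs in a node
    # (assumes nccl-tests has been built in ./build)
    srun --partition=gpu-h100-nvl --gres=gpu:4 \
         ./build/all_reduce_perf -b 8 -e 1G -f 2 -g 4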
New software installations
We have recently installed the following new software on Stanage:
Icelake
Code_Saturne/9.0.1-foss-2022b (and earlier versions) : Code_Saturne is a general purpose Computational Fluid Dynamics (CFD) software package https://www.code-saturne.org/
FMM3D/1.0.4-foss-2023a : Flatiron Institute Fast Multipole Libraries. https://fmm3d.readthedocs.io
GEM/1.5.1-foss-2022b : Gene-Environment interaction analysis for Millions of samples https://github.com/large-scale-gxe-methods/GEM
gnuplot/5.4.8-GCCcore-12.3.0 : Portable interactive, function plotting utility http://gnuplot.sourceforge.net
GULP/6.3.4-foss-2023a : GULP is a program for performing a variety of types of simulation on materials https://gulp.curtin.edu.au/gulp/
Julia/1.11.5 : Julia is a high-level, high-performance dynamic programming language for numerical computing https://julialang.org
NetLogo/6.4.0-64 : NetLogo is a multi-agent programmable modeling environment https://ccl.northwestern.edu/netlogo/
OpenFOAM/12-foss-2023a : OpenFOAM is a free, open source CFD software package. https://www.openfoam.org/
PETSc/3.19.2-foss-2022b-CUDA-12.1.1 : PETSc (the Portable, Extensible Toolkit for Scientific Computation) is a toolkit for scientific computation. https://www.mcs.anl.gov/petsc
PLUMED/2.9.0-foss-2022b : PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. https://www.plumed.org
ParaView/5.11.2-foss-2023a : ParaView is a scientific parallel visualizer. https://www.paraview.org
RepastHPC-Boost1.73.0/2.3.1-foss-2018b : The Repast Suite is a family of advanced, free, and open source agent-based modeling and simulation platforms https://repast.github.io/
SimNIBS/4.0.1-foss-2023a : SimNIBS is a free and open source software package for the Simulation of Non-invasive Brain Stimulation https://simnibs.github.io/simnibs
Znver3
No new software added
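As a reminder, newly installed packages are made available through the module system; a minimal example using one of the versions listed above:

    # List available versions, then load the one you need
    module avail OpenFOAM
    module load OpenFOAM/12-foss-2023a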
Lustre filesystem usage
This area is currently quite full (71% utilisation). Please remove any data on this filesystem that you no longer need, either by deleting it or by migrating anything you want to keep to, e.g., a Shared Research Area. Please also keep in mind that the Lustre filesystem on Stanage should primarily be treated as a temporary file store: it is optimised for performance and has no backups, so any data of value should not be kept on Lustre long-term.
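To check how much space you are using yourself, the following commands may help (the mount point shown is an assumption; substitute the Lustre path used on your account):

    # Per-user quota/usage report on the Lustre filesystem
    lfs quota -h -u $USER /mnt/parscratch

    # Summarise the space used under your own directory (can be slow on large trees)
    du -sh /mnt/parscratch/users/$USER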
Upcoming Training
Below are our key research computing training dates for the coming month. You can register for these courses and more at MyDevelopment.
Warning
For our taught postgraduate users who don’t have access to MyDevelopment, please email us at researchcomputing@sheffield.ac.uk with the course you want to register for, and we should be able to help you.
13/01/2026 - HPC Training Course.
14/01/2026 - Supervised Machine Learning.
20/01/2026 - Introducing AI into Research.
23/01/2026 - Python Programming 1.
27/01/2026 - Temporal Analysis in Python.
29/01/2026 - Introduction to Linux and Shell Scripting.
30/01/2026 - Python Programming 2.
05/02/2026 - HPC Training Course.
06/02/2026 - Python Programming 3.
12/02/2026 - R Programming 1.
19/02/2026 - R Programming 2.
The following training sessions are offered by our third-party collaborators:
EPCC (providers of the ARCHER2 HPC service) are running the following training sessions:
22/01/2026 - Data Carpentry
26/02/2026 - Green software use on HPC
Useful Links
RSE code clinics . These are fortnightly support sessions run by the RSE team and IT Services’ Research IT and support team. They are open to anyone at TUOS writing code for research to get help with programming problems and general advice on best practice.
Training and courses (You must be logged into the main university website to view).