Bessemer specifications

Total capacity

  • Worker nodes: 26

  • CPU cores: 1,040

  • Total memory: 5,184 GiB

  • GPUs: 4

  • Fast network filesystem (Lustre): 460 TiB

Note that some of these resources have been purchased by research groups who have exclusive access to them.
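As a sanity check, the totals above follow directly from the per-node figures given in the sections below (25 general CPU nodes plus one GPU node, each fitted with two 20-core Intel Xeon Gold 6138 CPUs). A minimal sketch:

```shell
# Per-node figures taken from the node specifications on this page.
cpu_nodes=25                 # general CPU nodes, 192 GB RAM each
gpu_nodes=1                  # GPU node, 384 GB RAM
cores_per_node=$((2 * 20))   # two 20-core Intel Xeon Gold 6138 CPUs

nodes=$((cpu_nodes + gpu_nodes))
cores=$((nodes * cores_per_node))
memory_gib=$((cpu_nodes * 192 + gpu_nodes * 384))

echo "$nodes nodes, $cores cores, $memory_gib GiB"
# 26 nodes, 1040 cores, 5184 GiB
```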

General CPU node specifications

25 nodes are publicly available (not exclusive to research groups).

  • Machine: Dell PowerEdge C6420

  • CPUs: 2 x Intel Xeon Gold 6138

    • Skylake processor microarchitecture;

    • 2.00 GHz;

    • Support for AVX-512 vectorisation instructions (simultaneously apply the same operation to multiple values in hardware);

    • Support for Fused Multiply-Add instructions (expedites operations involving the accumulation of products e.g. matrix multiplication);

    • Hyperthreading is disabled on all nodes.

  • RAM: 192 GB (i.e. 4.8 GB / core)

    • 2666 MHz;

    • DDR4.

  • Local storage: 1 TiB SATA III HDD

    • /scratch: 836 GiB of temporary storage;

    • /tmp: 16 GiB of temporary storage.
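Whether a node actually exposes the AVX-512 and FMA instruction sets can be confirmed from the CPU feature flags the Linux kernel reports. A minimal sketch (run it on a worker node; on other machines the reported flags will differ, and on non-Linux systems `/proc/cpuinfo` does not exist, so the check degrades to "no"):

```shell
# Read the feature flags of the first CPU listed by the kernel.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)

have_fma=no
have_avx512=no
# Match whole words within the space-separated flag list.
case " $flags " in *" fma "*)     have_fma=yes ;;     esac
case " $flags " in *" avx512f "*) have_avx512=yes ;;  esac

echo "FMA: $have_fma  AVX-512F: $have_avx512"
```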

GPU node specifications

One node is publicly available (not exclusive to research groups):

  • Machine: Dell PowerEdge C4140

  • CPUs: 2 x Intel Xeon Gold 6138 (2.00GHz)

  • RAM: 384 GB (i.e. 9.6 GB / core); 2666 MHz; DDR4

  • Local storage: 220 GiB SATA SSD

  • GPUs: 4 x NVIDIA Tesla V100

    • 32 GB of HBM2 memory


  • 25 Gigabit Ethernet
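Because all four V100s sit in a single node, a job requests them through Slurm's generic-resource (`--gres`) mechanism. A minimal sketch of a submission script — the `--gres` syntax is standard Slurm, but the partition name below is an assumption and will be site-specific:

```shell
#!/bin/bash
#SBATCH --partition=gpu       # assumed partition name; check the site documentation
#SBATCH --gres=gpu:1          # request 1 of the node's 4 V100 GPUs
#SBATCH --time=01:00:00       # one-hour wall-clock limit for this example

nvidia-smi                    # report the GPU actually allocated to the job
```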

Operating System and software

  • OS: CentOS 7.x (binary compatible with Red Hat Enterprise Linux 7.x) on all nodes

  • Interactive and batch job scheduling software: Slurm

  • Many applications, compilers, libraries and parallel processing tools. See Software on Bessemer.

Non-worker nodes

  • Two login nodes (for resilience)

  • Other nodes to provide:

    • Lustre parallel filesystem

    • Slurm scheduler ‘head’ nodes