Worker nodes: 26.
CPU cores: 1,040.
Total memory: 5,184 GiB.
Fast network filesystem (Lustre): 460 TiB.
Note that some of these resources have been purchased by research groups who have exclusive access to them.
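Once logged in, the per-node breakdown of these totals can be inspected with Slurm's sinfo command (the scheduler is listed under "Operating System and software" below). The commands in this sketch are standard Slurm; the partition names and exact figures reported on Bessemer are not taken from this page and will depend on the live configuration.

```bash
# Hedged sketch: ask Slurm how the cluster totals break down per node/partition.
sinfo --Node --long        # one line per node: state, CPU count, memory, partition
sinfo -o "%P %D %c %m"     # per partition: node count, CPUs per node, memory per node (MiB)
```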
General CPU node specifications
25 nodes are publicly available (not exclusive to research groups).
Machine: Dell PowerEdge C6420.
CPUs: 2 x Intel Xeon Gold 6138:
Skylake processor microarchitecture;
Support for AVX-512 vectorisation instructions (simultaneously apply the same operation to multiple values in hardware);
Support for Fused Multiply-Add (FMA) instructions (expedites operations involving the accumulation of products, e.g. matrix multiplication); a quick way to confirm both CPU flags is sketched after this list;
Hyperthreading is disabled on all nodes.
RAM: 192 GB (i.e. 4.8 GB / core).
Local storage: 1 TiB SATA III HDD:
/scratch: 836 GiB of temporary storage;
/tmp: 16 GiB of temporary storage.
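As mentioned in the CPU feature list above, the advertised instruction-set support and core count can be confirmed directly on a worker node, and the node-local /scratch area can be used for temporary files. This is a hedged sketch using standard Linux tooling plus the /scratch and /tmp paths listed above; whether /scratch is directly writable at the top level (or via a per-user subdirectory) should be checked against the cluster's filesystem documentation.

```bash
# Confirm the advertised CPU features (AVX-512 and FMA) and the core count.
grep -m1 '^flags' /proc/cpuinfo | grep -o -w -e avx512f -e fma   # prints "fma" and "avx512f" if supported
nproc                                                            # visible cores: 40, since hyperthreading is disabled

# Prefer the node-local /scratch area (836 GiB) over /tmp (16 GiB) for temporary files.
workdir=$(mktemp -d /scratch/"$USER"-XXXXXX)   # assumes /scratch is writable by the user
echo "temporary working directory: $workdir"
rm -rf "$workdir"                              # clean up when the job finishes
```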
GPU node specifications
One node is publicly available (not exclusive to research groups):
Networking: 25 Gigabit Ethernet.
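If you have access to the GPU node, a quick way to see what it provides is to request a GPU from Slurm and run nvidia-smi. This is only a sketch: the generic resource name (gpu), any required partition or account flags, and the assumption that the devices are NVIDIA GPUs are not taken from this page and should be checked against the scheduler documentation.

```bash
# Hedged sketch: request one GPU from Slurm and list the devices visible to the job.
# The --gres name and any partition/account flags are assumptions; nvidia-smi assumes NVIDIA GPUs.
srun --gres=gpu:1 nvidia-smi
```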
Operating System and software
OS: CentOS 7.x (binary compatible with Red Hat Enterprise Linux 7.x) on all nodes.
Interactive and batch job scheduling software: Slurm (a minimal batch script is sketched after this list).
Many applications, compilers, libraries and parallel processing tools. See Software on Bessemer.
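Because Slurm is the scheduler, work on the cluster is normally described in a small batch script and submitted with sbatch. The sketch below uses only generic Slurm options; the module name is illustrative and any site-specific partitions, accounts or defaults are assumptions rather than values from this page, so adapt it to the Bessemer software and scheduler documentation.

```bash
#!/bin/bash
# Minimal hedged Slurm batch script sketch; submit with: sbatch myjob.sh
#SBATCH --job-name=example
#SBATCH --ntasks=1           # a single task
#SBATCH --cpus-per-task=4    # 4 of the 40 cores on a worker node
#SBATCH --mem=19G            # roughly 4.8 GB/core x 4 cores
#SBATCH --time=00:10:00      # 10 minute wall clock limit

# Load any required software via the module system; the module name here is illustrative.
# module load <some-application>

srun hostname                # replace with the real work of the job
```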
Two login nodes (for resilience).
Other nodes provide:
Lustre parallel filesystem.
Slurm scheduler ‘head’ nodes.