Attention

The ShARC HPC cluster was decommissioned at 17:00 on 30 November 2023 and can no longer be accessed by users.

ShARC specifications

Total capacity

  • Worker nodes: 121.

  • CPU cores: 2024.

  • Total memory: 12160 GiB.

  • GPUs: 40.

  • Fast network filesystem (Lustre): 669 TiB.

Note that some of these resources have been purchased by research groups who have exclusive access to them.

General CPU node specifications

98 nodes are publicly available (not exclusive to research groups).

  • Machine: Dell PowerEdge C6320.

  • CPUs: 2 x Intel Xeon E5-2630 v3:

    • Haswell processor microarchitecture;

    • 2.40 GHz;

    • Support for AVX2 vectorisation instructions, which apply the same operation to multiple values simultaneously in hardware;

    • Support for Fused Multiply-Add (FMA) instructions, which speed up operations that accumulate products, e.g. matrix multiplication (a brief usage sketch follows this list);

    • Hyperthreading is disabled on all nodes bar four that are reserved for interactive jobs.

  • RAM: 64 GiB (i.e. 4 GiB / core):

    • 1866 MHz;

    • DDR4.

  • Local storage: 1 TiB SATA III HDD:

    • /scratch: 836 GiB of temporary storage;

    • /tmp: 16 GiB of temporary storage.
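
The AVX2 and FMA capabilities listed above can be used either through compiler auto-vectorisation (e.g. building with -march=haswell) or explicitly via intrinsics. Below is a minimal, hypothetical C sketch, not part of the cluster documentation, that performs eight single-precision multiply-adds in one fused instruction:

    /* Hypothetical sketch (not from the cluster documentation): one fused
     * multiply-add applied to eight single-precision values at once.
     * Build on a worker node with, for example:
     *     gcc -O2 -march=haswell fma_demo.c -o fma_demo
     */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[8] = {10, 10, 10, 10, 10, 10, 10, 10};
        float r[8];

        __m256 va = _mm256_loadu_ps(a);
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_loadu_ps(c);

        /* r = a * b + c, computed across all eight lanes in one FMA. */
        __m256 vr = _mm256_fmadd_ps(va, vb, vc);
        _mm256_storeu_ps(r, vr);

        for (int i = 0; i < 8; i++)
            printf("%.1f ", r[i]);
        printf("\n");
        return 0;
    }

Binaries built this way require AVX2/FMA-capable hardware, which all of the general CPU nodes described above provide.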

Large memory node specifications

Ten nodes are publicly available (not exclusive to research groups).

These are similar to the general CPU nodes but differ in CPU model, CPU core count and total RAM:

  • 2x nodes with 2x Intel Xeon E5-2630 v3 (2.40 GHz) processors, 16 CPU cores total (8 per socket) and 256 GB RAM.

  • 7x nodes with 2x Intel Xeon E5-2640 v4 (2.40 GHz) processors, 20 CPU cores total (10 per socket) and 256 GB RAM.

  • 1x node with 2x Intel Xeon Gold 5120 (2.20 GHz) processors, 28 CPU cores total (14 per socket) and 384 GB RAM.

GPU node specifications

Two nodes are publicly available (not exclusive to research groups):

  • Machine: Dell PowerEdge C4130.

  • CPUs: 2 x Intel Xeon E5-2630 v3 (2.40 GHz).

  • RAM: 64 GiB (i.e. 4 GiB / core); 1866 MHz; DDR4.

  • Local storage: 800 GiB SATA SSD.

  • GPUs: 8 (4 x NVIDIA Tesla K80 dual-GPU accelerators):

    • 12 GiB of GDDR5 memory per GPU (24 GiB per accelerator; 96 GiB per node); a device-query sketch follows this list.

    • Up to 1.46 TFLOPS of double-precision performance per GPU with NVIDIA GPU Boost (2.91 TFLOPS per accelerator).

    • Up to 4.37 TFLOPS of single-precision performance per GPU with NVIDIA GPU Boost (8.74 TFLOPS per accelerator).
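
A job running on one of these nodes can confirm the per-GPU memory figures above at run time. The following is a minimal, hypothetical C sketch (not part of the cluster documentation) using the CUDA runtime API; it assumes a CUDA toolkit is available on the node and that the file is compiled with nvcc, e.g. nvcc gpu_query.c -o gpu_query:

    /* Hypothetical sketch (not from the cluster documentation): list the
     * GPUs visible to a job and their memory via the CUDA runtime API.
     * Assumes a CUDA toolkit is available; compile with, for example:
     *     nvcc gpu_query.c -o gpu_query
     */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable devices are visible to this job.\n");
            return 1;
        }

        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* totalGlobalMem is reported in bytes. */
            printf("GPU %d: %s, %.1f GiB\n", i, prop.name,
                   (double)prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

On a whole node this should report eight devices, each with close to 12 GiB of usable memory (a small amount is typically reserved by the driver), in line with the figures above.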

Hardware-accelerated visualisation nodes

One node is publicly available:

  • Machine: Dell Precision Rack 7910.

  • CPUs: 2 x Intel Xeon E5-2630 v3 (2.40 GHz).

  • RAM: 128 GiB (i.e. 8 GiB / core); 1866 MHz; DDR4.

  • Local storage: 1 TiB.

  • Graphics cards: 2x NVIDIA Quadro K4200:

    • Memory: 4 GiB GDDR5 SDRAM.

Networking

  • Intel Omni-Path Architecture (OPA) at 100 Gb/s to all public nodes.

  • Gigabit Ethernet.

Operating System and software

  • OS: CentOS 7.x (binary compatible with Red Hat Enterprise Linux 7.x) on all nodes.

  • Interactive and batch job scheduling software: Son of Grid Engine.

  • Many applications, compilers, libraries and parallel processing tools. See Software on ShARC.

Non-worker nodes

  • Two login nodes (for resilience).

  • Other nodes to provide:

    • Lustre parallel filesystem.

    • Son of Grid Engine scheduler ‘head’ nodes.

    • Directory services.