Parallel Computing

Parallel computing uses more than one core simultaneously. A core is the part of a processor (CPU) capable of independently executing one thread of computation.

Modern computers contain more than one core; a typical laptop usually contains either 2 or 4. Hyper-threading is a way of executing 2 (typically) threads on one core. It is enabled on most laptop-class CPUs but disabled on most HPC clusters, including most nodes on ShARC.

Computer clusters such as ShARC contain many hundreds of cores and the key to making your research code faster is to distribute your work across them. If your program is designed to run on only one core, running it on an HPC cluster without modification will not make it any faster (it may even be slower!). Learning how to use parallelisation technologies is vital.

This section explains how to use the most common parallelisation technologies on our systems.

A CPU contains 1 or more cores. A node is what most people think of as “a computer”. The public nodes on ShARC have 2 CPUs, each with 8 cores, so a (public) node has 16 cores. Computations running on cores on the same node can share memory.
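One way to see how many logical cores the operating system reports on the machine you are logged in to is sketched below. Note that `nproc` counts hardware threads, so on a machine with hyper-threading enabled the figure is twice the number of physical cores.

```shell
# Count the logical cores visible to the OS on this machine.
# With hyper-threading enabled this counts hardware threads,
# not physical cores.
nproc

# A more portable alternative using POSIX getconf:
getconf _NPROCESSORS_ONLN
```

On a standard ShARC public node (hyper-threading disabled) both commands should report 16.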

Code that runs on multiple cores may or may not require that the cores are all on the same node; additionally, it may or may not require that the code runs simultaneously on those cores. This gives rise to a number of ways to use multiple cores, described below.

If you are using a standard piece of software designed to run on HPC, for example CASTEP or GROMACS, it may well come with its own recommendations for the best parallel setup. Consult the software's documentation.

ShARC/SGE Parallel Environments

The available SGE parallel environments (for ShARC only) can be found below:

ShARC SGE Parallel Environments Table

smp
    Symmetric multiprocessing, or ‘Shared Memory Parallel’, environment. Limited to a single node, and therefore to 16 cores on a standard ShARC node.

openmp
    A ‘Shared Memory Parallel’ environment supporting OpenMP execution. Limited to a single node, and therefore to 16 cores on a standard ShARC node.

mpi
    Message Passing Interface. Can use as many nodes and cores as desired.

mpi-rsh
    The same as the mpi parallel environment, but configured to use RSH instead of SSH, as required by certain software such as ANSYS.
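A parallel environment is requested at job submission time with the `-pe <env> <cores>` option. The following is a minimal sketch of a batch script using the smp environment; the core count, memory value, and program name are illustrative placeholders, not recommendations.

```shell
#!/bin/bash
# Illustrative SGE batch script for the 'smp' parallel environment.
# The core count (4) and the program name are placeholders.

#$ -pe smp 4             # request 4 cores on a single node

# SGE sets $NSLOTS to the number of cores actually granted;
# tell an OpenMP program to use exactly that many threads.
export OMP_NUM_THREADS=$NSLOTS

./my_threaded_program    # hypothetical executable
```

Submit with `qsub myjob.sh`. For a distributed job you would instead request, say, `#$ -pe mpi 32` and launch the program with `mpirun -np $NSLOTS ./my_mpi_program` (the exact MPI launch command depends on the MPI module you load).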

Other parallel environments exist for specific purposes. Those who require them will be informed directly or signposted in other documentation.

A current list of the parallel environments configured on ShARC can be generated with the qconf -spl command.
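These are standard Grid Engine commands, run on a login node; `qconf -sp` (singular) shows the configuration of one named environment.

```shell
# List the names of all configured parallel environments:
qconf -spl

# Show the full configuration of a single parallel environment,
# e.g. 'smp' (slot limits, allocation rule, etc.):
qconf -sp smp
```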


Getting help

If you need advice on how to parallelise your workflow, please contact IT Services or the Research Software Engineering team.