Filestores

Every HPC user has access to up to five different storage areas:

  • Home directories

  • Fastdata areas

  • Shared (project) directories

  • Scratch directories

  • Community areas for software

The storage areas differ in terms of:

  • the amount of space available;

  • whether they are available from multiple nodes;

  • whether they are shared between clusters;

  • whether the underlying storage system is performant if reading/writing large files;

  • whether the underlying storage system is performant if reading/writing small files;

  • frequency of storage snapshotting, whether storage is mirrored and the maximum duration data can be retained for;

  • whether they handle permissions like a typical Linux filesystem.

At present none provide encryption at rest.


Choosing the correct filestore

To make a quick assessment of which storage area is likely to best fulfil your needs, take a look at the decision tree below:

Warning

This decision tree only provides a quick assessment; please check the full details of each filestore before committing to one for your work.

[Decision tree: Sheffield HPC Cluster Storage Selection. Starting from which cluster you are using (Stanage or Bessemer), it asks whether you are running a job or storing input/output data, whether your job reads/writes lots of small files or mainly large files, whether you need to share your data with other users, and whether you need a larger quota than your home directory offers, then points you to /home, /scratch, /fastdata or a research shared storage area as appropriate.]

Home directories

All users have a home directory on each system:

Home filestore area details

  Path                                            /users/$USER
  Type                                            NFS
  Quota per user                                  50 GB or 300000 files
  Shared between system login and worker nodes?   Yes
  Shared between systems?                         No

Where $USER is the user’s username.

See also: How to check your quota usage and If you exceed your filesystem quota.

Home filestore backups and snapshots details

Warning

Snapshotting is not enabled for home areas and these areas are not backed up.

Note

The full path to your home directory differs depending on the cluster you are on:

  Cluster     Path
  Stanage     /users/$USER
  Bessemer    /home/$USER

To ensure that your code is compatible with both clusters, we suggest using “~” or “$HOME” to refer to your home directory. This ensures that the correct path is used regardless of which cluster you are working on, making your code more portable.

$ echo $HOME
/users/te1st
$ echo ~
/users/te1st

Fastdata areas

Fastdata areas are optimised for large file operations. These areas are Lustre filesystems.

They are faster than Home directories and Shared (project) directories when dealing with larger files but are not performant when reading/writing lots of small files (Scratch directories are ideal for reading/writing lots of small temporary files within jobs). An example of how slow it can be for large numbers of small files is detailed here.

There are separate fastdata areas on each cluster:

Fastdata filestore area details

  Path                          /mnt/parscratch/
  Type                          Lustre
  Quota per user                No limits
  Filesystem capacity           2 PiB
  Shared between systems?       No
  Network bandwidth per link    100 Gb/s (Omni-Path)

Managing your files in fastdata areas

We recommend that users create their own personal folder in the /fastdata area. As this doesn’t exist by default, you can create it with safe permissions by running the following commands:

mkdir /mnt/parscratch/users/$USER
chmod 700 /mnt/parscratch/users/$USER

By running the commands above, your area will only be accessible to you. If desired, you could set up a more sophisticated sharing scheme with private and fully public directories:

mkdir /mnt/parscratch/users/$USER
mkdir /mnt/parscratch/users/$USER/public
mkdir /mnt/parscratch/users/$USER/private

chmod 755 /mnt/parscratch/users/$USER
chmod 755 /mnt/parscratch/users/$USER/public
chmod 700 /mnt/parscratch/users/$USER/private

Note, however, that the public folder in this instance will be readable by all users!
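
To confirm that the permissions are set as intended, you can list the directories with ls -ld; following the commands above, the public directory should show drwxr-xr-x and the private one drwx------:

ls -ld /mnt/parscratch/users/$USER/public /mnt/parscratch/users/$USER/private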

Fastdata filestore backups and snapshots details

Warning

Snapshotting is not enabled for fastdata areas and these areas are not backed up.

File locking

As of September 2020 POSIX file locking is enabled on all Lustre filesystems. Prior to this the lack of file locking support on the University’s Lustre filesystems caused problems for certain workflows/applications (e.g. for programs that create/use SQLite databases).

User Quota management

Warning

There are no automated quota controls in the Stanage fastdata area, and it currently has no automatic file deletion process.

We reserve the right to prevent unfair use of this area: we will manually assess usage and establish a dialogue with users who regularly use an unfair amount of this space.

We also reserve the right to take measures to ensure the continuing functionality of this area, which could include scheduled removal of users’ files (after informing the affected users).


Shared (project) directories

Each PI at the University is entitled to request a free 10 TB storage area for sharing data with their group and collaborators. The capacity per area can be extended and additional shared areas can be purchased (both at a cost).

After one of these project storage areas has been requested/purchased it can be accessed in two ways:

  • as a Windows-style (SMB) file share on machines other than Stanage and Bessemer using \\uosfstore.shef.ac.uk\shared\;

  • as a subdirectory of /shared on Stanage and Bessemer (you need to explicitly request HPC access when you order storage from IT Services).

Danger

The High Performance Computing service must not be used to store or process any restricted or sensitive data. Shared areas with this type of data are not permitted to be made available on the clusters.

Research shared storage areas may already contain sensitive data, or may contain sensitive data stored by colleagues that you are not aware of. Before requesting that an area be made available on the HPC clusters, you must ensure that it does not and will not contain any sensitive data for the lifetime of the area.

Snapshotting and mirrored backups

  Frequency of snapshotting   Snapshots retained   Backed up onto separate storage system
  Every 4 hours               10 most recent       Yes
  Every night                 Last 7 days          Yes

See also: Recovering files from snapshots.

Automounting

Subdirectories beneath /shared are mounted on demand on the HPC systems: they may not be visible if you simply list the contents of the /shared directory but will be accessible if you cd (change directory) into a subdirectory e.g. cd /shared/my_group_file_share1.
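
For example, a session might look like the following (the share name my_group_file_share1 is illustrative; substitute the name of your own area):

$ ls /shared                         # your area may not be listed yet
$ cd /shared/my_group_file_share1    # changing into it triggers the automount
$ ls                                 # its contents are now accessible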

Specifics for each Cluster

As our HPC clusters are each hosted in different datacentres, the policy, configuration and accessibility of the shared areas varies. The information for each cluster is shown below:

Shared research area mount availability

On the Stanage cluster, shared research areas can be made available, upon request, on login nodes only. This is because:

  • The HPC nodes are hosted within a datacentre in Manchester distant from the shared research area filestores hosted within the University’s Sheffield datacentres.

  • Network traffic between Stanage and the Sheffield Research Filestore is not encrypted when travelling between Sheffield and Manchester over the dedicated leased line network link.

  • The leased line network link has 10Gb/s of bidirectional transfer available.

Shared research area performance

  • If you access a /shared directory stored in Sheffield from Stanage then you may experience slower performance, especially for small files.

  • The network link to Stanage is slower than Bessemer’s, so mounting shared research areas on all worker nodes could result in very poor performance. This is why shared research areas are only made available on login nodes.

  • Stanage does not have a local shared research area filestore, so no local shared research areas can be created.

If you need to access a /shared area on Stanage please contact research-it@sheffield.ac.uk to arrange this.

Permissions behaviour

You may encounter strange permissions issues when running programs on HPC against the /shared areas e.g. chmod +x /shared/mygroup1/myprogram.sh fails. Here we try to explain why.

Behind the scenes, the file server that provides this shared storage manages permissions using Windows-style ACLs (which can be set by area owners via the Research Storage management web interface). However, the filesystem is mounted on a Linux cluster using NFSv4, so the file server requires a means of mapping Windows-style permissions to Linux ones. One effect of this is that the Linux mode bits for files/directories under /shared on the HPC systems are not always to be believed: the output of ls -l somefile.sh may indicate that a file is readable/writable/executable when the ACLs are what really determine access permissions. Most applications have robust ways of checking for properties such as executability, but some can cause problems when accessing files/directories on /shared by naively checking permissions using only the Linux mode bits:

  • which: a directory under /shared may be on your PATH, and you may be able to run an executable it contains without prefixing it with an absolute/relative path, yet which may still fail to find that executable.

  • Perl: scripts that check for executability of files on /shared using -x may fail unless Perl is explicitly told to test for file permissions in a more thorough way (see the mention of use filetest 'access' here).

  • git: may complain that permissions have changed if a repository is simply moved to /shared/someplace from elsewhere on Stanage/Bessemer. As a workaround you can tell git not to track Linux permissions, as shown below: for a single repository use git config core.filemode false, or for all repositories use git config --global core.filemode false.
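
For example, to apply the git workaround just described, run one of the following from a shell on the cluster:

$ git config core.filemode false             # current repository only
$ git config --global core.filemode false    # all of your repositories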

Changing how attempts to change permissions are handled: each /shared area can be configured so that

  1. Attempts to change file/directory mode bits fail (e.g. chmod +x /shared/mygroup1/myprogram.sh fails) (default configuration per area) or

  2. Attempts to change file/directory mode bits appear to succeed (e.g. chmod +x /shared/mygroup1/myprogram.sh does not fail but also does not actually change any permissions on the underlying file server) (alternative configuration per area)

If you would like to switch to using the second way of handling permissions for a particular /shared/ area then the Owner of this area should make a request via the Helpdesk.

Further information

More details are available in the documentation for the /shared storage service.


Scratch directories

For jobs that need to read/write lots of small files the most performant storage will be the temporary storage on each node.

This is because, with Home directories, Fastdata areas and Shared (project) directories, each time a file is accessed the filesystem needs to request ownership/permissions information from another server, and for small files these overheads are proportionally high.

For the local temporary store, such ownership/permissions metadata is available on the local machine, thus it is faster when dealing with small files.

As the local temporary storage areas are node-local storage and files/folders are deleted when jobs end:

  • any data used by the job must be copied to the local temporary store when the job starts;

  • any output data stored in the local temporary store must be copied off to another area (e.g. your home directory) before the job finishes.

Further conditions also apply:

  • Anything in the local temporary store area may be deleted periodically when the worker-node is idle.

  • The local temporary store area is not backed up.

  • There are no quotas for local temporary store storage.

  • The local temporary store area uses the ext4 filesystem.

Danger

The local temporary store areas are temporary and have no backups. If you forget to copy your output data out of the local temporary store area before your job finishes, your data cannot be recovered!

Specifics for each Cluster

The scheduler will automatically create a per-job directory for you under /tmp. The name of this directory is stored in the $TMPDIR environment variable e.g.

[te1st@login1 [stanage] ~]$   srun -c 1 --mem=4G --pty bash -i
[te1st@node001 [stanage] ~]$  cd $TMPDIR
[te1st@node001 [stanage] ~]$  pwd
/tmp/job.2660172

The scheduler will then clean up (delete) $TMPDIR at the end of your job, ensuring that the space can be used by other users.
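
In a batch job the same pattern applies: copy input data into $TMPDIR at the start of the job, work there, and copy any results back before the job ends. The following is a minimal sketch of a SLURM batch script; the input and output paths are illustrative and should be replaced with your own:

#!/bin/bash
#SBATCH --mem=4G
#SBATCH --time=01:00:00

# Copy input data into the fast node-local store (illustrative path)
cp -r "$HOME/my_input_data" "$TMPDIR/"

cd "$TMPDIR"

# ... run your program against the local copies here ...

# Copy results back before the job finishes; $TMPDIR is deleted when the job ends
cp -r "$TMPDIR/my_results" "$HOME/"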


Community areas for software

Most data that researchers want to share with their collaborators at the University should reside in Shared (project) directories. However, as mentioned in Permissions behaviour, these areas may not be ideal for storing executable software/scripts due to the way permissions are handled beneath /shared.

Also, users may want to install software on the clusters that they want to be accessible by all cluster users.

To address these two needs, users can request the creation of a new directory beneath one of the directories listed below; if the request is granted, they will be given write access to this area:

  System     Path                   Type
  Stanage    N/A                    N/A
  Bessemer   /usr/local/community   NFS

Note that:

  • Software installation should follow our installation guidelines where provided.

  • Software installations must be maintained by a responsible owner.

  • Software which is not actively maintained may be removed.


How to check your quota usage

To find out your storage quota usage for your home directory you can use the quota command:

[te1st@login1 [stanage] ~]$ quota -u -s
    Filesystem   space   quota   limit   grace   files   quota   limit   grace
storage:/export/users
                 3289M  51200M  76800M            321k*   300k    350k   none

An asterisk (*) after your space or files usage indicates that you’ve exceeded a ‘soft quota’. You’re then given a grace period of several days to reduce your usage below this limit. Failing to do so will prevent you from using additional space or creating new files. Additionally, there is a hard limit for space and files that can never be exceeded, even temporarily (i.e. it has no grace period).

In the above example we can see that the user has exceeded their soft quota for files (‘*’) but not their hard limit for files. However, the grace period field reads ‘none’, which means the grace period for exceeding the soft quota has already expired. The user must remove/move some files from their home directory before they can create/add any more files. Also, the user is a long way from exceeding their space soft quota.

Tip

To assess what is using up your quota within a given directory, you can make use of the ncdu module on Stanage. The ncdu utility will give you an interactive display of what files/folders are taking up storage in a given directory tree.
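
For example, a minimal session might look like the following (the exact module name/version available may differ; check with module avail ncdu first):

$ module load ncdu
$ ncdu $HOME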


If you exceed your filesystem quota

If you reach the quota for your home directory then many common programs/commands may cease to work as expected (or at all) and you may not be able to log in.

In addition, jobs that make use of a Shared (project) directory may fail if you exceed the quota for that area.

In order to avoid this situation, it is strongly recommended that you check your quota usage regularly (see How to check your quota usage above) and remove or move data before you reach your limits.


Recovering files from snapshots

Recovery of files and folders on Stanage is not possible as the Stanage cluster does not currently have snapshots or backups.

If you need help, please contact research-it@sheffield.ac.uk.