Description
Singularity enables users to have full control of their environment. This means that a non-privileged user can "swap out" the operating system on the host for one they control. For example, if the host system runs RHEL7 but your application requires Ubuntu, you can create an Ubuntu image, install your application into that image, copy the image to another host, and run your application there in its native Ubuntu environment!
Home page
https://sylabs.io/singularity/
Documentation
https://sylabs.io/guides/3.7/user-guide/
License
Singularity is released under the 3-clause BSD license. (See the LICENSE file for more details.)
Usage
On your project's submit node, run
module avail singularity
to see which versions of Singularity are available in the module. Use
module load singularity/version
in your job script to make Singularity available. To run a Singularity container in a job script, use
singularity exec [exec options...] <container-path> <command>
Use
singularity --help
to get help.
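Putting the steps above together, a job script might look like the following sketch. The SLURM directives, module version, and image path are placeholders; adapt them to your project:

```shell
#!/bin/bash
# Example job script (account, version, and image path are placeholders)
#SBATCH --job-name=singularity-example
#SBATCH --account=p11            # replace with your project account
#SBATCH --time=00:10:00
#SBATCH --mem-per-cpu=2G

# Pick a version listed by 'module avail singularity'
module load singularity/3.7.1

# Run a command inside the container image
singularity exec /cluster/projects/p11/containers/myimage.sif \
    echo "hello from inside the container"
```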
Binding filesystems
On Colossus the GPFS filesystem is mounted under /gpfs, with symlinks from /cluster to mirror the setup on the submit hosts. By default, Singularity binds $HOME, /gpfs and /cluster into containers running on the compute nodes (i.e. what you submit via a job script), so jobs can read from and write to /cluster and $HOME. From inside a container, the mounts look like this:
Filesystem                           Size  Used Avail Use% Mounted on
overlay                               16M   12K   16M   1% /
devtmpfs                             252G     0  252G   0% /dev
tmpfs                                252G     0  252G   0% /dev/shm
/dev/mapper/system-root              219G  3.4G  216G   2% /gpfs
colossus01                           200T  3.0T  198T   2% /cluster
tmpfs                                 16M   12K   16M   1% /cluster/software/singularity/3.7.1/var/singularity/mnt/session
/dev/loop0                            60M   60M     0 100% /cluster/software/singularity/3.7.1/var/singularity/mnt/session/rootfs
projects02                           2.8P  1.6P  1.2P  58% /gpfs/projects02
projects01                           1.4P  944T  457T  68% /gpfs/projects01
tsd-evs.colossus:/p11home/p11-bartt  300G  261G   40G  87% /tsd/p11/home/p11-bartt
To specify additional bind paths use the --bind argument:
singularity exec --bind /gpfs,/cluster <container-path> <command>
Building Singularity images
Due to a lack of admin privileges and internet access, TSD does not support building Singularity containers inside TSD. Singularity images must be built outside TSD and then imported into TSD. Please refer to the Singularity installation documentation for how to install Singularity on Linux, Mac or Windows and how to build containers.
Building containers from Singularity definition files (outside TSD)
- Create a bootstrap definition file (see examples):
Bootstrap: docker
From: ubuntu:16.04

%post
    apt-get -y update
    apt-get -y install fortune cowsay lolcat

%environment
    export LC_ALL=C
    export PATH=/usr/games:$PATH

%runscript
    fortune | cowsay | lolcat
- Then build the image using the following command:
sudo singularity build lolcow.sif lolcow.def
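Before importing the image into TSD, you can test it on the build machine (assuming Singularity is installed there):

```shell
# Execute the %runscript defined in the definition file
singularity run lolcow.sif

# Or run an arbitrary command inside the container
singularity exec lolcow.sif cowsay "moo"
```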
Converting Docker containers to Singularity (outside TSD)
Singularity can directly fetch and build containers using Docker/OCI container images from a registry.
Example building a Singularity Image Format container from Docker Hub's alpine:3 image:
singularity build alpine.sif docker://docker.io/alpine:3
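The resulting image can then be used like any other Singularity container. For example, with the alpine.sif built above:

```shell
# Print the OS release information from inside the Alpine container
singularity exec alpine.sif cat /etc/os-release
```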
Please reference the documentation on Singularity's interoperability with Docker.
GPU containers
To run GPU enabled containers on the GPU nodes, pass the --nv argument:
singularity exec --nv <container-path> <command>
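A quick way to check that the GPUs are visible inside the container is to run nvidia-smi (the image name here is a placeholder):

```shell
# If --nv worked, nvidia-smi lists the node's GPUs
singularity exec --nv mygpuimage.sif nvidia-smi
```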
Withdrawn support for some Singularity versions
There have been security issues with some versions of Singularity. For this reason, the following versions are not available:
- 2.3.1
- 2.3.2
Images built for these versions of Singularity should run fine with other versions of Singularity. Please contact us if you experience problems with this.