On Atlas, we have a large number of storage devices, and it is not easy to keep track of all of them. Therefore, here is a list of the available storage paths along with some information about each.

For each of these we will use two variables: $USER refers to the username on Atlas, and $NODE refers to the name of a machine. For example, the generic path /atlas/user/$NODE/$USER could stand for /atlas/user/a1234/albert.einstein, i.e. a directory named albert.einstein on machine a1234.
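As a minimal Python sketch (assuming the login name reported by the system matches the Atlas username), such a path can be put together like this:

    import getpass

    # Hypothetical helper: expand the generic pattern /atlas/user/$NODE/$USER.
    # getpass.getuser() is assumed to return the Atlas username ($USER).
    def atlas_user_path(node, user=None):
        user = user or getpass.getuser()
        return "/atlas/user/{}/{}".format(node, user)

    print(atlas_user_path("a1234"))  # e.g. /atlas/user/a1234/albert.einstein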

Please note that most parent directories of these file systems are “virtual” ones: they may appear empty until you enter a sub directory. In other words, you may need to know the exact name of the directory you want to enter beforehand!
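A small Python sketch of this behaviour (the node and user names below are only placeholders):

    import os

    # The virtual parent directory may appear (nearly) empty ...
    print(os.listdir("/atlas/user"))

    # ... yet a fully specified sub directory can still be entered,
    # because it is only mounted when it is actually accessed.
    path = "/atlas/user/a1234/albert.einstein"  # placeholder node/user
    if os.path.isdir(path):
        print(os.listdir(path))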

writable locations         read-only locations
/work/$USER                /home/$USER
/atlas/user/$NODE/$USER    /atlas/ldr
                           /cvmfs
                           /opt

/home/$USER

Availability
head nodes only
Physical storage
Files formerly stored on the HSM
Distributed over three servers connected at 20 Gbit/s
Usage
read-only data store
data should soon be archived by project or copied to /work
Expected life-time
at most until 2023
Access via web browser
https://www.atlas.aei.uni-hannover.de/home/$USER
for details please refer to User WWW directories
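For illustration, a Python sketch of the URL scheme documented here (the same pattern is used for /work below); getpass.getuser() is assumed to return the Atlas username:

    import getpass

    def browser_url(area, user=None):
        # area is "home" or "work"; this mirrors the URL scheme shown above.
        user = user or getpass.getuser()
        return "https://www.atlas.aei.uni-hannover.de/{}/{}".format(area, user)

    print(browser_url("home"))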

/work/$USER

Availability
head and compute nodes
Physical storage
Around 30 servers
Files stored on dedicated 24-disk storage boxes
Total bandwidth 20 Gbit/s
about 84 TByte of disk storage
Usage
scratch space
input and final output of Condor workflows (see the sketch below)
Expected life-time
TBD; automatic snapshots allow files to be retrieved by users after accidental deletion
Access via web browser
https://www.atlas.aei.uni-hannover.de/work/$USER
for details please refer to User WWW directories
Details
More details can be found on the dedicated page for /work
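A minimal sketch of typical use from within a job: the final output is written below /work/$USER (the sub directory and file name are only examples):

    import getpass
    import os

    workdir = os.path.join("/work", getpass.getuser())
    outfile = os.path.join(workdir, "results", "run_001.txt")  # example names
    os.makedirs(os.path.dirname(outfile), exist_ok=True)       # create results/ if needed
    with open(outfile, "w") as handle:
        handle.write("final output of a Condor workflow\n")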

/atlas/user/$NODE/$USER

Availability
across cluster
access to the head nodes’ /local/user/$USER directories is not possible
Alias of
/local/user/$USER on machine $NODE
Physical storage
compute node (single disk, ~1 Gbit/s, no redundancy, ~1 TByte)
head node (12-16 disks, ~20 Gbit/s, some redundancy, ~50 TByte)
remote access via NFS, local access via bind-mount
Usage
local scratch space
intermediate files created during Condor workflows (compute nodes)
Condor log files, git clones for temporary builds (head nodes)
no backups are made of these files
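For example, a process can derive the cluster-wide path to the local scratch directory of the node it is running on from the hostname (assuming the short hostname equals $NODE):

    import getpass
    import socket

    node = socket.gethostname().split(".")[0]  # assumed to equal $NODE
    local_scratch = "/atlas/user/{}/{}".format(node, getpass.getuser())
    print(local_scratch)  # this path is reachable from anywhere in the cluster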

/atlas/ldr

Availability
across cluster
Physical storage
37 servers
raidz2 arrays of 12 disk drives
optimized for reading
20 Gbit/s per machine
Usage
storage of detector data (mostly LIGO/Virgo)

/opt

Availability
across cluster
Physical storage
single, read-optimized server
20 Gbit/s per machine
Usage
central storage area for application software
Mathematica, Matlab
commercial compilers

/cvmfs

Availability
across cluster
Physical storage
virtual file system
provided by local caching and multiple proxies
Usage
/cvmfs/oasis.opensciencegrid.org/, for example, contains software from multiple experiments
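If /cvmfs behaves like the other virtual directories described above, a repository only becomes visible once it is accessed explicitly, e.g. in Python:

    import os

    repo = "/cvmfs/oasis.opensciencegrid.org"
    if os.path.isdir(repo):                   # first access triggers the mount
        print(sorted(os.listdir(repo))[:10])  # show a few top-level entries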