You will most probably work on the LXPLUS cluster -- each time you log onto lxplus it usually gives you a different machine to use. For example, here I got lxplus069 at my first login and lxplus026 after I relogged:
[user@home ~]$ ssh user@lxplus.cern.ch
Welcome to lxplus069.cern.ch, SLC, 6.9
[user@lxplus069 ~]$ exit
Connection to lxplus.cern.ch closed.
[user@home ~]$ ssh user@lxplus.cern.ch
Welcome to lxplus026.cern.ch, SLC, 6.9
[user@lxplus026 ~]$ _
On LXPLUS your home folder is stored in the AFS file system: /afs/cern.ch/user/${USER:0:1}/$USER/
AFS is a "distributed, location-transparent file system" , meaning that you'll see the same files in your home folder on different LXPLUS machines.
You can have up to 10 GB in your AFS home folder -- to increase your quota go to
https://resources.web.cern.ch/resources/Manage/AFS/Settings.aspx
You can also ask for more "work space" on AFS by clicking "Create AFS Workspace" on the same page. It'll give you another folder under /afs/cern.ch/work/ with a larger quota limit (~100 GB).
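To check how much of either quota you are already using, the standard AFS fs utility works on LXPLUS (the work-area path below assumes it follows the same initial/username layout as the home folder):
fs listquota /afs/cern.ch/user/${USER:0:1}/$USER/
fs listquota /afs/cern.ch/work/${USER:0:1}/$USER/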
On each LXPLUS machine you can also use the local /tmp folder (~100 GB) to store your temporary files. Keep in mind, though, that the machine can be rebooted at any moment and the /tmp folder gets cleaned after that.
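For example (the per-user directory name here is just a convention, not anything standard):
mkdir -p /tmp/$USER   # your own scratch area on this machine
cd /tmp/$USER
# ... produce temporary files here ...
# copy anything you want to keep back to AFS or EOS before logging out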
AFS is great for storing and working on your code, ROOT histograms, etc.
For large datasets you should use the EOS file system -- it is slower but more voluminous than AFS (several TB). Your personal EOS folder should be at /eos/ams/user/${USER:0:1}/$USER/ -- if it is not there, ask Sasha (Alexandre.Eline@cern.ch) to create one for you.
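To move files in and out of EOS you can use plain ls/cp where /eos is FUSE-mounted, or go through xrootd with xrdcp. A sketch (histos.root is a placeholder, and the eosams.cern.ch redirector is my assumption for the AMS instance -- check with colleagues if it doesn't work):
ls /eos/ams/user/${USER:0:1}/$USER/   # works where /eos is mounted
xrdcp histos.root root://eosams.cern.ch//eos/ams/user/${USER:0:1}/$USER/histos.root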
AMS-02 rootples for data and Monte Carlo are also stored on EOS:
/eos/ams/Data/AMS02/2014/ISS.B950/pass6/ -- the latest data rootples
/eos/ams/MC/AMS02/2014/ -- various Monte Carlo samples
Be careful when listing the Data folder above -- it contains lots of files and EOS is rather slow. I'd suggest doing something like ls /eos/ams/Data/AMS02/2014/ISS.B950/pass6/14028*
By default, when you log on to LXPLUS you get some quite outdated development tools: g++ 4.4, Python 2.6, and no ROOT.
You can get the most recent CERN-approved development tools at /cvmfs/sft.cern.ch/lcg/views/
There you'll need to source a file in <release>/<platform>/setup.[c]sh, e.g. with this command:
source /cvmfs/sft.cern.ch/lcg/views/LCG_91/x86_64-slc6-gcc7-opt/setup.sh
You'll get g++ 7.1, Python 2.7.13 with the latest scientific packages, CMake, ROOT 6.10, Geant4 10.03, etc.
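After sourcing the setup script you can quickly verify what the view gave you:
g++ --version | head -1   # should report 7.1
root-config --version     # should report 6.10/xx
python --version          # should report 2.7.13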
For more details check here:
https://indico.cern.ch/event/479888/contributions/1999187/attachments/1217382/1778496/LCG-Views-20160126.pdf
The AMS-02-specific environment uses custom-compiled ROOT and Geant. The whole environment resides at /cvmfs/ams.cern.ch/Offline/. Here you'll see the latest stable release B900_patches and the current development version vdev (might occasionally be broken) of the AMS software.
To compile code that can access the AMS rootples I usually use the following environment:
export AMSWD=/cvmfs/ams.cern.ch/Offline/vdev  # the AMS software tree (here: the development version)
export AMSDataDir=/cvmfs/ams.cern.ch/Offline/AMSDataDir  # auxiliary AMS data files
source /cvmfs/ams.cern.ch/Offline/root/Linux/root-v5-34-9-gcc64-slc6/bin/thisroot.sh  # the AMS-built ROOT 5.34
export AMSLIBso=$AMSWD/lib/linuxx8664gcc5.34/ntuple_slc6_PG.so  # the ntuple access library
source /cvmfs/sft.cern.ch/lcg/external/gcc/4.9.1/x86_64-slc6/setup.sh  # a newer gcc than the system default
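With that environment sourced, a quick sanity check (just a sketch -- it only verifies that ROOT can load the ntuple library, without assuming any AMS-specific class names):
echo 'gSystem->Load(gSystem->Getenv("AMSLIBso"));' | root -l -b
# no error message from Load() means the library and its dependencies resolved fine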
It is in principle possible to start a long-running task on an LXPLUS instance inside screen -- but, as I mentioned, the machines occasionally get rebooted. If you need to complete a large task, you should use LSF "bsub" job submission. These are the standard LSF commands:
bqueues -- check the status of the job queues
bsub -q <queue> command -- run command on LSF in queue <queue>
bjobs -- get the status of your running jobs
bkill -- kill a job
For <queue> you can select one of the standard queues -- bqueues will show you the list.
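For illustration, a submission could look like this (job.sh is a placeholder for your own executable script, and 8nh is just an example queue name -- pick one from the bqueues output):
bsub -q 8nh -J myjob -o job_%J.out ./job.sh   # %J expands to the LSF job ID
bjobs   # watch it run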
There are also several AMS queues -- I'm not really familiar with the specifics of those.
It looks like CERN is currently migrating from LSF to HTCondor.
As a result there is no good documentation on either...
Still, you can find a brief LSF description here:
https://twiki.cern.ch/twiki/bin/view/Main/BatchJobs#InFo
And HTCondor docs are here:
http://batchdocs.web.cern.ch/batchdocs/index.html
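For comparison, a minimal HTCondor submission following those batchdocs would look roughly like this (job.sub and job.sh are placeholder names; +JobFlavour is the CERN-specific knob for the job's time limit):
# job.sub
executable  = job.sh
output      = job.$(ClusterId).$(ProcId).out
error       = job.$(ClusterId).$(ProcId).err
log         = job.$(ClusterId).log
+JobFlavour = "workday"
queue
You then submit with condor_submit job.sub, monitor with condor_q, and kill with condor_rm <cluster_id>.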