===== ATLAS on Mogon2 =====

under construction ...

==== CvmFS ====

CvmFS is installed on the user interfaces as well as on the worker nodes. It is a (read-only) network file system designed to distribute software from CERN.
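
As a quick sanity check, you can verify that CvmFS is mounted and the ATLAS repository is visible by listing the path that the setup below relies on:

<code bash>
ls /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
</code>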

=== Setup ===

Put this in your .bashrc:

<code bash>
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
</code>

Then, you can enable the ATLAS environment with:

<code bash>
setupATLAS
</code>
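
Once the environment is active, the ''lsetup'' command becomes available to load individual tools. A minimal sketch (which tools and versions are available depends on the CvmFS installation):

<code bash>
setupATLAS     # alias defined in .bashrc above
lsetup rucio   # load the rucio client (used in the grid site sections below)
lsetup git     # example: load a recent git version
</code>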

==== Mogon2 Gridsite ====

=== General remarks ===

  - You should only store data on Mogon that is related to your work on Mogon. The fileserver is not intended as a backup system.
  - We want to reserve miifs02 for the grid site. All your personal (Mogon-related) data should be stored on /gpfs/fs7 (more details below).
  - Data you want to archive and do not need to access on a regular basis can be stored in the Mogon archive using iRODS (see the sketch after the links below).

Introduction: https://mogonwiki.zdv.uni-mainz.de/dokuwiki/data_management:irods

HowTo: https://mogonwiki.zdv.uni-mainz.de/dokuwiki/archiving:preparation
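
A minimal archiving sketch using the iRODS icommands, assuming your iRODS environment is already configured as described in the two pages above (the file name is a placeholder):

<code bash>
iinit                        # authenticate against the iRODS server
iput -K my_results.tar.gz    # upload a file; -K verifies the checksum
ils                          # list the archived files
</code>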

=== Request samples ===

Use rucio to store samples on our grid site (via https://rucio-ui.cern.ch/r2d2/request) instead of downloading them to a local folder. This way, users can share datasets, and data that is no longer needed will be removed after the lifetime you define there. For all rucio operations, you first have to call:
<code bash>
lsetup rucio
voms-proxy-init -voms atlas -hours HOUR
</code>
to initialize a VOMS proxy valid for HOUR hours.
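
You can check the remaining lifetime of your proxy at any time with:

<code bash>
voms-proxy-info --all
</code>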

Once you have stored a DID on the grid site, you can find the corresponding files using:
<code bash>
rucio list-file-replicas DID | grep MAINZ | sed "s|^.*MAINZ|MAINZ|" | awk '{print $2}' | cut -d "=" -f2 | sed "s|^|/lustre/miifs02/storm|"
</code>
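
If you need these paths regularly, you can wrap the pipeline in a small shell function for your .bashrc (a hypothetical convenience helper; the name ''mainz_replicas'' is not part of any official tooling):

<code bash>
# hypothetical helper: print local filesystem paths for all MAINZ replicas of a DID
mainz_replicas () {
    # usage: mainz_replicas <DID>
    rucio list-file-replicas "$1" \
        | grep MAINZ \
        | sed "s|^.*MAINZ|MAINZ|" \
        | awk '{print $2}' \
        | cut -d "=" -f2 \
        | sed "s|^|/lustre/miifs02/storm|"
}
</code>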
 + 
 +=== Upload samples === 
 +  
 +You can store your results of your analysis on our grid site using rucio upload instead of copying it to the scratch space by: 
 +<code bash> 
 +rucio upload --rse MAINZ_LOCALGROUPDISK --register-after-upload —lifetime 15552000 —name NAME FILE 
 +</​code>​ 
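
''--lifetime'' is given in seconds; the value 15552000 used above corresponds to 180 days:

<code bash>
echo $(( 180 * 24 * 3600 ))   # prints 15552000
</code>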

Alternatively, you can do the same for a group of files, e.g.:
<code bash>
rucio upload --rse MAINZ_LOCALGROUPDISK --register-after-upload user.dta:Embedding_DAODs folder/files_in_folder.*.root
</code>
''--register-after-upload'' registers the files in rucio only after a successful upload, which is especially important when uploading large datasets. Adjust the username (dta in this case) and the file list to create your own group. These files can be found via:
<code bash>
rucio list-dataset-replicas user.dta:Embedding_DAODs
</code>
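
To retrieve such a dataset again later, e.g. on another machine, you can use ''rucio download'' (shown here with the example dataset from above):

<code bash>
rucio download user.dta:Embedding_DAODs
</code>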

==== Blacklisting of sites ====

**Your action:** You have to blacklist the sites in the table below for all GRID actions (DaTRI, pathena, prun, dq2)!

In detail, these have to be blacklisted:
  
=== Australia-ATLAS ===
  
== DDM Endpoints ==
  
  * AUSTRALIA-ATLAS_DATADISK
  * AUSTRALIA-ATLAS_T2ATLASLOCALGROUPDISK
  
== PANDA Australia-ATLAS ==
  
  * ANALY_AUSTRALIA
  
  
=== TRIUMF-LCG2 ===
  
== DDM Endpoints ==
  
  * TRIUMF-LCG2-MWTEST_DATADISK
  * TRIUMF-LCG2_SOFT-TEST
  
== PANDA: TRIUMF ==
  
  * ANALY_TEST
  * TRIUMF_VIRTUAL
  
For most GRID actions (pathena, prun, dq2), it is sufficient to add these parameters:

''pathena --excludedSite=ANALY_TRIUMF,ANALY_AUSTRALIA''
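
''prun'' accepts the same option. A sketch with placeholder script and output dataset names:

<code bash>
# myAnalysis.py and user.USERNAME.myOutput are placeholders
prun --excludedSite=ANALY_TRIUMF,ANALY_AUSTRALIA --exec "python myAnalysis.py" --outDS user.USERNAME.myOutput
</code>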

DaTRI requests (on the PANDA web interface) will inform you, with green text at the bottom of the request summary before you submit, that the transfer will not work. If this occurs, please do not submit the request! It might in the end lead to an exclusion of our Mainz site from the grid, as it causes big trouble in the system.
  
==== Transfer to FZK ====
  
If your datasets are only at one of these sites, please request a replica (DaTRI user request in the PANDA web interface) to Karlsruhe **FZK-LCG2_SCRATCHDISK**. When the replica is complete, the exclusion should work.
  
=== Cancellation of data transfers ===
  
Firstly, you have to identify the dataset's name. Go to [[http://panda.cern.ch/server/pandamon/query?mode=ddm_pathenareq&action=List|Panda]] and fill in the "Data Pattern" with the name of the dataset (e.g., user.tlin*). Choose "Request status" as "transfer" and click the button "list" to get all of your datasets that are currently transferring.

You can check the status again as detailed in the first step. It should now have the status "stopped".
  
==== Monitoring ====
Some links to check the status of ''mainz''.
=== Grid ===