====== Using VMD and NAMD on Mogon ======
  
===== Licensing Issues =====
  
We, the ZDV HPC group, will set up and compile [[http://www.ks.uiuc.edu/Research/vmd/|VMD]] and [[http://www.ks.uiuc.edu/Research/namd/|NAMD]] for you -- [[https://hpc.uni-mainz.de/high-performance-computing/service-angebot/softwareinstallation/|upon request]]. You will find the installed modules as
  
<code bash>
vis/VMD/<version_string>
chem/NAMD/<version_string>
</code>
  
Here, the ''<version_string>'' may differ between the clusters. If you are missing a particular version, please inform us.
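
A quick way to check which version strings are installed on the cluster you are logged into is the module system itself (a minimal example; the exact output format depends on the module installation):

<code bash>
$ module avail vis/VMD
$ module avail chem/NAMD
</code>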
  
<WRAP center round info 90%>
Both VMD and NAMD come with a license that prohibits us from //just installing// them. We therefore need individual users to print and sign the linked licenses ([[http://www.ks.uiuc.edu/Research/vmd/current/LICENSE.html|VMD license]] and [[http://www.ks.uiuc.edu/Research/namd/license.html|NAMD license]]) on paper. Subsequently, send them via internal mail ("Hauspost") to the "ZDV HPC group". We will then set the permissions accordingly.
  
When doing so, please include:
  * your name
  * your username
  * your email address (in case we need to approach you)
</WRAP>
  
===== Using VMD =====
  
<WRAP center round important 90%>
We generally do not provide support for using a particular piece of software with all its flags (or, in the case of a GUI, all its clicks). In this particular case we refer to the [[http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html|VMD tutorial]].
</WRAP>
  
That said, we are aware that starting the VMD GUI can be tricky and the relevant information hard to find.
  
==== Starting the graphical user interface (GUI) ====
  
Here we give some brief snippets covering the essentials:
  
  * First, we need to load the module and instruct VMD to start in its graphical, non-MPI mode:
  
<code bash>
$ module load vis/VMD   # loads the most recent installed version
$ export VMDNOMPI=1     # force the graphical, non-MPI mode
</code>
  
  * Next, we start an interactive session, e.g. with ''salloc''. Here is a little more information on [[slurm_submit#interactive_jobs|interactive jobs]]. With respect to the number of CPUs: not everything in VMD is parallelized. If you want to start a VMD MPI job, you should write a VMD script and start VMD [[slurm_submit#simple_mpi_job|like a conventional MPI application]] (a minimal batch sketch follows after this list).
  
<code bash>
$ salloc -A <your_account> -p short -c <number_of_cpus> -t <sufficient_time>
<snip: waiting for the session>
$ srun vmd <arguments>
</code>
  
  * When ending the session, do not forget to type ''exit'' to relinquish the allocation.
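
As referenced in the list above, a minimal batch sketch for running VMD non-interactively follows. The script name ''analysis.tcl'' and all ''<...>'' values are placeholders; ''-dispdev text'' and ''-e'' are VMD's standard flags for executing a script without opening the GUI:

<code bash>
#!/bin/bash

#SBATCH -t <time>
#SBATCH -p <partition>
#SBATCH -A <account>
#SBATCH -n <number_of_tasks>

module purge
module load vis/VMD

# run VMD without a display and execute the (placeholder) script analysis.tcl
srun vmd -dispdev text -e analysis.tcl
</code>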
  
==== Setting up simulation files for NAMD ====
  
<WRAP center round important 90%>
Molecular Dynamics (MD) simulations can be tricky to set up correctly. It is not unusual to make mistakes, which may consume lots of CPU time. The purpose of these snippets is not to teach running MD simulations in all detail, but to describe how to generate the most basic setup.
</WRAP>
  
The following section refers to [[http://www.ks.uiuc.edu/Training/Tutorials/namd/namd-tutorial-unix-html/node6.html|the VMD tutorial section on generating a PSF file]]:
  
  - load a new molecule
  - run the autopsf utility (Extensions -> Modeling -> Automatic PSF Builder)
  
This will create the following files: ''<molecule_prefix>_autopsf_formatted.pdb'', ''<molecule_prefix>_autopsf.log'', ''<molecule_prefix>_autopsf.pdb'', ''<molecule_prefix>_autopsf.psf''.
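
The same two steps can also be scripted. The following is a minimal sketch only, assuming a (hypothetical) input file ''input.pdb''; ''package require autopsf'' loads the same plugin the menu entry uses:

<code bash>
$ vmd -dispdev text input.pdb <<'EOF'
package require autopsf   ;# load the autopsf plugin
autopsf -mol 0            ;# build the PSF/PDB files for molecule 0
exit
EOF
</code>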
  
Of course, VMD offers many more options, most of which are more sophisticated than this description.
  
===== Using NAMD =====
  
-//bsub -<number of processes> -M 1000000 ​-R "​affinity[core(1)*2]"​ mpirun ​<PATH TO NAMD INSTALLATION>/namd2 <input configuration file>//+With a given [[http://​www.ks.uiuc.edu/​Training/​Tutorials/namd/namd-tutorial-unix-html/​node26.html|configuration file]], you **can** start NAMD like this: 
 + 
 +<code bash> 
 +#​!/​bin/​bash 
 + 
 +#​SBATCH ​-t <​time>​ 
 +#​SBATCH ​-<partition# e.g. nodeshort ​parallel 
 +#SBATCH -A <account> 
 +#SBATCH -N <rather start with a low number to test> 
 +#SBATCH -n <N * cores node> 
 + 
 +module purge 
 + 
 +module load chem/NAMD 
 + 
 +# the suffix '​namd_conf'​ is arbitrary 
 +srun namd2  <​prefix>​.namd_conf 
 +</​code>​ 
 + 
 +The configuration file content depends on the simulation to be carried out, however, the ''​coordinates''​ and ''​structure''​ parameters could simply refer to the autogenerated ''​.pdf''​ and ''​.psf''​ files produced by VMD as described, respectively.
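
To make this concrete, here is a minimal sketch of such a configuration file. All values are placeholders rather than recommendations; only ''structure'' and ''coordinates'' refer to the autopsf output described above, and a suitable CHARMM parameter file must be supplied by you:

<code>
# minimal sketch of <prefix>.namd_conf -- placeholder values, not recommendations
structure          <molecule_prefix>_autopsf.psf
coordinates        <molecule_prefix>_autopsf.pdb

paraTypeCharmm     on
parameters         <your_parameter_file>.prm

temperature        310
outputName         <prefix>_out

timestep           2.0
cutoff             12.0

run                1000    # number of MD steps
</code>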