====== Using VMD and NAMD on Mogon ======
  
===== Licensing Issues =====

We, the ZDV HPC group, will set up and compile [[http://www.ks.uiuc.edu/Research/vmd/|VMD]] and [[http://www.ks.uiuc.edu/Research/namd/|NAMD]] for you -- [[https://hpc.uni-mainz.de/high-performance-computing/service-angebot/softwareinstallation/|upon request]]. You will find the installed modules as:

<code bash>
vis/VMD/<version_string>
chem/NAMD/<version_string>
</code>

Here, the ''<version_string>'' may differ between the clusters. If you are missing a particular version, please let us know.
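
To see which versions are currently installed on the cluster you are working on, you can, for instance, list the matching modules:

<code bash>
$ module avail vis/VMD
$ module avail chem/NAMD
</code>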

<WRAP center round info 90%>
Both VMD and NAMD come with a license that does not permit us to //just install// them for everyone. We therefore need individual users to print and sign the linked licenses ([[http://www.ks.uiuc.edu/Research/vmd/current/LICENSE.html|VMD license]] and [[http://www.ks.uiuc.edu/Research/namd/license.html|NAMD license]]) on paper and subsequently send them via internal mail ("Hauspost") to the "ZDV HPC group". We will then set the permissions accordingly.

When doing so, please include:
  * your name
  * your username
  * your email address (in case we need to contact you)
</WRAP>
  
===== Using VMD =====

<WRAP center round important 90%>
We generally do not provide support for using a particular piece of software with all its flags (or, in the case of a GUI, all its clicks). In this particular case we refer to the [[http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html|VMD tutorial]].
</WRAP>

That being written, we are aware that starting the VMD GUI can be tricky and the relevant information hard to find.
  
==== Starting the graphical user interface (GUI) ====

Here, we give some brief snippets covering the essentials:

  * First, we need to load the module and instruct VMD to start in its graphical, non-MPI mode:

<code bash>
$ module load vis/VMD  # this loads the most recent installed version
$ export VMDNOMPI=1
</code>
  
  * Next, we start an interactive session, e.g. with ''salloc''. Here is a little more information on [[slurm_submit#interactive_jobs|interactive jobs]]. With respect to the number of CPUs: not everything in VMD is parallelized. If you want to start a VMD MPI job, you are better off writing a VMD script and starting VMD [[slurm_submit#simple_mpi_job|like a conventional MPI application]] (see the batch sketch after this list).

<code bash>
$ salloc -A <your_account> -c <number_of_cpus> -t <sufficient_time>
<waiting for the session to be granted>
$ srun vmd <arguments>
</code>

  * When ending the session, do not forget to type ''exit'' to relinquish the allocation.
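
For the scripted (non-interactive) route mentioned above, a minimal batch job could look like the following sketch. The resource values are placeholders, and ''my_analysis.tcl'' stands for whatever VMD/Tcl script you want to execute:

<code bash>
#!/bin/bash

#SBATCH -t <time>
#SBATCH -p <partition>
#SBATCH -A <account>
#SBATCH -n <number_of_tasks>

module purge
module load vis/VMD

# '-dispdev text' suppresses the GUI; '-e' executes the given Tcl script
srun vmd -dispdev text -e my_analysis.tcl
</code>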
  
==== Setting up simulation files for NAMD ====

<WRAP center round important 90%>
Molecular Dynamics (MD) simulations can be tricky to set up correctly. It is not unusual to make mistakes, which may waste lots of CPU time. The purpose of these snippets is not to teach running MD simulations in all detail, but to describe how to generate the most basic setup.
</WRAP>

The following section refers to [[http://www.ks.uiuc.edu/Training/Tutorials/namd/namd-tutorial-unix-html/node6.html|the VMD tutorial section on generating a PSF file]]:

  - load a new molecule
  - run the autopsf utility (Extensions -> Modeling -> Automatic PSF Builder)

This will create the following files: ''<molecule_prefix>_autopsf_formatted.pdb'', ''<molecule_prefix>_autopsf.log'', ''<molecule_prefix>_autopsf.pdb'' and ''<molecule_prefix>_autopsf.psf''.

Of course, VMD offers many more options, most of them more sophisticated than this description; a scripted alternative to the GUI steps above is sketched below.
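
If you need to regenerate PSF files repeatedly, the two GUI steps above can also be scripted. The following is an untested sketch only: ''protein.pdb'' is a placeholder input file, and the exact ''autopsf'' Tcl options should be verified against the autopsf plugin documentation before use.

<code bash>
# write a minimal VMD/Tcl script (the autopsf invocation is an assumption -- check the plugin docs)
cat > build_psf.tcl <<'EOF'
package require autopsf
mol new protein.pdb
autopsf -mol [molinfo top]
exit
EOF

# run VMD without the GUI and execute the script
vmd -dispdev text -e build_psf.tcl
</code>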
  
===== Using NAMD =====

With a given [[http://www.ks.uiuc.edu/Training/Tutorials/namd/namd-tutorial-unix-html/node26.html|configuration file]], you **can** start NAMD like this:
  
<code bash>
#!/bin/bash

#SBATCH -t <time>
#SBATCH -p <partition>  # e.g. nodeshort or parallel
#SBATCH -A <account>
#SBATCH -N <rather start with a low number to test>
#SBATCH -n <N * cores per node>

module purge
module load chem/NAMD

# the suffix 'namd_conf' is arbitrary
srun namd2 <prefix>.namd_conf
</code>
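
Assuming the script above has been saved as, e.g., ''namd_job.sh'' (the name is arbitrary), it is submitted like any other batch job:

<code bash>
$ sbatch namd_job.sh
$ squeue -u $USER   # check the status of your job
</code>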
  
The content of the configuration file depends on the simulation to be carried out; however, the ''coordinates'' and ''structure'' parameters can simply refer to the autogenerated ''.pdb'' and ''.psf'' files produced by VMD as described above, respectively.
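
For reference, the corresponding lines in the NAMD configuration file would then look roughly like this, with ''<molecule_prefix>'' being the prefix chosen during the autopsf run above:

<code>
structure      <molecule_prefix>_autopsf.psf
coordinates    <molecule_prefix>_autopsf.pdb
</code>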