====== NAMD ======
[[http://www.ks.uiuc.edu/Research/namd/]]\\

NAMD (**N**ot (just) **A**nother **M**olecular **D**ynamics program) is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 200,000 cores for the largest simulations.\\

^  NAMD  ^^
|**Version:**|2.9|
|**Release:**|April 2012|
|**License:**|[[http://en.wikipedia.org/wiki/University_of_Illinois_license|University of Illinois license]]|
|**Developer:**|Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL)|

===== Available Versions and Combinations =====

^ Version ^ Compiler                   ^ MPI                                      ^ Path                                        ^ Environment ^
| 2.9   | [[software:gcc|GCC]] 4.7.0 | [[software:intelmpi|intelmpi]] 4.0.3.008 |''/cluster/Apps/NAMD/gcc_4.7.0/impi_4.0.3.008/'' | |
| :::   | :::                        | [[software:openmpi|OpenMPI]] 1.6.1       |''/cluster/Apps/NAMD/gcc_4.7.0/openmpi_1.6.1/''  | |
| :::   | :::                        |                                          |''/cluster/Apps/NAMD/gcc_4.7.0/ibverbs/''  | |
| :::   | [[software:icc|Intel Composer]] 2011 SP1 10.319 | [[software:intelmpi|intelmpi]] 4.0.3.008 |''/cluster/Apps/NAMD/intel-studio-2011-sp1-10/impi_4.0.3.008/''| I_MPI_FABRICS=shm:ofa((This build of NAMD runs faster if you set the Intel MPI runtime variable ''I_MPI_FABRICS'' to ''shm:ofa''. Run ''export I_MPI_FABRICS=shm:ofa'' in your shell before starting NAMD.)) |
===== Usage =====
To use NAMD, you first need to load the corresponding modulefiles.\\

For single-CPU usage (where ''$PATH'' stands for the installation path from the table above, not the shell search path):
<code bash>$ $PATH/namd2 <configfile></code>
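As a minimal sketch of a serial run: the module names and the configuration file ''apoa1.namd'' below are assumptions for illustration, not cluster defaults — check ''module avail'' for the actual module names on this system.

<code bash>
# Load a matching compiler/MPI/NAMD combination from the table above
# (module names are hypothetical; verify with `module avail`)
module load gcc/4.7.0 intelmpi/4.0.3.008 namd/2.9

# Serial run; apoa1.namd is a placeholder for your NAMD configuration file
/cluster/Apps/NAMD/gcc_4.7.0/impi_4.0.3.008/namd2 apoa1.namd > apoa1.log
</code>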
- 
For parallel usage with LSF:
<code bash>$ bsub -a <mpi> -q <queuename> -o <outputfile> -e <errorfile> -n <number of cores> -app Reserve1900M -R 'span[ptile=32]' mpirun $PATH/namd2 <configfile></code>
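For illustration, a filled-in submission might look like the following; the queue name ''normal'', the core count, and the file names are assumptions and will differ on your cluster:

<code bash>
# Hypothetical 64-core Intel MPI run of the GCC build across two 32-core nodes
bsub -a intelmpi -q normal \
     -o namd.%J.out -e namd.%J.err \
     -n 64 -app Reserve1900M -R 'span[ptile=32]' \
     mpirun /cluster/Apps/NAMD/gcc_4.7.0/impi_4.0.3.008/namd2 apoa1.namd
</code>

''%J'' is expanded by LSF to the job ID, so each job writes its own output and error files.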

===== Notes =====
For computations with up to around 256 cores, the ''gcc_4.7.0/impi_4.0.3.008/'' build is faster than ''intel-studio-2011-sp1-10/impi_4.0.3.008/''; at higher core counts the ordering reverses.