Molecular Dynamics Simulation Software

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator.

LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

For further information, please visit the LAMMPS website.

LAMMPS

  • Modules:
    • intel/composer/2013_4_183
    • fftw/3.3.4/intel/composer2013-4.183/interlagos
    • mpi/intelmpi/5.1.0.079
    • software/lammps/7Dec15
  • Description: see the introduction at the top of this page.

  • Binaries: lmp_mpi and lmp_serial plus those in the module's tools directory.
  • URL(s): http://lammps.sandia.gov (the LAMMPS homepage)
  • Docs: see the documentation section of the LAMMPS website.
  • Example call(s): To run LAMMPS, you need to provide an input script, for example “in.script”, and a data file that the input script reads to start the simulation. In the following example, “16” is the number of cores requested.
  mpirun -n 16 -envall lmp_mpi < in.script
  • LSF considerations (e.g. bsub example): an example of a job submission script:
#!/bin/bash
#BSUB -J lammps-test      # job name
#BSUB -o test.log         # standard output file
#BSUB -e test.err         # standard error file
#BSUB -q nodeshort        # queue
#BSUB -W 300              # runtime in min
#BSUB -n 64               # number of cores
mpirun -n 64 -envall lmp_mpi < in.fene1000x10

Assuming the script above is saved as “lammps_job.sh”, you can submit the job as: bsub < lammps_job.sh

  • Please load all the required modules before running LAMMPS; the sketch below shows an example sequence.
  • contact: sjabbari (Institut für Physik)
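
For orientation, here is a rough end-to-end sketch of a complete LAMMPS session. The module names are taken from the list above; the input commands are a standard, self-contained Lennard-Jones “melt”-style example in plain LAMMPS input syntax (it creates its own atoms, so no separate data file is needed), and the file name and core count are illustrative only:

#!/bin/bash
# Load the toolchain listed above (compiler, FFTW, MPI, then LAMMPS)
module load intel/composer/2013_4_183
module load fftw/3.3.4/intel/composer2013-4.183/interlagos
module load mpi/intelmpi/5.1.0.079
module load software/lammps/7Dec15

# Write a minimal, self-contained Lennard-Jones input script;
# a production run would typically use read_data to load a data file instead
cat > in.script << 'EOF'
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 1.44 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
fix             1 all nve
run             1000
EOF

# Run on 16 cores (match -n to the number of cores requested from LSF)
mpirun -n 16 -envall lmp_mpi < in.script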

HOOMD

HOOMD-blue is a general-purpose particle simulation toolkit. It scales from a single CPU core to thousands of GPUs. On a single NVIDIA GPU, HOOMD-blue performs an order of magnitude faster than a multi-core CPU in typical benchmarks.

  • Modules:
    • gcc/4.6.2
    • cuda/gcc-4.6.2/6.5.14
    • mpi/openmpi/1.8.7/gcc_4.4.7-cuda_6.5.14
    • software/hoomd/1.0.5/gcc-4.6.2_cuda-6.5.14/nompi
    • software/hoomd/1.0.5/gcc-4.6.2_cuda-6.5.14/openmpi-1.8.7
    • software/hoomd/1.3.3/gcc-4.6.2_cuda-6.5.14/nompi
  • Description: see the introduction above. You define particle initial conditions and interactions in a high-level Python script, then tell HOOMD-blue how you want to execute the job and it takes care of the rest. Python job scripts give you unlimited flexibility to create custom initialization routines, control simulation parameters, and perform in situ analysis. A minimal example script is sketched at the end of this section.

  • Binaries: hoomd and hoomd-config.sh plus those in the module's tools directory.
  • URL(s): http://glotzerlab.engin.umich.edu/hoomd-blue/ (the HOOMD-blue homepage)
  • Example call(s): an example job script (saved, say, as 'submit_hoomd.sh'):
#!/bin/bash
#BSUB -e $HOME/my_working_directory/err.log
#BSUB -o $HOME/my_working_directory/out.log
#BSUB -W 300/g0001        # runtime in min (normalized to host g0001)
#BSUB -q gpushort         # GPU queue
#BSUB -n 1                # number of cores
#BSUB -R 'rusage[cuda=1]' # request one GPU
#BSUB -app Reserve1800M   # reserve 1800 MB of memory
#BSUB -J hoomd-test       # job name

# send SIGINT shortly before the wall-clock limit so HOOMD can stop cleanly
timeout -s 2 290m hoomd ../myscript.py

One can then submit the job as: bsub < submit_hoomd.sh

  • LSF considerations (e.g. bsub example for submitting a job directly from the command line, without a bash script):
  bsub -e err.log -o out.log -W 300/g0001 -R 'rusage[cuda=1]' -app Reserve1800M -q gpushort -n 1 hoomd myscript.py --mode=gpu
This submits the job specified in 'myscript.py' to a single node in the gpushort queue (the other available GPU queues can be used here as well). The job may run for up to 300 minutes and reserves 1800 MB of memory.
  • In the working directory one should have a data file (written in XML format) that specifies particle positions, velocities, interactions, etc., and that the input script, say 'myscript.py', reads in order to start the simulation.
  • contact: milchev (Institut für Physik)
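
As a similar rough sketch for HOOMD: the module names are taken from the list above, and the Python commands follow the HOOMD-blue 1.x 'hoomd_script' API; the script contents and file names are illustrative, not a site-specific template:

#!/bin/bash
# Load the toolchain listed above (GPU build with OpenMPI)
module load gcc/4.6.2
module load cuda/gcc-4.6.2/6.5.14
module load mpi/openmpi/1.8.7/gcc_4.4.7-cuda_6.5.14
module load software/hoomd/1.0.5/gcc-4.6.2_cuda-6.5.14/openmpi-1.8.7

# Write a minimal HOOMD-blue 1.x Python job script; a real run would
# typically read the XML data file instead (e.g. via init.read_xml)
cat > myscript.py << 'EOF'
from hoomd_script import *
init.create_random(N=1000, phi_p=0.2)               # random Lennard-Jones fluid
lj = pair.lj(r_cut=2.5)
lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)
integrate.mode_standard(dt=0.005)
integrate.nvt(group=group.all(), T=1.2, tau=0.5)
run(10000)
EOF

# Submit the job script from the 'Example call(s)' section above
bsub < submit_hoomd.sh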