
New and Updated Software: Portland Group Compiler and ANSYS

Posted on Tuesday, 29 January, 2013

Two new sets of software have been installed on PACE-managed systems – PGI 12.10 and ANSYS 14.5 service pack 1.

PGI 12.10

The Portland Group, Inc. (a.k.a. PGI) makes software compilers and tools for parallel computing. The Portland Group offers optimizing parallel Fortran 2003, C99 and C++ compilers and tools for workstations, servers and clusters running Linux, MacOS or Windows operating systems.

This version of the compiler supports the OpenACC GPU programming directives.
More information can be found at The Portland Group website.
Information about using this compiler with the OpenACC directives can be found at PGI Insider and OpenACC.
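As a quick illustration, here is a sketch of compiling an OpenACC-annotated program (saxpy.f90 is a hypothetical Fortran source containing an !$acc-annotated loop; the -acc flag enables the OpenACC directives and -Minfo=accel reports what the compiler accelerated):

$ module load pgi/12.10
$ pgfortran -acc -Minfo=accel saxpy.f90 -o saxpy
$ ./saxpy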

Usage Example

$ module load pgi/12.10
$ pgfortran example.f90
$ ./a.out
Hello World

ANSYS 14.5 Service Pack 1

ANSYS develops, markets and supports engineering simulation software used to predict how product designs will behave and how manufacturing processes will operate in real-world environments.

Usage Example

$ module load ansys/14.5
$ ansys145
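For non-interactive runs, ANSYS can also be launched in batch mode; a sketch (input.dat and output.out are hypothetical file names):

$ module load ansys/14.5
#-b selects batch mode; -i and -o name the input and output files
$ ansys145 -b -i input.dat -o output.out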

Collapsing nvidiagpu and nvidia-gpu queues

Posted on Wednesday, 16 January, 2013

PACE has several nodes with NVidia GPUs installed.
There are currently two queues (nvidiagpu and nvidia-gpu) that have GPU nodes assigned to them.
It is confusing to have two queues with the same purpose and slightly different names, so PACE will be collapsing both queues into the “nvidia-gpu” queue.
That means that the nvidiagpu queue will disappear, and the nvidia-gpu queue will have all of the resources contained by both queues.
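If your job scripts request the old queue, updating them is a one-line change to the PBS directive:

#Old queue request:
#PBS -q nvidiagpu
#New queue request:
#PBS -q nvidia-gpu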

Please send any questions or concerns to pace-support@oit.gatech.edu

Symposium: Integrating Computational Science into your Undergraduate Curriculum

Posted on Monday, 14 January, 2013

Clemson University (Clemson, SC) is hosting a symposium on February 11, 12, and 13.
The topic is “Integrating Computational Science into your Undergraduate Curriculum”
The workshop, symposium and training are open at no charge to all interested faculty and students who register to attend.
Financial assistance for primarily undergraduate faculty is available to cover travel costs.

See the Symposium website for the agenda and registration information.

New Software: VASP 5.3.2

Posted on Wednesday, 12 December, 2012

VASP 5.3.2 – Normal, Gamma, and Non-Collinear versions

Version 5.3.2 of VASP has been installed.
The newly installed versions have been checked against our existing tests; the results agree with the expected values to within a small numerical tolerance.
Please check this new version against your known correct results!

Using it

#First, load the required compiler 
$ module load intel/12.1.4
#Load all the necessary support modules
$ module load mvapich2/1.6 mkl/10.3 fftw/3.3
#Load the vasp module
$ module load vasp/5.3.2
#Run vasp
$ mpirun vasp
#Run the gamma-only version of vasp
$ mpirun vasp_gamma
#Run the noncollinear version of vasp
$ mpirun vasp_noncollinear
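For batch runs, here is a sketch of a job script fragment (the queue, node count, and walltime are placeholders; adjust them to your allocation):

#PBS -q force-6
#PBS -l nodes=2:ppn=8
#PBS -l walltime=4:00:00

cd $PBS_O_WORKDIR
module load intel/12.1.4 mvapich2/1.6 mkl/10.3 fftw/3.3 vasp/5.3.2
mpirun -rmk pbs vasp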

Compilation Notes

  • Only the Intel compiler generated MPI-enabled vasp binaries that correctly executed the test suite.
  • The “vasp” binary was compiled with these preprocessor flags: -DMPI -DHOST=\"LinuxIFC\" -DIFC -DCACHE_SIZE=12000 -DMINLOOP=1 -DPGF90 -Davoidalloc -DNGZhalf -DMPI_BLOCK=8000
  • The “vasp_gamma” binary was compiled with these preprocessor flags: -DMPI -DHOST=\"LinuxIFC\" -DIFC -DCACHE_SIZE=12000 -DMINLOOP=1 -DPGF90 -Davoidalloc -DNGZhalf -DwNGZhalf -DMPI_BLOCK=8000
  • The “vasp_noncollinear” binary was compiled with these preprocessor flags: -DMPI -DHOST=\"LinuxIFC\" -DIFC -DCACHE_SIZE=12000 -DMINLOOP=1 -DPGF90 -Davoidalloc -DMPI_BLOCK=8000

New and Updated Software: BLAST, COMSOL, Mathematica, VASP

Posted on Friday, 7 December, 2012

All of the software detailed below is available through the “modules” system installed on all PACE-managed Red Hat Enterprise Linux 6 computers.
For basic usage instructions on PACE systems see the Using Software Modules page.

NCBI BLAST 2.2.25 – Added multithreading in new GCC 4.6.2 version

The 2.2.25 version of BLAST that was compiled with GCC 4.4.5 has multithreading (i.e. multi-CPU execution) disabled.
A new version of BLAST with multithreading enabled has been compiled with the GCC 4.6.2 compiler.

Using it

#First, load the required compiler 
$ module load gcc/4.6.2
#Now load BLAST
$ module load ncbi_blast/2.2.25
#Setup the environment so that blast can find the database
$ export BLASTDB=/path/to/db
#Run a nucleotide-nucleotide search
$ blastn -query /path/to/query/file -db <db_name> -num_threads <number of CPUS allocated to job>
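In a job script, the -num_threads value should match the number of CPUs requested from the scheduler; a sketch (the ppn value of 8 is just an example):

#PBS -l nodes=1:ppn=8

cd $PBS_O_WORKDIR
module load gcc/4.6.2 ncbi_blast/2.2.25
export BLASTDB=/path/to/db
blastn -query /path/to/query/file -db <db_name> -num_threads 8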

COMSOL 4.3a – Student and Research versions

COMSOL Multiphysics version 4.3a contains many new functions and additions to the COMSOL product suite.
See the COMSOL Release Notes for information on new functionality in existing products and an overview of new products.

Using it

#Load the research version of comsol 
$ module load comsol/4.3a-research
$ comsol ...
#Use the matlab livelink
$ module load matlab/r2011b
$ comsol -mlroot ${MATLAB}
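COMSOL can also run without the GUI; a sketch of batch usage (model.mph and out.mph are hypothetical file names):

$ module load comsol/4.3a-research
$ comsol batch -inputfile model.mph -outputfile out.mph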

Mathematica 9.0

Mathematica 9 is a major update to the Mathematica software.

Using it

$ module load mathematica/9.0 
$ mathematica
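For non-interactive work, the command-line kernel can evaluate a script directly (script.m is a hypothetical file):

$ module load mathematica/9.0
#math is the command-line kernel; -script evaluates the file and exits
$ math -script script.m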

VASP 5.2.12

The pre-calculated kernel for the vdW-DF functional has been installed into the same directory as the vasp binary.
This pre-calculated kernel is contained in the file “vdw_kernel.bindat”.

Using it

#First, load the vasp module (and all the prerequisites) 
$ module load intel/12.1.4 mvapich2/1.6 mkl/10.2 fftw/3.3 vasp/5.2.12
#Copy the kernel to where vasp expects (normally the working directory)
$ cp ${VDW_KERNEL} .
# Run vasp
$ mpirun vasp
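Enabling the vdW-DF functional also requires the corresponding INCAR settings; a minimal hypothetical fragment (verify these against the VASP manual for your calculation):

GGA = RE
LUSE_VDW = .TRUE.
AGGAC = 0.0000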

New and Updated Software: Java, MUMPS, SCOTCH, ParMETIS, OpenFOAM, trf, CUDA, lagan, MPJ Express, R, Wireshark, Sharktools

Posted on Thursday, 6 December, 2012

We have lots of updated software this time.
I’ve been putting off an update for other reasons, and now we have a lot to cover.
Remember that all of this software is available through the “modules” system installed on all PACE-managed Red Hat Enterprise Linux 6 computers.
For basic usage instructions on PACE systems see the Using Software Modules page.

Java 7

Here is a brief summary of the enhancements included with the Java 7 release:

  • Improved performance, stability and security.
  • Enhancements in the Java Plug-in for Rich Internet Applications development and deployment.
  • Java programming language enhancements that make it easier for developers to write and optimize Java code.
  • Enhancements in the Java Virtual Machine to support non-Java languages.

There are a large number of enhancements in JDK 7.
See the JDK 7 website for more information.

Using it

$ module avail java 
java/1.7.0

$ module load java/1.7.0
#Checking that you are using the right version
$ which java
/usr/local/packages/java/1.7.0/bin/java
$ which javac
/usr/local/packages/java/1.7.0/bin/javac

Note: The java/1.7.0 module adds “.” to the CLASSPATH environment variable.
If you don’t know what that means, see the Wikipedia page.
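For example, with "." on the CLASSPATH you can compile and run a class straight from your working directory (HelloWorld.java is a hypothetical source file that prints a greeting):

$ javac HelloWorld.java
$ java HelloWorld
Hello, world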

Scotch and PT-Scotch 5.1.12

Scotch is a software package and set of libraries for sequential and parallel graph partitioning, static mapping and clustering, sequential mesh and hypergraph partitioning, and sequential and parallel sparse matrix block ordering.

Using it

#First load a compiler - almost any compiler will work: 
$ module load gcc/4.6.2
#Load an MPI distribution - any of them should work:
$ module load openmpi/1.4.3
#Compile an application using the ptscotch library:
$ mpicc mpi_application.c ${LDFLAGS} -lptscotch

ParMETIS 3.2.0 and 4.0.2

ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed by the ParMETIS authors.

ParMETIS provides the following five major functions:

  • Graph Partitioning
  • Mesh Partitioning
  • Graph Repartitioning
  • Partitioning Refinement
  • Matrix Reordering

Using it

#First load a compiler - almost any compiler will work: 
$ module load intel/12.1.4
#Load an MPI distribution - any of them should work:
$ module load mvapich2/1.6
#Compile an application using the parmetis library:
$ mpicc mpi_application.c ${LDFLAGS} -lparmetis -lmetis

MUMPS 4.10.0

MUMPS is a MUltifrontal Massively Parallel sparse direct Solver.
Main Features:

  • Solution of large linear systems with symmetric positive definite matrices; general symmetric matrices; general unsymmetric matrices;
  • Version for complex arithmetic;
  • Parallel factorization and solve phases (uniprocessor version also available);
  • Iterative refinement and backward error analysis;
  • Various matrix input formats: assembled format; distributed assembled format; elemental format;
  • Partial factorization and Schur complement matrix (centralized or 2D block-cyclic);
  • Interfaces to MUMPS: Fortran, C, Matlab and Scilab;
  • Several orderings interfaced: AMD, AMF, PORD, METIS, PARMETIS, SCOTCH, PT-SCOTCH.

Using it

#First load a compiler - almost any compiler will work: 
$ module load gcc/4.6.2
#Load an MPI distribution - any of them should work:
$ module load openmpi/1.4.3
# Load the rest of the prerequisites (other solvers and libraries)
$ module load mkl/10.3 scotch/5.1.12 parmetis/3.2.0
#Compile your application and link against the correct mumps library:
$ mpicc mpi_application.c ${LDFLAGS} -lcmumps

OpenFOAM 2.1.x

OpenFOAM is a free, open source CFD software package developed by OpenCFD Ltd at ESI Group and distributed by the OpenFOAM Foundation. It has a large user base across most areas of engineering and science, from both commercial and academic organisations. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.

Using it

#Unload any compiler and MPI modules you may have loaded: 
$ module list
pgi/12.3 openmpi/1.5.4 acml/5.2.0
#pgi/12.3 and openmpi/1.5.4 are just examples.
$ module rm openmpi/1.5.4 pgi/12.3
# Load the openfoam module
$ module load openfoam/2.1.x
ERROR: The directory ~/scratch/OpenFOAM/2.1.x must exist
OpenFOAM module not loading
execute "mkdir -p ~/scratch/OpenFOAM/2.1.x" to create this directory
#Oops - the openfoam module requires that we have a particular directory for openfoam to work with.
$ mkdir -p ~/scratch/OpenFOAM/2.1.x
#Now load the openfoam module again
$ module load openfoam/2.1.x
#Test that openfoam is OK
$ foamInstallationTest
#If this command succeeded, everything is OK.
#Testing openfoam
$ cd ~/scratch/OpenFOAM/2.1.x
$ cp -r ${FOAM_TUTORIALS}/tutorials/basic .
$ cd basic/laplacianFoam/flange/
$ ./Allclean
$ ./Allrun
ansysToFoam: converting mesh flange.ans
Running laplacianFoam on ~/scratch/OpenFOAM/2.1.x/basic/laplacianFoam/flange
Running foamToFieldview9 on ~/scratch/OpenFOAM/2.1.x/basic/laplacianFoam/flange
Running foamToEnsight on ~/scratch/OpenFOAM/2.1.x/basic/laplacianFoam/flange
Running foamToVTK on ~/scratch/OpenFOAM/2.1.x/basic/laplacianFoam/flange

trf (Tandem Repeats Finder)

A tandem repeat in DNA is two or more adjacent, approximate copies of a pattern of nucleotides. Tandem Repeats Finder is a program to locate and display tandem repeats in DNA sequences.

Using it

$ module load trf/4.07b 
$ trf
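Run with no arguments, trf prints its usage summary. A typical invocation on a FASTA file, using the parameter values recommended in the trf documentation (sequence.fa is a hypothetical input; -d and -m write a data file and a masked sequence file):

$ trf sequence.fa 2 7 7 80 10 50 500 -d -m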

CUDA 5.0.35

CUDA is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

Using it

$ module load cuda/5.0.35 
#Use nvcc to compile a CUDA application (CUDA sources conventionally use the .cu extension)
$ nvcc application.cu

LAGAN

The LAGAN toolkit is a set of tools for local, global, and multiple alignment of DNA sequences.

Using it

#Load a compiler module 
$ module load gcc/4.7.2
#Load the lagan module
$ module load lagan/2.0
$ lagan.pl

MPJ Express

MPJ Express is an open source Java message passing library that allows application developers to write and execute parallel applications for multicore processors and compute clusters/clouds.

Using it

#MPJ needs to store log files and cannot do so in the system-install location. 
#We need to create a place for it to put log data.
$ mkdir -p ~/mpj/logs
$ module load mpj/0.38
#Inside a job script:
$ mpjboot machinefile
$ mpjrun.sh ... application.jar
$ mpjhalt machinefile
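Putting those pieces together, here is a sketch of a complete job script (node counts are placeholders, and the -np and -dev values are examples; check the MPJ Express documentation for the options appropriate to your job):

#PBS -l nodes=2:ppn=4
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
module load java/1.7.0 mpj/0.38
#Build a machine file from the nodes PBS assigned to this job
cat $PBS_NODEFILE | sort -u > machinefile
mpjboot machinefile
mpjrun.sh -np 8 -dev niodev -jar application.jar
mpjhalt machinefile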

R 2.15.2

R is a free software environment for statistical computing and graphics.

Using it

$ module load R/2.15.2 
$ R
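For batch jobs, R scripts can be run non-interactively (analysis.R is a hypothetical script):

$ module load R/2.15.2
$ Rscript analysis.R
#or, to keep a transcript of the session:
$ R CMD BATCH analysis.R analysis.Rout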

Wireshark 1.4.15, 1.6.12, 1.8.4

Wireshark is the world’s foremost network protocol analyzer. It lets you capture and interactively browse the traffic running on a computer network. It is the de facto (and often de jure) standard across many industries and educational institutions.

Using it

$ module load wireshark/1.8.4 
$ wireshark
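The Wireshark GUI requires X-Forwarding; for command-line work, the tshark analyzer that ships with Wireshark may be more convenient (assuming the module includes it; capture.pcap is a hypothetical capture file):

$ module load wireshark/1.8.4
$ tshark -r capture.pcap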

Sharktools

Sharktools is a Matlab and Python front end to Wireshark.

Using it

#Load the necessary prerequisites 
$ module load wireshark/1.4.15 matlab/r2011b python/2.7.2
#Load sharktools
$ module load sharktools/0.15
#Start python and import the pyshark module
$ python
>>> import pyshark
...

VASP Calculation Errors

Posted on Monday, 29 October, 2012

UPDATE: The VASP binaries that generate incorrect results have been DELETED.

One of the versions of VASP installed on all RHEL6 clusters can generate incorrect answers.
The DFT energies calculated are correct, but the forces may not be correct.

The affected vasp binaries are located here:
/usr/local/packages/vasp/5.2.12/mvapich2-1.6/intel-12.0.0.084/bin/vasp
/usr/local/packages/vasp/5.2.12/mvapich2-1.7/intel-12.0.0.084/bin/vasp
/usr/local/packages/vasp/5.2.12/openmpi-1.4.3/intel-12.0.0.084/bin/vasp
/usr/local/packages/vasp/5.2.12/openmpi-1.5.4/intel-12.0.0.084/bin/vasp

All affected binaries were compiled with the intel/12.0.0.084 compiler.

Solution:
Use a different vasp binary – versions compiled with the intel/10.1.018 and intel/11.1.059 compilers have been checked for correctness.
Neither of those compilers generates incorrect answers on the test cases that discovered the error.

Here is an excerpt from a job script that uses a correct vasp binary:

###########################################################

#PBS -q force-6
#PBS -l walltime=8:00:00

cd $PBS_O_WORKDIR

module load intel/11.1.059 mvapich2/1.6 vasp/5.2.12
which vasp
#This "which vasp" command should print this:
#/usr/local/packages/vasp/5.2.12/mvapich2-1.6/intel-11.1.059/bin/vasp
#If it prints anything other than this, the modules loaded are not as expected, and you are not using the correct vasp.

mpirun -rmk pbs vasp
##########################################################

We now have a test case with known correct results that will be checked every time a new vasp binary is installed.
This step will prevent this particular error from occurring again.
Unless there are strenuous objections, this version of vasp will be deleted from the module that loads it (today) and the binaries will be removed from /usr/local/packages/ (in one week).

Thank you, Ambarish, for reporting this issue.

Let us know if you have any questions, concerns, or comments.

Maintenance Day (October 16, 2012)

Posted on Tuesday, 16 October, 2012

PACE Maintenance Day is underway.
All compute nodes are off, and all login nodes should be inaccessible.

New and Updated Software: GCC, Maxima, OpenCV, Boost, ncbi_blast

Posted on Tuesday, 25 September, 2012

Software Installation and Updates

We have had several requests for new or updated software since the last post on August 14.
Here are the details about the updates.
All of this software is installed on RHEL6 clusters (including force-6, uranus-6, ece, math, apurimac, joe-6, etc.)

GCC 4.7.2

The GNU Compiler Collection (GCC) includes compilers for many languages (C, C++, Fortran, Java, and Go).
This latest version of GCC supports advanced optimizations for the latest compute nodes in PACE.

Here is how to use it:

$ module load gcc/4.7.2
$ gcc <source.c>
$ gfortran <source.f>
$ g++ <source.cpp>

Versions of GCC already installed on RHEL6 clusters are gcc/4.4.5, gcc/4.6.2, and gcc/4.7.0.

Maxima 5.28.0

Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, and sets, lists, vectors, matrices, and tensors. Maxima yields high precision numeric results by using exact fractions, arbitrary precision integers, and variable precision floating point numbers. Maxima can plot functions and data in two and three dimensions.

Here is how to use it:

$ module load clisp/2.49.0 maxima/5.28.0
$ maxima
#If you have X-Forwarding turned on, "xmaxima" will display a GUI with a tutorial
$ xmaxima
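A small sample session (output spacing abbreviated):

(%i1) diff(sin(x), x);
(%o1) cos(x)
(%i2) integrate(1/(1+x^2), x);
(%o2) atan(x)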

OpenCV 2.4.2

OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision.

OpenCV is released under a BSD license, so it is free for both academic and commercial use. It has C++, C, and Python interfaces (with Java coming soon) running on Windows, Linux, Android and Mac. The library has more than 2500 optimized algorithms.

This installation of OpenCV includes support for Python and NumPy; it was built without support for Intel TBB, Intel IPP, or CUDA.

Here is how to use it:

$ module load gcc/4.4.5 opencv/2.4.2
$ g++ <source.cpp> $(pkg-config --libs opencv)
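Since this build includes the Python bindings, OpenCV can also be used from Python; a sketch (test.png is a hypothetical image, and loading python/2.7.2 alongside is an assumption based on the other Python examples on this page):

$ module load python/2.7.2 gcc/4.4.5 opencv/2.4.2
$ python
>>> import cv2
>>> img = cv2.imread("test.png")
>>> print img.shape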

Boost

Boost provides free peer-reviewed portable C++ source libraries.
Boost libraries are intended to be widely useful, and usable across a broad spectrum of applications.

Here is how to use it:

$ module load boost/1.51.0
$ g++ <source.cpp>
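Header-only Boost libraries need no extra link flags, but the separately-compiled libraries must be linked explicitly; a sketch (boost_program_options is just one example, and the ${LDFLAGS} usage assumes the boost module sets it as the other library modules above do):

$ module load boost/1.51.0
$ g++ source.cpp ${LDFLAGS} -lboost_program_options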

NCBI BLAST

Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.

Here is how to use it:

$ module load gcc/4.4.5 ncbi_blast/2.2.27
$ blastn
$ blastp
$ blastx
...

New Software: HDF5(1.8.9), OBSGRID (April 2, 2010), ABINIT(6.12.3), VMD(1.9.1), and NAMD(2.9)

Posted on Tuesday, 14 August, 2012

Several new software packages have been installed on all RHEL6 clusters.

HDF5

HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
A previous version of HDF5 (1.8.7) has existed on the RHEL6 clusters for many months.
The 1.8.9 version includes many bug fixes and some new utilities.

The hdf5/1.8.9 module is used differently from the 1.8.7 module.
The 1.8.9 module detects whether an MPI module has already been loaded and selects the serial or MPI version of the library accordingly.
The 1.8.7 module could not make this distinction automatically.

Here are two examples of how to use the new HDF5 module (note that all compilers and MPI installations are usable with HDF5):

$ module load hdf5/1.8.9

or

$ module load intel/12.1.4 mvapich2/1.6 hdf5/1.8.9
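Once the module is loaded, compiling against the library follows the usual pattern; a sketch for the serial case using the h5cc compiler wrapper (assuming the installation provides it):

$ module load hdf5/1.8.9
$ h5cc my_hdf5_program.c -o my_hdf5_program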

OBSGRID

OBSGRID is an objective re-analysis package for WRF designed to lower the error of analyses that are used to nudge the model toward the observed state.
The analyses input to OBSGRID as the first guess are the analyses output from the METGRID part of the WPS package.
Here is how to use obsgrid:

$ module load intel/12.1.4 hdf5/1.8.7/nompi netcdf/4.1.3 ncl/6.1.0-beta obsgrid/04022010
$ obsgrid.exe

ABINIT

ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis.
ABINIT 6.8.1 is already installed on the RHEL6 clusters.
There are many changes from 6.8.1 to 6.12.3. See the 6.12.3 release notes for more information.

Here are a few examples of how to use ABINIT in a job script:

#PBS ...
#PBS -l walltime=8:00:00
#PBS -l nodes=64:ib

cd $PBS_O_WORKDIR
module load intel/12.1.4 mvapich2/1.6 hdf5/1.8.9 netcdf/4.2 mkl/10.3 fftw/3.3 abinit/6.12.3
mpirun -rmk pbs abinit < abinit.input.file > abinit.output.file

VMD

VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
VMD has been installed with support for the GCC compilers (versions 4.4.5, 4.6.2, and 4.7.0), NetCDF, Python+NumPy, TCL, and OpenGL.
Here is an example of how to use it:

  1. Log in to a RHEL6 login node (joe-6, biocluster-6, atlas-6, etc.) with X-Forwarding enabled (X-Forwarding is critical for VMD to work).
  2. Load the needed modules:
    $ module load gcc/4.6.2 python/2.7.2 hdf5/1.8.7/nompi netcdf/4.1.3 vmd/1.9.1
  3. Execute “vmd” to start the GUI
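For scripted, non-GUI analysis, VMD can also run in text mode (analysis.tcl is a hypothetical Tcl script; -dispdev text disables the display and -e executes the script):

$ vmd -dispdev text -e analysis.tcl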

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
Version 2.9 of NAMD has been installed with support for the GNU and Intel compilers, MPI, and FFTW3.
CUDA support in NAMD has been disabled.

Here is an example of how to use it in a job script in a RHEL6 queue (biocluster-6, atlas-6, ece, etc.):

#PBS -N NAMD-test
#PBS -l nodes=32
#PBS -l walltime=8:00:00
...
module load gcc/4.6.2 mvapich2/1.7 fftw/3.3 namd/2.9
cd $PBS_O_WORKDIR

mpirun -rmk pbs namd2 input.file