

PACE Lecture Series: New Courses Scheduled

Posted on Thursday, 29 May, 2014

PACE is once again offering its lecture and training courses.
Over the next two months, we are offering four courses:

  1. Introduction to Parallel Programming with MPI and OpenMP (July 22)
  2. Introduction to Parallel Application Debugging and Profiling (June 17)
  3. A Quick Introduction To Python (June 24)
  4. Python For Scientific Computing (July 8)

For details about where and when, or to register your attendance (each class is limited to 30 seats), visit our PACE Training page.

ANSYS version 15 and Matlab R2014a installed

Posted on Thursday, 24 April, 2014

ANSYS version 15 and Matlab version R2014a have been installed on PACE clusters.
To see examples of how to properly load and use the new versions, execute the following commands and follow the instructions provided.

$ module help ansys/15.0

$ module help matlab/r2014a
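Once the module is loaded, the new version can be used in a batch job like any other. Below is a minimal job-script sketch for running the new Matlab non-interactively; the job name, queue name, and walltime are placeholders, so substitute the values appropriate for your cluster:

```shell
#PBS -N matlab-test          # job name (placeholder)
#PBS -l nodes=1:ppn=1        # a single core
#PBS -l walltime=1:00:00
#PBS -q myqueue              # hypothetical queue name; use your cluster's queue

cd $PBS_O_WORKDIR
module load matlab/r2014a
# Run Matlab without the GUI; print the version string and exit
matlab -nodisplay -nosplash -r "disp(version); exit"
```

Submit the script with `qsub` as usual; the same pattern works for ANSYS by loading `ansys/15.0` instead.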

If you have any problems executing the examples given by “module help”, please contact us.

January Maintenance under way

Posted on Tuesday, 14 January, 2014

The January maintenance period has begun.
All clusters will be inaccessible until maintenance is over.

Power loss in Rich Datacenter

Posted on Thursday, 19 December, 2013

UPDATE: All clusters are up and ready for service.

At this time, all PACE-managed clusters are believed to be working.
You should be able to log in to your clusters and submit and run jobs.

Any jobs that were running before the power outage have failed, so please resubmit them.

Please let us know immediately if anything is still broken.


What happened

At around 0810 Thursday morning, Rich lost its N6 feed, half of the feed powering the Rich building and the Rich chiller plant. This also caused multiple failures in the high voltage vault in the Rich back alley, so Rich also lost its other feed, N5. However, the N5 feed was still up in the chiller plant. Though the chillers still had power, as a precaution operators transferred cooling over to the campus loop. Rich office space was without power, but the machine rooms failed over to the generator and UPSes.

PACE systems were powered down gracefully to prevent a hard-shutdown that would make recovery more difficult.

Original Post

This morning (December 19), the Rich datacenter suffered a power loss.
We had to perform an emergency shutdown of all nodes.

As we receive new information we will update this blog and the pace-availability email list.

COMSOL 4.4 Installed

Posted on Wednesday, 18 December, 2013

COMSOL 4.4 – Student and Research versions

COMSOL Multiphysics version 4.4 contains many new functions and additions to the COMSOL product suite.
See the COMSOL Release Notes for information on new functionality in existing products and an overview of new products.

Using the research version of COMSOL

# Load the research version of COMSOL
$ module load comsol/4.4-research
$ comsol ...
# Use the MATLAB LiveLink
$ module load matlab/r2013b
$ comsol -mlroot ${MATLAB}

Using the classroom/student version of COMSOL

# Load the classroom/student version of COMSOL
$ module load comsol/4.4
$ comsol ...
# Use the MATLAB LiveLink
$ module load matlab/r2013b
$ comsol -mlroot ${MATLAB}
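For long-running models it is usually better to run COMSOL in batch mode on a compute node rather than through the GUI. A minimal sketch is below; `model.mph` and the output filenames are placeholders for your own model files:

```shell
# Run a COMSOL model in batch mode (no GUI)
module load comsol/4.4-research
# -inputfile:  the model to solve (placeholder name)
# -outputfile: where the solved model is written
# -batchlog:   capture the solver log for later inspection
comsol batch -inputfile model.mph -outputfile model_solved.mph -batchlog run.log
```

The same invocation works with the classroom version by loading `comsol/4.4` instead.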

Login Node Storage Server Problems

Posted on Monday, 1 July, 2013

Last night (2013/06/30), one of the storage servers that is responsible for many of the cluster login nodes encountered some major problems.
These issues are preventing users from logging in to or using those nodes.
Following is a list of the affected login nodes:

We are aware of the problem and we are working as quickly as possible to fix this.
Please let us know of any problems you are having that may be related to this.
We will keep you posted about our progress.

Grace 5.1.23 installed

Posted on Friday, 28 June, 2013


Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif.
Grace is a descendant of ACE/gr, also known as Xmgr.

Example Usage

$ module load grace/5.1.23
$ xmgrace
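Grace also ships a batch front end, `gracebat`, which is convenient for generating plots in job scripts where no X display is available. A small sketch, with `data.dat` and `plot.ps` as placeholder filenames:

```shell
# Produce a PostScript plot from an ASCII data file without opening the GUI
module load grace/5.1.23
gracebat data.dat -printfile plot.ps
```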

Intel Cluster Studio 2013 XE Installed

Posted on Tuesday, 25 June, 2013

The Intel Cluster Studio 2013 XE software suite installation adds several new and useful tools for PACE users.

  • VTune: Intel® VTune™ Amplifier XE 2013 is a serial and parallel performance profiler for C, C++, C#, Fortran, Assembly and Java.
  • Inspector: Intel® Inspector XE is an easy-to-use memory and thread debugger for serial and parallel applications.
  • Advisor: Intel® Advisor XE is a threading prototyping tool for C, C++, C# and Fortran.

This installation includes updated versions of many currently installed packages. The updates include:

  • MKL – updated to 11.0.1
  • TBB – updated to 4.1
  • IPP – updated to 7.1.1
  • Compilers (C, C++, Fortran) – updated to 13.2.146

To use the new or updated software, please load whichever modules are appropriate:

  • intel/13.2.146 (loads the C, C++, and Fortran compilers)
  • vtune/2013xe (loads VTune)
  • advisor/2013xe (loads Advisor)
  • inspector/2013xe (loads Inspector)
  • tbb/4.1 (loads the Thread Building Blocks)
  • ipp/7.1.1 (loads the Performance Primitives)
  • mkl/11.0.1 (loads the Math Kernel Library)

For information on using VTune, Inspector, Advisor, or any of the Intel tools, see the Intel Cluster Studio XE site.
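As a quick start, the following sketch compiles a small program with the updated Intel compiler and profiles it with VTune's command-line collector. The source filename is a placeholder, and `r000hs` is the default result directory that the collect step creates:

```shell
# Compile with the updated Intel compiler, keeping symbols for the profiler
module load intel/13.2.146 vtune/2013xe
icc -O2 -g myprog.c -o myprog

# Collect a hotspots profile with the command-line VTune interface
amplxe-cl -collect hotspots ./myprog

# Summarize the collected result (r000hs is the directory created above)
amplxe-cl -report hotspots -r r000hs
```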

PACE Maintenance Day Underway

Posted on Tuesday, 16 April, 2013

Maintenance day has begun at 6:00am on April 16, 2013.
No users will be allowed to log in and no jobs may be submitted until maintenance is complete.

PACE Sponsors High Performance Computing Townhall!

Posted on Friday, 5 April, 2013


What could you do with over 25,000 computer cores? Join faculty and students at the April 30 High Performance Computing Town Hall to find out. The event will be held in the MaRC auditorium and is sponsored by PACE, Georgia Tech’s Advanced Computing Environment program.

When: April 30, 3-5pm
Where: MaRC Auditorium


PACE provides researchers with a robust computing platform that enables faculty and students to carry out research initiatives without the burden of maintaining infrastructure, software, and dedicated technicians. The program’s services are managed by OIT’s Academic & Research Technologies department and include physical hosting, system management infrastructure, high-speed scratch storage, home directory space, commodity networking, and common HPC software such as RedHat Enterprise Linux, VASP, LAMMPS, BLAST, Matlab, Mathematica, and Ansys Fluent. Various compilers, math libraries, and other middleware are available for those who author their own code. All of these resources are designed and offered with the specific intention of combining intellect with efficiency, in order to advance the research presence here at Tech.

There are many ways to participate with PACE.  With a common infrastructure, we support clusters dedicated to individual PIs or research groups, clusters that are shared amongst participants and our FoRCE Research Computing Environment (aka “The FoRCE”).  The FoRCE is available to all campus users via a merit-based proposal mechanism.

The April 30 HPC Town Hall is open to members of the Tech research community and will feature presentations on the successes and challenges that PACE is currently experiencing, followed by a panel discussion and Q&A.

For more information on the PACE program, visit the official website and the program’s blog.

Agenda (To Be Finalized Soon)

  • Message from Georgia Tech’s CTO Ron Hutchins
  • Message from PACE’s director Neil Bright
  • Lightning Talks By Faculty
  • Discussion around technologies and capabilities currently under investigation by PACE
  • Panel Discussion regarding future directions for PACE
  • Question and Answer Session