
Archive for December, 2010

Upcoming HPC procurement

Posted on Thursday, 23 December, 2010

The PACE team is in the planning stages of another large procurement starting soon after the new year.  If you would like to purchase HPC equipment, or know somebody who might be interested, please let us know.  The more funding we can pool on the front end, the better the discounts we expect to be able to negotiate for everyone.

We have some details posted on our web page, www.pace.gatech.edu/policy, but in short, starting in January we will go to our contract vendors and solicit better-than-state-contract pricing based on an estimate of known demand.  By mid-February, we expect to have a very close estimate as to what pricing individual faculty can expect.  By early March, we would like to have final commitments from faculty and will proceed with detailed configuration and finalize price negotiations.

If this proposed schedule doesn’t fit with your funding or research targets, please let us know and we can either adjust the schedule or make a purchase outside of this format.  A goal of this process is to set approximate discount levels for the next 12-18 months.

upcoming changes to Fluent 6.3.x / Gambit 2.4.x support

Posted on Thursday, 23 December, 2010

We recently received notice of a very important upcoming change to the Fluent & Gambit licensing as provided by the College of Engineering.  This will have profound effects on users of this software.

Please see the attached document from CoE support for details, but in summary:

  • Fluent 6.3.x will stop working on February 26.
  • Gambit 2.4.x will continue to function until at least February 2012.
  • Fluent users must transition to Ansys CFD.

But the devil is in the details.  The Ansys CFD licenses are for teaching purposes and are limited to 512K elements.  If you need to solve problems with a higher element count, your group will need to purchase an Academic Research license.

PACE support has installed Ansys v13 in /usr/local/packages/ansys-13.0; please begin migrating your Fluent jobs to this version.  Please let us know if you have issues, particularly with the element limitation, and we can help coordinate with CoE support.
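For users updating batch scripts, a minimal sketch of pointing a job at the new install follows. The v130/fluent/bin subdirectory and the 3ddp batch invocation are assumptions based on the standard Ansys 13 layout, not PACE-specific documentation; please confirm the details with pace-support before relying on them.

```shell
# Hypothetical job-script fragment: point this shell at the new Ansys 13 tree.
# The v130/fluent/bin layout follows the usual Ansys install convention;
# verify the exact path on PACE systems.
ANSYS_ROOT=/usr/local/packages/ansys-13.0
PATH="$ANSYS_ROOT/v130/fluent/bin:$PATH"
export PATH

# Then launch Fluent in batch mode from a journal file, e.g.:
#   fluent 3ddp -g -i run.jou > run.log 2>&1
```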

Comsol workshop – on campus

Posted on Thursday, 23 December, 2010

Ankur Gupta from Comsol, Inc. will be hosting a hands-on Comsol workshop on January 27, 2011 in the MARC Auditorium.  Please visit http://www.comsol.com/events/cmw/14100 for details and registration.

Interest in a campus license for Gaussian?

Posted on Tuesday, 21 December, 2010

I’ve had a couple of inquiries recently regarding purchases of the Gaussian electronic structure modeling software package [1].  There has been some question regarding the ability of GT researchers to use this software, but some initial inquiries indicate that we would be able to procure licenses for the binary distribution.

According to their published academic pricing [2], a purchase of $8,000 would entitle GT to run Gaussian on an unlimited number of computers.

This would include:

  • Gaussian 09 for Linux (executing on multicore computers)
  • TCP Linda for Linux (enabling Gaussian computations across computers)
  • GaussView 5 for Linux and Windows

If you are interested in this software, please let me know (neil.bright@oit.gatech.edu) and I’ll be happy to request further details from the vendor and coordinate a purchase.  $8k is not a very high bar to reach, especially if we can spread the cost across a couple of research groups.

  1. http://www.gaussian.com
  2. http://www.gaussian.com/g_prod/prix/acad_usa.pdf

Availability of PACE staff during the holiday break

Posted on Tuesday, 21 December, 2010

Starting Friday, December 24, PACE staff will be unavailable for the Institute holiday, resuming normal activities on Monday, January 3.  We have normal staffing levels this week, excluding Friday.

For normal issues, please continue to submit through the usual support channels, and we’ll address them when campus reopens in the new year.

If you have a dire emergency, please contact the OIT NOC at 404-894-4669.

NCAR Summer Internships in Parallel Computational Science

Posted on Monday, 13 December, 2010

SUMMARY:

Summer Internships in Parallel Computational Science (SIParCS)
Computational and Information Systems Laboratory (CISL)
National Center for Atmospheric Research (NCAR)
Applications due Fri Feb 4 2011

http://www.cisl.ucar.edu/siparcs/

DETAILS:

The short version of the update is this:

  1. The SIParCS summer internship program in computational science at NCAR is accepting applications for summer 2011.
  2. SIParCS application process details can be found at: http://www.cisl.ucar.edu/siparcs/
  3. The application deadline is Friday, February 4, 2011.  Students will be notified beginning Friday, March 18, 2011.
  4. We anticipate about 10-12 slots will be available for the summer of 2011.

I’d also like to call your attention to CISL’s summer training classes in Fortran 90, data analysis and visualization for atmospheric science, and more.

CISL also runs the Research and Supercomputing Visitor Program (RSVP), a visitor program for scientists and supercomputing professionals that can provide travel support for attendees from EPSCoR states and minority-serving institutions who are interested in working with CISL staff over the summer.  Details are located at: http://www.cisl.ucar.edu/rsvp

The summer of 2011 will be the SIParCS program’s fifth year of operation, and it has already produced some exciting results and experiences for both the students and our staff. You can check out some of last year’s results under the Presentations tab at our website.

NCAR/UCAR is committed to providing equal opportunity for all employees and qualified applicants for employment regardless of race, color, religion, national origin, gender, sexual orientation, age, disability, marital status, veteran status, or any other characteristic protected by law.

If you have any questions about the program or the application process, please feel free to contact me.

Kind regards,

Dr. Richard Loft (loft@ucar.edu)

SIParCS Director and Director of Technology Development
Computational and Information Systems Laboratory
National Center for Atmospheric Research
1850 Table Mesa Drive
Boulder, CO USA 80305

Work (303)497-1262
Fax (303)497-1298

NSF/OCI solicitation – High Performance Computing System Acquisition: Enhancing the Petascale Computing Environment for Science and Engineering

Posted on Wednesday, 8 December, 2010

The NSF Office of Cyberinfrastructure has released a solicitation for the acquisition and operation of large-scale systems.  Below is an excerpt of the announcement; for further detail on proposals, due March 7, 2011, please see http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503148&org=OCI&from=home.

The NSF’s vision for Cyberinfrastructure in the 21st Century includes enabling sustained petascale computational and data-driven science and engineering through the deployment and support of a world-class High Performance Computing (HPC) environment.  For the past decade the NSF has provided the open science and engineering community with state-of-the-art HPC assets ranging from loosely coupled clusters to large-scale instruments with many thousands of computing cores communicating via fast interconnects.  Previous solicitations, as exemplified by the multi-pronged Track Two acquisitions, have provided more than two petaflops (10¹⁵ floating point operations per second) of compute power on real applications that consume large amounts of memory and work with very large data sets.  These resources have been made available through the TeraGrid, the world’s largest, most powerful and comprehensive distributed cyberinfrastructure for open science.  In addition to the Track Two acquisitions, the ongoing Track One program promises to deliver a petaflop of sustained power capable of tackling some of the most challenging scientific problems across multiple science and engineering domains.

HPC Resource Providers – those organizations willing to acquire, deploy and operate HPC resources in service to the science and engineering research and education community – play a key role in the provision and support of a national Cyberinfrastructure. With this solicitation, the NSF requests proposals from organizations willing to serve as HPC Resource Providers within Extreme Digital (XD), the successor to TeraGrid, and who propose to acquire and deploy new, innovative petascale HPC systems and services.

Competitive HPC systems will:

  • Expand the range of data-intensive, computationally challenging science and engineering applications that can be tackled with XD HPC services;
  • Introduce a major new innovative capability component to science and engineering research communities;
  • Provide an effective migration path to researchers scaling data and code beyond the campus level;
  • Incorporate reliable, robust system software and services essential to optimal sustained performance;
  • Efficiently provide a high degree of stability and usability by January 2013; and
  • Complement and leverage existing XD capabilities and services.

resolved – potential Rich Data Center power issues

Posted on Monday, 6 December, 2010

Hi folks,

The UPS failure in one of the Rich machine rooms has been repaired, and we’re back to normal.  Ultimately, we did not see any associated power events.  I’ve also returned the backup schedule to our normal daily intervals.

Below is a snippet from the daily 0800 notes as released by OIT operations.

Wed 0610 – Sat 2245: The Rich 116 UPS experienced issues and went into bypass mode. Metropower contacted and determined the UPS needed to be replaced. Metropower worked on replacing the unit, and completed their work by 2245 on Saturday.

Upcoming town hall discussion on GT Supercomputing Services

Posted on Thursday, 2 December, 2010

The campus Partnership for an Advanced Computing Environment (PACE) will be hosting an informational session with Q&A about current supercomputing capabilities on campus.   The meeting will be held in MARC room 101 on Thursday December 9 at 1:00 PM.

PACE is a partnership between Georgia Tech faculty and the Office of Information Technology focused on High Performance Computing (HPC).  PACE provides faculty participants a sustainable, leading-edge HPC infrastructure with technical support services. You are encouraged to visit our new website, www.pace.gatech.edu.

This meeting is open to all, and we will share information and answer questions about the following topics:

  • What equipment and services are currently available to faculty?
  • What is the cost to use these facilities (small allocations are no charge!)?
  • How can faculty participate in the partnership?
  • How are the shared resources (e.g. the FoRCE cluster) governed?

In particular, if you have any intentions of purchasing large scale research computing equipment (e.g. > $5,000) in the next 6-9 months, you may be able to leverage substantial discounts and other benefits by participating in the PACE services.

For more information, please visit www.pace.gatech.edu or email pace-support@oit.gatech.edu.  Please pass along this announcement to colleagues who may have interest in HPC resources at Georgia Tech.

potential Rich Data Center power issues

Posted on Thursday, 2 December, 2010

Hi folks,

I wanted to make you aware of an ongoing issue in one of the Rich building Data Center rooms.  The short version is that room 116 is currently without UPS protection.  Nearly all of our PACE servers, storage and core network reside in this room.

We have not experienced any problems so far, and have increased the frequency of our backups as a precaution.  Below is a snippet from the daily 0800 notes as released by OIT operations.

Wed 0610-ongoing: One of the Rich machine room UPSes experiencing issues and went into bypass mode. Metropower contacted and determined the UPS needs to be replaced. Metropower working on replacing the unit, but the equipment on it is currently on street power and will be affected by any power glitches Rich may experience before the UPS is back in service. Admins should be prepared to bring any machines that go down back up in the event of a power glitch.