PACE: A Partnership for an Advanced Computing Environment

September 8, 2017

Campus preparedness and Hurricane Irma

Filed under: Uncategorized — admin @ 8:45 pm

Greetings PACE community,

As Hurricane Irma makes its way along its projected path through Florida and into Georgia, I’d like to let you know what PACE is doing to prepare.

OIT Operations will be closely monitoring the path of the storm and any impact it may have on the computer rooms in the Rich Computer Center and our backup facility on Marietta Street. If either of these facilities loses power, they will enact emergency procedures and respond as best they can.

What does this mean for PACE?

The room where we keep the compute nodes has only a few minutes of battery-protected power. That is enough to ride through momentary glitches, but not a sustained outage. In the event of a power loss, compute nodes will power down and terminate whatever jobs are running. The rooms that house our servers, storage, and backups have additional generator power that can keep them running longer, but this too is a finite resource. In the event of an extended power loss, PACE will begin an orderly shutdown of servers and storage to reduce the chance of data corruption or loss.
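For illustration only, here is a minimal sketch of what such an orderly shutdown sequence could look like, assuming a Torque/Moab scheduler and GPFS storage; the hostnames and the pause-scheduling command are assumptions, not our actual runbook.

```python
#!/usr/bin/env python3
"""Sketch of an orderly-shutdown sequence for an extended power loss.
Hostnames and the scheduler command are illustrative assumptions."""

import subprocess

# Hypothetical admin servers, powered off last.
ADMIN_SERVERS = ["sched1.example.gatech.edu", "login1.example.gatech.edu"]

def run(cmd):
    """Run one step; keep going even if a step fails during an emergency."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=False)

def orderly_shutdown():
    # 1. Stop placing new jobs (assumption: Moab pause-scheduling command).
    run(["mschedctl", "-p"])

    # 2. Unmount GPFS filesystems everywhere, then stop the GPFS daemons.
    run(["mmumount", "all", "-a"])
    run(["mmshutdown", "-a"])

    # 3. Cleanly power off the remaining servers over SSH.
    for host in ADMIN_SERVERS:
        run(["ssh", host, "shutdown", "-h", "now"])

if __name__ == "__main__":
    orderly_shutdown()
```

The ordering matters: stopping new work first and quiescing the filesystem before powering off servers is what reduces the chance of data corruption.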

The bottom line: our priority is to protect critical research data and to enable research to resume successfully once power is restored.

Where to get further updates?

Our primary communication channels remain our mailing list, pace-availability@lists.gatech.edu, and the PACE blog (https://blog.pace.gatech.edu). However, substantial portions of the IT infrastructure these depend on are also located in campus data centers. For that reason, OIT also publishes status updates through a cloud-based service. If our blog is unreachable, please visit https://status.gatech.edu.

September 2, 2017

GPFS problem (resolved)

Filed under: Uncategorized — admin @ 12:52 am

This was much ado about nothing. Running jobs continued to execute normally through this event, and no data was at risk. What did happen is that some jobs were delayed in starting.

A longer explanation:

We have monitoring agents that prevent jobs from starting if they detect a potential problem with the system. The idea is to avoid starting a job when there is a known reason it would crash. During our last maintenance period, we brought a new DDN storage system online and configured these agents to watch it for issues. The new system did develop an issue; the monitoring agents flagged it and took compute nodes offline to new jobs. However, we have not yet put any production workloads on this new storage, so no running jobs were affected.
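As a rough illustration of how such an agent can work, here is a minimal sketch assuming a Torque-style pbsnodes interface and hypothetical mount points; it is not the exact agent PACE runs.

```python
#!/usr/bin/env python3
"""Sketch of a node health-check agent: offline the node to new jobs when a
required filesystem looks unhealthy. Mount points are illustrative."""

import os
import socket
import subprocess

# Filesystems the agent verifies before a node may accept new jobs (illustrative).
REQUIRED_MOUNTS = ["/gpfs/pace1", "/gpfs/scratch"]

# Storage with no production workloads yet; its problems are ignored so a
# pre-production issue does not take nodes offline.
IGNORED_MOUNTS = {"/gpfs/new-ddn"}

def mount_is_healthy(path):
    """Treat a mount as healthy if a cheap metadata operation succeeds."""
    try:
        os.statvfs(path)
        return True
    except OSError:
        return False

def main():
    node = socket.gethostname()
    problems = [m for m in REQUIRED_MOUNTS
                if m not in IGNORED_MOUNTS and not mount_is_healthy(m)]

    if problems:
        # Mark the node offline so the scheduler stops placing new jobs on it;
        # jobs already running are left alone.
        note = "healthcheck: bad mounts " + ",".join(problems)
        subprocess.run(["pbsnodes", "-o", "-N", note, node], check=False)
    else:
        # Clear the offline flag once everything checks out again.
        subprocess.run(["pbsnodes", "-c", node], check=False)

if __name__ == "__main__":
    main()
```

In this sketch, adding the new storage to the ignore list is exactly the kind of change described below: the agent keeps watching production filesystems while no longer offlining nodes for a system that carries no jobs yet.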

At the moment, we’re pushing out a change to the monitoring agents to ignore the new storage. As this finishes rolling out, compute nodes will come back online and resume normal processing. We’re also working with DDN to address the issue on the new storage system.

September 1, 2017

GPFS Problem

Filed under: Uncategorized — Semir Sarajlic @ 9:01 pm

We are actively debugging a GPFS storage problem on our systems that has unfortunately taken many queues offline. We do not yet know the cause or the solution, but we will post an update as soon as possible.

We apologize for the inconvenience and are actively working on a solution.
