
Author Archive

[Resolved] Shared scheduler problems

Posted by on Sunday, 22 July, 2018
Update (07/22/2018, 2:30am): The scheduler is back in operation after we cleared a large number of jobs submitted by a user. We’ll continue to monitor the system for similar problems and work with users to normalize their workflows.
The shared scheduler has been experiencing difficulties, which appear to be due to a large number of job arrays submitted recently. We don’t know the exact cause yet, but we are aware of the problems and are currently working on a resolution.
Until this issue is resolved, commands like qsub and qstat will not work, and showq will return an incomplete list of jobs.
This problem only applies to job submission and monitoring; your running and queued jobs are otherwise safe.

The PACE Scratch storage just got faster!

Posted by on Friday, 20 July, 2018
We have made some improvements to the scratch file system, namely by adding SSD drives for faster metadata management and data storage. We are pleased to report that this strategic allocation of a relatively small number of SSDs yielded impressive performance improvements, more than doubling read and write speeds (according to our standard benchmarks).
This work, performed under the guidance of the vendor, didn’t require any downtime and no jobs were impacted.
We hope you’ll enjoy the increased performance for faster, better research!


[Resolved] Datacenter cooling problem with potential impact on PACE systems

Posted by on Friday, 29 June, 2018

Update (06/29/2018, 3:30 pm): We’re happy to report that the issues with the cooling systems have been largely addressed without any visible impact on systems or running jobs. The schedulers have resumed, allocating new jobs as they are submitted. There is more work to be done to resolve the issue fully, but it can be performed without any disruption to services. You may continue to use PACE systems as usual. If you notice any problems, please contact

For a related status update from OIT, please see:

Original post:

The operations team notified PACE of cooling problems that started around noon today, impacting the datacenter housing our storage and virtual machine infrastructure. We immediately started monitoring temperatures, turned off some non-critical systems as a precaution, and paused the schedulers to prevent new jobs from starting. Submitted jobs will be held until the problem is sufficiently addressed.

Depending on how this issue develops, we may need to power down critical systems such as storage and virtual headnodes, but all critical systems remain online for now.

We will continue to provide updates here on this blog and, as needed, via the pace-available email list.

Thank you!



Possible Water Service Outage May Impact PACE Clusters

Posted by on Monday, 4 June, 2018
You probably saw the announcement from the Georgia Tech Office of Emergency Management (copied below). Our knowledge of the matter is limited to this message, but as far as we understand, a complete outage is unlikely though still possible.

Impact on PACE Clusters:

In the event of a large-scale outage, the PACE datacenter cooling systems will stop working and we will need to urgently shut down all systems as an emergency step, including but not limited to compute nodes, login nodes, and storage systems. This will impact all running jobs and active sessions.
We’ll continue to keep you updated. Please check this blog for the most up-to-date information.


Original communication from Georgia Tech Office of Emergency Management:

To the campus community:

Out of an abundance of caution, Georgia Tech Emergency Management and Communications has taken steps to prepare the campus for the possibility of a water outage tonight in light of needed repairs to the City of Atlanta’s water lines.

The City of Atlanta’s Department of Watershed will repair a major water line beginning tonight between 11 p.m. and midnight. The repair is scheduled to be completed this week and should not negatively impact campus. If all goes according to plan, the campus will operate as usual.

In the event the repairs cause a significant loss of water pressure or loss of water service completely, the campus will be closed and personnel will be notified through the Georgia Tech Emergency Notifications System (GTENS).

If GTENS alerts are sent, essential personnel who are pre-identified by department leadership should report even if campus is closed. If the campus loses water, all non-essential activities will be canceled on campus.

Those with specialized research areas need to make arrangements tonight in the event there is a water failure. All lab work and experiments that can be delayed should be planned for later in the week or next week.

In the event of an outage, employees are asked to work with department leadership to work remotely. Employees who can work remotely should prepare before leaving work June 4 to work remotely for several days. Toilets won’t be operational, drinking water will not be available, and air conditioning will not be functioning in buildings on campus and throughout the city.

All who are housed on campus should fill bathtubs and other containers to have water on hand to manually flush toilets should there be a loss in pressure. Plans are underway to relocate campus residents to nearby campuses such as Emory University or Kennesaw State University in the event of a complete loss of water to the campus.

Parking and Transportation Services will continue on-campus transportation as long as the campus is open.

In the event of an outage, additional instructions and information on campus operations will be shared at

Major Outage of GT network on Sunday, May 27

Posted by on Thursday, 24 May, 2018

The OIT Operations team informed us about a service outage on Sunday (5/27, 8am). Their detailed note is copied below.

This outage should not impact running jobs; however, you will not be able to log in to headnodes or the VPN for the duration of the outage.

If you have ongoing data transfers (using SFTP, scp, or rsync), they *will* be terminated. We strongly recommend waiting until this work completes successfully before starting any large data transfers. Similarly, your active SSH connections will be interrupted; please save your work and exit all sessions if you can.

PACE team will be in contact with the Operations team and provide status updates in this blog post as needed:

More details:

There will be a major service disruption to Georgia Tech’s network due to a software upgrade to a core campus router beginning on Sunday, May 27 at 8:00 a.m. Network traffic from on campus to off and from off campus to on will be affected. Some inter-campus traffic will remain up during the work, but most services will not be available.

While the software upgrade is expected to be complete by 9:00 a.m., with most connectivity restored, there may be outages with various centrally provided services. Therefore, a maintenance window is reserved from 8:00 a.m. until 6:00 p.m. The following services may be affected and therefore not available. These include, but are not limited to: CAS, VPN, LAWN (GTwifi, eduroam, GTvisitor), Banner/Oscar, Touchnet/Epay, Buzzport, Email (delayed delivery of e-mail, but no e-mail lost), Passport, Canvas, Resnet network connectivity, Vlab, T-Square, DegreeWorks, and others.

Before services go down, questions can be sent to or via phone call at 404-894-7173. During the work, please visit for updates. Our normal status update site will not be available during this upgrade. After the work is completed, please report issues to the aforementioned e-mail address and phone number, or call OIT Operations at 404-894-4669 for urgent matters.

The maintenance consists of a software upgrade to a core campus router that came at the recommendation of the vendor following an unexpected error condition that caused a brief network outage earlier this week. “We expect the network connectivity to be restored by noon, and functionality of affected campus services to be recovered by 6:00 PM on Sunday May 27, though many services may become available sooner,” says Andrew Dietz, ITSM Manager, Sr., Office of Information Technology (OIT).

We apologize for the inconvenience this may cause and appreciate your understanding while we conduct this very important upgrade.


Storage (GPFS) slowness impacting pace1 and menon1 systems

Posted by on Friday, 18 May, 2018

Update (5/18/2018, 4:15pm): We’ve identified a large number of jobs overloading the storage and worked with their owners to delete them. This resulted in an immediate improvement in performance. Please let us know if any of the slowness comes back over the weekend.

original post: PACE is aware of GPFS (storage) slowness that impacts a large fraction of users from the pace1 and menon1 systems. We are actively working, with guidance from the vendor, to identify the root cause and resolve this issue ASAP.

This slowness is observed from all nodes mounting this storage, including headnodes, compute nodes and the datamover.

We believe that we’ve found the culprit, but more investigation is needed for verification. Please continue to report any slowness problems to us.

PACE clusters ready for research

Posted by on Friday, 11 May, 2018

Our May 2018 maintenance is complete ahead of schedule. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and your data are available. As usual, there are a small number of straggling nodes we will address over the coming days.

Our next maintenance period is scheduled for Thursday, Aug 9 through Saturday, Aug 11, 2018.


Job-specific temporary directories (may require user action): Complete as planned. Please see the maintenance day announcement for how this impacts your jobs.

ICE (instructional cluster) scheduler migration to a different server (may require user action): Complete as planned. Users should not notice any differences.

Systems Maintenance

ASDL cluster (requires no user action): Complete as planned. Bad CMOS batteries were replaced and the fileserver received a replacement CPU. The memory problems turned out to be caused by the bad CPU and were resolved without replacing any memory DIMMs.
Replace PDUs on Rich133 H37 Rack (requires no user action): Deferred per the request of cluster owner.

LIGO cluster rack replacement (requires no user action): Complete as planned.


GPFS filesystem client updates on all of the PACE compute nodes and servers (requires no user action): Complete as planned and tested. Please report any missing storage mounts to pace-support.
Run routine system checks on GPFS filesystems (requires no user action): Complete as planned, no problems found!

The IB network card firmware upgrades (requires no user action): Complete as planned.
Enable 10GbE on physical headnodes (requires no user action): Complete as planned.
Several improvements on networking infrastructure (requires no user action): Complete as planned.


[Resolved] Large Scale Storage Problems

Posted by on Thursday, 3 May, 2018

Current Status (5/3 4:30pm): Storage problems are resolved, and all compute nodes are back online and accepting jobs. Please resubmit crashed jobs and contact if there is anything we can assist with.

update (5/3 4:15pm): We found that the storage failure was caused by a series of tasks we had been performing with guidance from the vendor, in preparation for the maintenance day. These steps were considered safe and no failures were expected. We are still investigating which step(s) led to this cascading failure.

update (5/3 4:00pm): All of the compute nodes will appear offline and will not accept jobs until this issue is resolved.


Original Message:

We received reports of failures of the main PACE storage (GPFS) around 3:30pm today (5/3, Thr), impacting jobs. We found that the issue applies to all GPFS systems (pace1, pace2, menon1), with a large-scale impact PACE-wide.

We are actively working with the vendor to resolve this issue urgently and will continue to update this post as we find more about the root cause.

We are sorry for this inconvenience and thank you for your patience.



PACE quarterly maintenance – (May 10-12, 2018)

Posted by on Monday, 30 April, 2018

The next PACE maintenance will start on 5/10 (Thr) and may take up to 3 days to complete, as scheduled.

As usual, jobs with long walltimes will be held by the scheduler to prevent them from being killed when we power off the systems that day. These jobs will be released as soon as the maintenance activities are complete. If a shorter walltime would still give a job enough time to complete successfully, you can reduce its walltime to ensure completion before 6am on 5/10 and resubmit.
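For reference, a job's walltime request lives in the submission script header; the sketch below uses illustrative values (job name, walltime, and resource line are all placeholders, not recommendations for any particular job):

```shell
#PBS -N short-job
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1
# The #PBS lines above are scheduler directives (plain comments to the
# shell). A 4-hour request submitted well before 6am on 5/10 would not
# be held, provided 4 hours is genuinely enough for the job to finish.
echo "job body runs here"
```

Shortening the walltime only helps if the job can actually finish in the reduced window; otherwise it is safer to let the scheduler hold it until after the maintenance.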

We will follow up with a more detailed announcement listing the planned maintenance tasks and their impact on users, if any. If you miss that email, you can still find all of the maintenance day related information in this post, which will be actively updated with details and progress.

List of Planned Tasks




  • Job-specific temporary directories (may require user action): We have been receiving reports of nodes going offline because leftover files from jobs fill up their local disks. To address this issue, we will start employing a scheduler feature that creates job-specific temporary directories, which are automatically deleted after the job completes. To support this, we created a “/scratch” folder on all nodes. Please note that this is different from the scratch directory in your home directory (note the difference between ‘~/scratch’ and ‘/scratch’). We ensured that if a node has a separate (larger) HDD or SSD (e.g. biocluster, dimer, etc.), /scratch will be located on it to offer more space.

Without needing any specific user action, the scheduler will create a temporary directory uniquely named after the job under /scratch. For example:


It will also set the $TMPDIR environment variable (which normally points to ‘/tmp’) to this path.

You can use $TMPDIR creatively in your scripts. For example, if you have been creating temporary directories manually before, e.g. ‘/tmp/mydir123’, please use “$TMPDIR/mydir123” from now on to ensure that the directory is deleted after the job completes.
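A minimal sketch of this pattern follows. Inside a job the scheduler provides $TMPDIR; the mktemp fallback here is our own addition, only so the snippet also runs outside a scheduled job:

```shell
# Sketch: use $TMPDIR instead of a hand-made /tmp directory.
# Inside a job, the scheduler points $TMPDIR at a job-specific
# directory under /scratch and deletes it when the job ends.
# Outside a job, fall back to a throwaway directory for testing.
: "${TMPDIR:=$(mktemp -d)}"

workdir="$TMPDIR/mydir123"
mkdir -p "$workdir"
echo "intermediate results" > "$workdir/out.txt"
cat "$workdir/out.txt"
```

Because everything lives under $TMPDIR, the scheduler's cleanup removes the whole tree automatically when the job finishes, so nothing is left behind to fill up the node's local disk.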

  • ICE (instructional cluster) scheduler migration to a different server (may require user action): We’ll move the scheduler server we use for the ICE queues to a new machine that’s better suited for this service. This change will be completely transparent to users, and there will be no changes in the way jobs are submitted. Jobs that are waiting in the queue will need to be resubmitted; we’ll contact the affected users separately. If you are not a student using the ICE clusters, you will not be affected by this task in any way.


Systems Maintenance


  • ASDL cluster (requires no user action): We’ll replace failed CMOS batteries on several compute nodes, replace a failed CPU, and add more memory to the file server.
  • Replace PDUs on Rich133 H37 Rack (requires no user action): We’ll replace PDUs on this rack, which includes nodes from a single dedicated cluster with no expected impact on other PACE users or clusters even if something goes wrong.
  • LIGO cluster rack replacement (requires no user action): We’ll replace the LIGO cluster rack with a new one with new power supplies.




  • GPFS filesystem client updates on all of the PACE compute nodes and servers (requires no user action): The new version is tested, but please contact if you notice any missing mounts, failing data operations or slowness issues after the maintenance day.
  • Run routine system checks on GPFS filesystems (requires no user action): As usual, we’ll run some file integrity checks to find and fix filesystem issues, if any. Some of these checks take a long time and may continue to run after the maintenance day, with some impact on performance, although minimal.
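If you want to verify your storage yourself after the maintenance, one quick sketch is to test each expected mount point directly. The paths below are hypothetical placeholders, not the actual PACE mount points:

```shell
# Check whether expected filesystem mount points are actually mounted.
# /gpfs/pace1 and /gpfs/pace2 are illustrative placeholders; substitute
# the mount points your jobs actually rely on.
missing=0
for m in /gpfs/pace1 /gpfs/pace2; do
  if mountpoint -q "$m" 2>/dev/null; then
    echo "$m: mounted"
  else
    echo "$m: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing mounts: $missing"
```

A nonzero missing count on a compute node or headnode would be exactly the kind of thing to report to pace-support.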




  • The IB network card firmware upgrades (requires no user action): The new version is tested, but please contact if you notice failing data operations or crashing MPI jobs after the maintenance day.
  • Enable 10GbE on physical headnodes (requires no user action): Physical headnodes (e.g. login-s, login-d, coc-ice, etc.) will be reconfigured to use the 10GbE interface for faster networking.
  • Several improvements on networking infrastructure (requires no user action): We’ll reconfigure some of the links, add additional uplinks, and replace fabric modules on different components of the network to improve its reliability and performance.




[RESOLVED] PACE Storage Problems

Posted by on Wednesday, 28 March, 2018

Update (3/29, 11:00am): We continued to see some problems overnight and this morning. It’s important to mention that these back-to-back problems, namely the power loss, network and GPFS storage failures, and read-only headnodes, are separate events. Some of them could be related, and probably are; the network is the most likely culprit. We are still investigating with the help of the storage and network teams.

The read-only headnodes are an unfortunate outcome of the VM storage failures. We restored these systems and the VM storage, and will start rebooting the headnodes shortly. We can’t say for sure that these events will not recur; frequent headnode reboots and denied logins should be expected while we recover these systems. Please be mindful of these possibilities and save your work frequently, or refrain from using the headnodes for anything but submitting jobs.

The compute nodes appear to be mostly stable, although we identified several with leftover storage issues.

Update (3/28, 11:30pm): Thanks to instant feedback from some users, we identified a list of headnodes that became read-only because of the storage issues. We started rebooting them for filesystem checks. This process may take more than an hour to complete.

Update (3/28, 11:00pm): At this point, we have resolved the network issues, restored the storage systems, and brought back the compute nodes, which have started running jobs.

We believe that the cascading issues were triggered by a network problem. We will continue to monitor the systems and to work with the vendor tomorrow to find out more.

Update (3/28, 9:30pm): All network- and storage-related issues have been addressed; we started bringing nodes back online and running tests to make sure they are healthy and can run jobs.

Original Post:

As several of you already noticed and reported, PACE main storage systems are experiencing problems. The symptoms indicate a wide scale network event and we are working with the OIT Network Team to investigate this issue.

This issue has potential impact on jobs, so please refrain from submitting new jobs until all systems and services are stabilized again.

We don’t have an estimated time for resolution yet, but will continue to update this blog with the progress.