

[Resolved] Temporary Network Interruption

Posted by on Monday, 15 October, 2018

We experienced a failure in the primary InfiniBand subnet manager (SM) that may have impacted both running and starting jobs. The malfunction happened in such a way that the backup SM didn’t notice the primary was failing to operate normally. We disabled the primary SM, and the secondary SM took over as designed. The service outage lasted from 12:56pm to 1:07pm today, October 15, 2018. PACE staff will continue to investigate this failure mode and adjust procedures to help prevent it in the future. As this brief network interruption may have impacted running and starting jobs, please check your jobs for any that crashed and report any problems you notice to

[Resolved] pace1 storage problems

Posted by on Wednesday, 3 October, 2018

[Update – October 5, 2018] We worked with our vendor to address the issue with the network shared disk (NSD) that drastically reduced the performance of the pace1 file system when it was stressed by a large number of I/O-intensive jobs. On Thursday, we restored the NSD to normal operation, and our benchmarks indicate a successful resolution. As a precaution, we will continue to monitor the NSDs as user workloads return to normal.

[Original Post – October 3, 2018]

On Monday, October 1, we started to experience slowness on our parallel file system (pace1), associated with users’ I/O-intensive jobs. We have engaged the users who were/are responsible for the load. During this process, the stress on our storage and network allowed us to identify a bug in a network shared disk that is responsible for caching data to improve read/write speeds. We have successfully deployed a workaround, which has dramatically improved performance, and we are working with our vendor to fully resolve the issue.

The main symptom you may have experienced is slowness when navigating through your files. Your jobs should not have been impacted beyond slower access to files, which may have resulted in longer execution times (i.e., wall time).

We will update you once we have the issue fully resolved in collaboration with our vendor.  If you have any questions, please don’t hesitate to contact us at

[RESOLVED] Temporary unavailability of home directories

Posted by on Friday, 28 September, 2018

The storage servers that export PACE home directories experienced a problem at around 9:10am on September 28. We identified and resolved the issue within 20 minutes.

This problem caused temporary unavailability of home directories. Symptoms included hanging commands, codes, and login attempts.

We believe most jobs resumed operation after the issue was resolved, but we can’t be sure. Please check your jobs for any that crashed and report any problems you notice to



[RESOLVED] Temporary unavailability of home directories

Posted by on Wednesday, 19 September, 2018

At around 6:10pm on September 19, 2018, the storage servers that export PACE home directories and the software repository experienced a problem. We identified and resolved the issue within 15 minutes.

This problem caused temporary unavailability of home directories and applications. Symptoms included hanging commands, codes, and login attempts.

We believe most jobs resumed operation after the issue was resolved, but we can’t be sure. Please check your jobs for any that crashed and report any problems you notice to



Testflight queue transition and unavailability

Posted by on Wednesday, 12 September, 2018

As you know, the testflight queue includes nodes reserved for testing systems and services planned for future deployment.

As part of our preparations to transition to the next OS (RHEL7), we will take this queue offline, swap its nodes with newly purchased nodes (which better represent the modern systems currently in use), and finally deploy RHEL7 on these new nodes.

Once these preparations are complete, we’ll reach out to you and ask you to test your codes. Until then, testflight will not be available and submissions will be declined.

There are currently some jobs running on this queue. We’ll wait until the current jobs complete instead of killing them, but we would like to once again emphasize that the use of testflight for production is against policy. This queue should only be used for testing purposes.

Please let us know if you have any questions.

[Resolved] File locking issues causing hanging in codes and login troubles

Posted by on Thursday, 6 September, 2018

If you have been observing mysteriously hanging codes, or trouble logging in on headnodes, please read on!

We started receiving reports of hanging processes, mostly for GPU codes. In addition, users whose default shell is tcsh/csh had difficulty logging into nodes.

Upon further investigation, we found that a storage problem was affecting the file-locking mechanism on home directories (where most applications keep their configuration files, regardless of where they run).

This problem was very subtle, as it was impacting only a small number of processes and data operations appeared to be working well otherwise.
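If you want to check whether file locking is working in a given directory, the util-linux `flock` tool gives a quick probe. This is a minimal sketch, assuming a Linux node with `flock` installed; the lock-file path is a hypothetical stand-in for an application’s lock file:

```shell
# Probe the file-locking layer with a non-blocking lock request.
# /tmp/pace_lock_demo is a hypothetical path; the real problem
# involved lock files under home directories.
LOCKFILE=/tmp/pace_lock_demo
touch "$LOCKFILE"

# -n fails immediately instead of blocking, so a broken locking
# layer shows up as an error rather than a silently hung process.
if flock -n "$LOCKFILE" -c 'echo lock acquired'; then
    echo "file locking looks healthy"
else
    echo "lock request failed or would have blocked"
fi
```

On storage with a broken lock manager, a blocking request (without `-n`) can hang indefinitely, which matches the symptoms described above.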

We have addressed this issue this morning (9/6, 10am) and you should no longer see hanging codes. Please report any ongoing issues to

[RESOLVED] Scratch storage problems

Posted by on Tuesday, 14 August, 2018
Update (08/15/2018): As suspected, internal data migrations were not happening automatically. We worked with the vendor to address the issue and it’s now once again safe to use the scratch storage. We’ll keep on monitoring the utilization just in case.
Original post:
We have received multiple reports of jobs crashing due to insufficient scratch storage, although physical usage is only at 38%.
We suspect that this issue is related to some disk pools that are not able to migrate data to other pools internally.
We are currently looking into this problem. In the meantime, we recommend not using the scratch space if possible, until we have a better understanding of the situation.
Thank you, and sorry for this inconvenience.

[COMPLETE] PACE quarterly maintenance – (Aug 9-11, 2018)

Posted by on Monday, 30 July, 2018

Update (Aug 10, 2018, 8:00pm): Our Aug 2018 maintenance is complete, one day ahead of schedule. All tasks were completed as planned. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible, and your data are available. As usual, there are a small number of straggling nodes we will address over the coming days.

Please note the important changes regarding decommissioned login nodes, including the commonly used force-6 headnode.
Our next maintenance period is scheduled for Thursday, Nov 1 through Saturday, Nov 3, 2018.
Original message:

The next PACE maintenance will start on 8/9 (Thu) and may take up to 3 days to complete, as scheduled.

As usual, jobs with long walltimes will be held by the scheduler to ensure that no active jobs will be running when systems are powered off. These jobs will be released as soon as the maintenance activities are complete. If a shorter walltime would still give such jobs enough time to complete successfully, you can reduce their walltime so they finish before 6am on 8/9 and resubmit them.
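As a sketch of what that looks like in a Torque/PBS-style job script (the job name, resource request, and executable below are hypothetical; only the walltime line matters here):

```shell
#PBS -N my_job                 # hypothetical job name
#PBS -l nodes=1:ppn=4          # hypothetical resource request
#PBS -l walltime=12:00:00      # shortened so the job can finish
                               # before the 6am hold on 8/9

cd "$PBS_O_WORKDIR"
./run_simulation               # hypothetical executable
```

After lowering the walltime, resubmit with `qsub`; the scheduler only holds jobs whose requested walltime overlaps the maintenance window.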

Planned Tasks


  • (some user action needed) Most PACE headnodes (login nodes) are currently virtual machines (VMs) with slow response times and sub-optimal storage performance, which are often the cause of reported slowness.

We are in the process of replacing these VMs with more capable physical servers. After the maintenance day, login attempts to these VMs will be rejected with a message telling you which hostname to use instead. In addition, we are sending each user a customized email with a list of old and new login nodes. Please don’t forget to configure your SSH clients to use these new hostnames.

Simply, “” will be used for all shared clusters and “” for dedicated clusters. Once you log in, you’ll be redirected automatically to one of several physical nodes (e.g. login-s1, login-d2, …) depending on their current load.

There will be no changes to clusters which already come with a dedicated (and physical) login node (e.g. gryphon, asdl, ligo, etc)
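Once you receive your customized email, an entry in `~/.ssh/config` saves retyping the new hostname. The alias, hostname, and username below are all hypothetical placeholders to be replaced with the values from that email:

```
# ~/.ssh/config -- placeholder values; substitute the hostnames
# from your PACE notification email.
Host pace
    HostName <new-login-hostname>
    User <your-gt-username>
```

With this in place, `ssh pace` connects to the new login node directly.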

  • (some user action needed) As some users have already noticed, user cron jobs can no longer be edited (e.g. crontab -e) on the headnodes. This is intentional: access to the new login nodes (login-d and login-s) is dynamically routed to different servers depending on their load, so you might not see the cron jobs you have installed the next time you log in to one of these nodes. For this reason, only PACE admins can install cron jobs on behalf of users to ensure consistency (only login-d1 and login-s1 will be used for cron jobs). If you need to add or edit cron jobs, please contact If you already have user cron jobs set up on one of the decommissioned VMs, they will be moved to login-d1 or login-s1 during the maintenance so they’ll continue to run.
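Before the maintenance, it may be worth keeping your own copy of any cron jobs you have installed, so you can verify them after the move. A minimal sketch (the backup filename is hypothetical):

```shell
# Save the current user crontab, if any, to a file that can be
# reviewed or shared with PACE admins; note when none exists.
crontab -l > "$HOME/crontab_backup.txt" 2>/dev/null \
    || echo "no user crontab found on this node"
```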


  • (no user action needed) Add a dedicated protocol node to the GPFS system to increase capacity and improve response times for non-InfiniBand connected systems. This system will gradually replace the IB gateway systems currently in operation.
  • (no user action needed) Replace batteries in DDN/GPFS storage controllers


  • (no user action needed) Upgrades to the DNS appliances in both PACE datacenters
  • (no user action needed) Add redundant storage links to specific clusters


  • (no user action needed) Perform network upgrades
  • (no user action needed) Replace devices that are out of support

[Resolved] Shared scheduler problems

Posted by on Sunday, 22 July, 2018
Update (07/22/2018, 2:30am): The scheduler is back in operation again after we cleared a large number of jobs submitted by a user. We’ll continue to monitor the system for similar problems and work with users to normalize their workflows.
The shared scheduler has been going through some difficulties, which appear to be due to the large number of job arrays submitted recently. We don’t know the exact cause yet, but we are aware of the problems and are currently working on a resolution.
Until this issue is resolved, commands like qsub and qstat will not work, and showq will return an incomplete list of jobs.
This problem only applies to job submission and monitoring; your running and queued jobs are otherwise safe.

The PACE Scratch storage just got faster!

Posted by on Friday, 20 July, 2018
We have made some improvements to the scratch file system by adding SSD drives for faster metadata management and data storage. We are pleased to report that this strategic allocation of a relatively small number of SSDs yielded impressive performance improvements, more than doubling write and read speeds (according to our standard benchmarks).
This work, performed under the guidance of the vendor, didn’t require any downtime and no jobs were impacted.
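If you’d like an informal check of write throughput on your own directories (this is not the formal benchmark we used; the target path is a hypothetical placeholder for a file on scratch), a quick `dd` write test looks like:

```shell
# Write 64 MiB and force it to disk (conv=fdatasync, GNU dd) so the
# reported rate reflects real I/O rather than the page cache.
TARGET="${TARGET:-/tmp/scratch_io_test.bin}"   # hypothetical path
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync
rm -f "$TARGET"
```

dd prints the elapsed time and throughput on completion; run it a few times and take the median, since a single run can be noisy.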
We hope you’ll enjoy the increased performance for faster, better research!