PACE A Partnership for an Advanced Computing Environment

June 29, 2021

pace-support script is disabled on PACE Clusters — please email pace-support directly for inquiries

Filed under: Uncategorized — Semir Sarajlic @ 9:35 am

Dear PACE Users,

It has come to our attention that we are not receiving support requests generated by the pace-support script, which allows submission of support tickets directly from PACE clusters. Our investigation is ongoing.

At this time, please email us directly from a non-PACE system for all support requests to ensure that we receive your message.

From our initial investigation, it appears that this outage began at some point in May. We apologize for any lost messages since then. If you have been trying to reach us via the pace-support script, please email us instead. You should receive an automated acknowledgement email from Service Desk when your request is successfully processed.

Please contact us with any questions.

The PACE Team

June 25, 2021

[Urgent] Hive Cluster Storage Controller Cable Replacement – Performance Impact

Filed under: Uncategorized — Semir Sarajlic @ 5:13 pm

[Update – 06/25 11:40PM]

The storage controller cable on the Hive cluster was replaced this evening, and the controller was brought back online. Unfortunately, after the repairs, GPFS storage mounts became unavailable, which interrupted users' running jobs this evening. We briefly paused the scheduler while we restarted the GPFS services across the cluster. The storage mounts have been restored, and the scheduler has resumed.

Users' jobs that were running or queued between approximately 7:00pm and 10:30pm today (6/25/2021) may have been interrupted; we recommend that users check on their jobs and resubmit them as needed. Please accept our sincerest apologies for this inconvenience.

We will continue to monitor the services and update as needed. If you have any questions, please contact us.

[Original Message – 06/25 5:12PM]

Dear Hive Users,

We are reaching out to inform you that one of the storage controllers for the Hive cluster has a bad cable that needs to be replaced to ensure optimal performance and data integrity. We have the replacement cable on hand and are in the process of replacing it this evening, Friday, 06/25/2021. This work will briefly impact storage performance, which users may experience as storage slowness, as we are routing all traffic to a secondary controller during the operation.

What’s happening and what we are doing: More specifically, PACE has observed a high failure rate among the disks in one of the enclosures attached to the storage controller with the bad cable. As a precaution, we will shut down that controller to unfail the disks and ensure the data integrity of the system. We will replace the cable this evening, during which time the controller will remain shut down. During this work, all storage traffic will be routed to a secondary controller that is fully operational. Given the anticipated load on the secondary controller, we expect users to experience some performance degradation.

How does this impact me: With only one storage controller in operation, users may experience storage slowness. In the highly unlikely event of a failure, this could cause storage downtime that would impact all users' running jobs; however, we do not anticipate any storage outage during this operation.

What we will continue to do: The PACE team will replace the cable, restore the storage to optimal operation, and update the community as needed.

Please accept our sincere apologies for any inconvenience this may cause. If you have any questions or concerns, please contact us.


The PACE Team

June 22, 2021

Phoenix scratch storage update

Filed under: Uncategorized — Michael Weiner @ 5:06 pm

We would like to remind you about the scratch storage policy on Phoenix. Scratch is designed for temporary storage and is never backed up. Each week, files not modified for more than 60 days are automatically deleted from your scratch directory. As part of Phoenix’s start-up, regular cleanup of scratch has now been implemented. Each week, users whose files are scheduled for deletion receive a warning email listing the files to be removed the following week, along with additional information. Those of you who used PACE prior to the migration to Phoenix, or who use Hive, are already familiar with this workflow.

Some of you will receive such an email this week. The first deletion of old scratch files in Phoenix will occur on July 7, covering files noted in these messages. We are extending the time beyond the normal one-week notification for this first round to give you time to adjust to this weekly process again.
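If you would like to preview which of your files fall under the 60-day policy before the warning email arrives, a standard `find` command can approximate the check. This is a rough sketch, not the actual PACE cleanup tooling: the `SCRATCH` path below is a placeholder, and the real cleanup process may apply additional criteria.

```shell
#!/bin/sh
# List files not modified in the last 60 days — likely candidates for
# scratch cleanup. SCRATCH defaults to ~/scratch here; adjust it to
# your actual scratch directory path.
SCRATCH="${SCRATCH:-$HOME/scratch}"

# -type f  : regular files only
# -mtime +60: last modified more than 60 days ago
find "$SCRATCH" -type f -mtime +60 -print
```

Note that merely reading a file does not update its modification time; to keep important results past the cleanup, copy them to project storage rather than relying on scratch.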

Phoenix project storage is the intended location for your important research data. You can find out more about Phoenix storage in our documentation.

Please contact us with any questions about how to manage your data stored on Phoenix.

June 18, 2021

[Resolved] Hive scheduler outage

Filed under: Uncategorized — Michael Weiner @ 5:14 pm

The Hive scheduler experienced an outage this afternoon, as the resource and workload managers were unable to communicate. Our team identified the cause as a missing library file and corrected the issue, restoring functionality at approximately 5 PM today.
Jobs submitted this afternoon would not have been able to start until the repair was implemented. Already-running jobs should not have been affected.
Please contact us with any questions.

June 12, 2021

[Resolved] Phoenix scratch outage

Filed under: Uncategorized — Michael Weiner @ 4:27 pm

[Update 6/12/21 6:30 PM]

Phoenix Lustre scratch has been restored. We paused the scheduler at 4:40 PM to prevent additional jobs from starting and resumed scheduling at 6:20 PM. As noted, please contact us with the job number for any job that began prior to 4:40 PM and was affected by the scratch outage, in order to receive a refund.

[Original post, 6/12/21 4:30 PM]

We are experiencing an outage on Phoenix’s Lustre scratch storage. Our team is currently investigating and has confirmed that this issue is related to the scratch mount and does not affect home or project storage. Users may be unable to list, read, or write files in their scratch directories.
If your running job has failed, or runs without producing output, as a result of this outage, please contact us with the affected job number(s), and we will refund the value of the job(s) to your charge account. Please refrain from submitting additional jobs that use your networked Lustre scratch directory until the service is repaired, in order to avoid increasing the number of failed jobs.
