PACE: A Partnership for an Advanced Computing Environment

April 18, 2020

[Resolved again] Rich scratch mount down

Filed under: Uncategorized — Michael Weiner @ 6:35 pm

[Update 4/19/20 7:15 AM]

In coordination with our support vendor, we restored access to all scratch volumes at approximately 11:30 PM last night. Users on the affected scratch volumes should check any jobs that ran yesterday and resubmit if the job failed.
We are continuing to work with the support vendor to determine the source of the issue and to make hardware changes that will improve the reliability of the scratch system in Rich going forward. Thank you for your patience yesterday. Please contact us with any remaining concerns.


[Update 4/18/20 8:00 PM]

We are experiencing ongoing issues with our scratch filesystem. Users on volumes 1, 2, and 6 of scratch are currently unable to access their scratch directories. Volumes 0, 3, 4, 5, 7, 8, and 9 are unaffected.
You can identify your scratch volume by running “ls -l” (or the common alias “ll”) in your home directory and looking at the destination of the scratch symbolic link. The volume is the single digit (0-9) in the path component immediately before your username at the end of the path.
For example, “scratch -> /gpfs/scratch1/8/gburdell3” means that George is in scratch volume 8.
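The check above can also be scripted. This is a minimal sketch, not an official PACE tool: it assumes the symlink layout shown in the example (/gpfs/scratchN/&lt;digit&gt;/&lt;username&gt;), and uses the example path as a stand-in; on a real login node you would set target with readlink instead.

```shell
# Hypothetical helper: extract the scratch volume digit from the symlink target.
# On a real PACE login node, use:  target=$(readlink "$HOME/scratch")
target=/gpfs/scratch1/8/gburdell3        # example path from the post above

# The volume is the directory component just before the username,
# so take the basename of the parent directory.
volume=$(basename "$(dirname "$target")")

echo "scratch volume: $volume"           # prints "scratch volume: 8"
```

Users whose printed volume is 1, 2, or 6 are on the affected volumes; all others are unaffected.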

We are currently working to repair access to scratch and will update you when that is complete. We apologize for the continued disruption.


[Update 4/18/20 5:15 PM]

We have restored access to the GPFS mounted scratch filesystem in Rich, and compute nodes are again online and accepting jobs.
During a routine disk swap this morning, one of the dual controllers needed to be restarted, which caused an unexpected disruption. The system was automatically taken offline to preserve data integrity. We have recovered and verified the filesystem, and nodes are back online. Users should check any jobs that were running earlier today, especially those that were accessing scratch, and resubmit if the job failed.
A few nodes will need additional fixes and remain offline. These will be released individually as they are repaired.
Please note that systems in Coda (Hive and testflight-coda) were unaffected. CUI/ITAR clusters in Rich were also unaffected.
Again, we apologize for the disruption. Please contact us with any remaining concerns.


[Original Post]

The GPFS mounted scratch system (~/scratch) in Rich is currently down again. This means that you cannot currently access your scratch directory, and jobs writing to scratch will fail.
Due to the loss of the scratch mount, most PACE nodes are now marked “down or offline” to prevent new jobs from starting and failing.
We are working to restore the mount and will update you when a repair is in place. We apologize for the disruption.

PACE systems in Coda (Hive and testflight-coda) are unaffected.
