PACE: A Partnership for an Advanced Computing Environment

March 31, 2023

Phoenix Project & Scratch Storage Cables Replacement

Filed under: Uncategorized — Marian Zvada @ 4:53 pm

WHAT’S HAPPENING?
Two cables on the Phoenix Lustre device, which hosts project and scratch storage, need to be replaced: one connecting to controller 0 and one connecting to controller 1. The cables will be replaced one at a time, and the work is expected to take about 3 hours.

WHEN IS IT HAPPENING?
Monday, April 3rd, 2023, starting at 9 AM EDT.

WHY IS IT HAPPENING?
Required maintenance.

WHO IS AFFECTED?
All users may experience a temporary storage access outage and subsequently decreased performance.

WHAT DO YOU NEED TO DO?
Because a redundant controller remains active while work is done on one cable at a time, there should not be an outage during the cable replacement. If storage unavailability during the replacement does become an issue, your job may fail or run without making progress. If you have such a job, please cancel it and resubmit it once storage availability is restored.
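For reference, a failing or stalled job can be cancelled from the command line and resubmitted later; the job ID and script name below are placeholders for your own:

    # Cancel the affected job (12345 is a placeholder job ID)
    scancel 12345

    # Once storage availability is restored, resubmit the batch script
    # (my_job.sbatch is a placeholder for your own script)
    sbatch my_job.sbatch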

WHO SHOULD YOU CONTACT FOR QUESTIONS?
For questions, please contact PACE at pace-support@oit.gatech.edu.

Hive Project & Scratch Storage Cable Replacement

Filed under: Uncategorized — Marian Zvada @ 4:53 pm

WHAT’S HAPPENING?
Two cables connecting one of the two controllers of the Hive Lustre device need to be replaced. The cables will be replaced one at a time, and the work is expected to take about 3 hours.

WHEN IS IT HAPPENING?
Monday, April 3rd, 2023, starting at 9 AM EDT.

WHY IS IT HAPPENING?
Required maintenance.

WHO IS AFFECTED?
All users may experience a temporary storage access outage and subsequently decreased performance.

WHAT DO YOU NEED TO DO?
Because a redundant controller remains active while work is done on one cable at a time, there should not be an outage during the cable replacement. If storage unavailability during the replacement does become an issue, your job may fail or run without making progress. If you have such a job, please cancel it and resubmit it once storage availability is restored.
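One way to check whether a running job has stopped making progress is to look at how recently its output file was written; the job ID and file name below are placeholders:

    # List your currently running jobs
    squeue -u $USER -t RUNNING

    # If the job's output file has not been written to in a long time,
    # the job may be stalled (slurm-12345.out is a placeholder file name)
    ls -l slurm-12345.out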

WHO SHOULD YOU CONTACT FOR QUESTIONS?
For questions, please contact PACE at pace-support@oit.gatech.edu.

Connecting new cooling doors to power

Filed under: Uncategorized — Marian Zvada @ 4:51 pm

[Updated 2023/04/04, 12:25PM ET]

Electricians needed to complete some additional checks before performing the final connection, so the task has been rescheduled for Thursday, April 6.

[Original post 2023/03/31, 4:51PM ET]

WHAT’S HAPPENING?
To complete the Coda data center expansion on time and under budget, low-risk electrical work will be performed: the 12 additional uSystems cooling doors will be wired to the distribution panels and left powered off. Adding the circuit breakers is the only work on the “powered” side of the circuits.

WHEN IS IT HAPPENING?
Tuesday, April 4th, 2023; the work will be performed during business hours.

WHY IS IT HAPPENING?
Required maintenance.

WHO IS AFFECTED?
No user jobs should be affected. The connection work is very low risk, as most of it will be done on the “unpowered” side of the panel. In the worst case, we could lose power to up to 20 cooling doors, which we expect to recover in less than 1 minute. If recovery takes longer than 5 minutes, we will initiate an emergency power-down of the affected nodes.

WHAT DO YOU NEED TO DO?
Nothing.

WHO SHOULD YOU CONTACT FOR QUESTIONS?
For questions, please contact PACE at pace-support@oit.gatech.edu.

March 16, 2023

Phoenix project storage outage

Filed under: Uncategorized — Michael Weiner @ 2:48 pm

[Updated 2023/03/17 3:30 PM]

Phoenix project storage is again available, and we have resumed the scheduler, allowing new jobs to begin. Queued jobs will begin as resources are available.

The storage issue arose when one metadata server rebooted shortly after 1:00 PM yesterday, and the high-availability configuration automatically switched to the secondary server, which then became overloaded. After extensive investigation yesterday evening and today, in collaboration with our storage vendor, we identified and stopped a specific series of jobs that was heavily taxing storage, and we also replaced several cables to fully restore Phoenix project storage availability.

Jobs that were running as of 1:00 PM yesterday and that have failed, or will fail, due to the project storage outage will be refunded to the charge account provided. Please resubmit these failed jobs to Slurm to continue your research.
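If you are unsure which of your jobs were affected, sacct can list jobs that failed since the outage began; the timestamp below matches the start of the incident, and the format fields shown are one possible selection:

    # List job allocations (-X) that have failed since 1:00 PM on March 16
    sacct -X --starttime=2023-03-16T13:00 --state=FAILED \
          --format=JobID,JobName,Partition,State,End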

Thank you for your patience as we repaired project storage. Please contact us with any questions.

[Updated 2023/03/16, 11:55PM ET]

We’re still experiencing significant slowness of the filesystem. We will keep job scheduling paused overnight, and the PACE team will resume troubleshooting as early as possible in the morning.

[Updated 2023/03/16, 6:50PM ET]

Troubleshooting continues with the vendor’s assistance. The file system is currently stable, but one of the metadata servers is still under an abnormal workload. We are working to resolve this issue to avoid additional file system failures.

[Original post 2023/03/16, 2:48PM ET]

Summary: Phoenix project storage is currently unavailable. The scheduler is paused, preventing any additional jobs from starting until the issue is resolved.

Details: A metadata server (MDS) for the Phoenix Lustre parallel filesystem, which provides project storage, encountered errors and rebooted. The PACE team is investigating and working to restore project storage availability.

Impact: Project storage is slow or unreachable at this time. Home and scratch storage are not impacted, and already-running jobs using those directories should continue. Jobs using project storage may not be making progress. To avoid further job failures, we have paused the scheduler, so no new jobs will start on Phoenix, regardless of which storage they use.

Thank you for your patience as we investigate this issue and restore Phoenix storage to full functionality.

For questions, please contact PACE at pace-support@oit.gatech.edu.

March 7, 2023

New compute nodes on the Phoenix cluster

Filed under: Uncategorized — Marian Zvada @ 12:45 pm

In February, we added several compute nodes to the Phoenix cluster. This gives Phoenix users the opportunity to use more powerful nodes for their computations and should decrease the waiting time for high-demand hardware.

There are three groups of new nodes:

  1. 40 32-core Intel CPU high-memory nodes (768 GB of RAM per node). These nodes are part of our “cpu-large” partition, and this addition increases the number of “cpu-large” nodes from 68 to 108. The nodes have dual Intel Xeon Gold 6226R processors @ 2.9 GHz (32 cores per node rather than the usual 24). Any job that requests more than 16 GB of memory per CPU will land on the “cpu-large” partition.
  2. 4 AMD CPU nodes with 128 cores per node. These nodes are part of our “cpu-amd” partition, and this addition increases the number of “cpu-amd” nodes from 4 to 8. The nodes have dual AMD Epyc 7713 processors @ 2.0 GHz with 512 GB of memory. For comparison, most of the older Phoenix compute nodes have 24 cores per node (with Intel processors rather than AMD). To target these nodes specifically, specify the flag “-C amd” in your sbatch script or salloc command: https://docs.pace.gatech.edu/phoenix_cluster/slurm_guide_phnx/#amd-cpu-jobs
  3. 7 nodes with 64 AMD CPU cores and two Nvidia A100 GPUs per node (40 GB of GPU memory each). These nodes are part of our “gpu-a100” partition, and this addition increases the number of “gpu-a100” nodes from 5 to 12. These nodes have dual AMD Epyc 7513 processors @ 2.6 GHz (64 cores per node) with 512 GB of RAM. To target these nodes, specify the flag “--gres=gpu:A100:1” (for one GPU per node) or “--gres=gpu:A100:2” (for both GPUs on each requested node) in your sbatch script or salloc command; see the example script after this list: https://docs.pace.gatech.edu/phoenix_cluster/slurm_guide_phnx/#gpu-jobs
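As a sketch of how these flags fit into a batch script, here is a minimal example requesting one A100 GPU; the job name, charge account, and workload are placeholders, and the commented-out lines show the alternatives for the AMD CPU nodes and for memory requests that route a job to the “cpu-large” partition:

    #!/bin/bash
    #SBATCH -J gpu-example               # placeholder job name
    #SBATCH -A gts-example               # placeholder charge account
    #SBATCH -N 1 --gres=gpu:A100:1       # one node with one A100 GPU
    #SBATCH -t 1:00:00                   # 1-hour walltime
    ##SBATCH -C amd                      # alternative: target the 128-core AMD CPU nodes
    ##SBATCH --mem-per-cpu=32G           # requests over 16 GB per CPU land on cpu-large

    # Placeholder workload: report the GPU allocated to this job
    nvidia-smi

The same flags work with salloc for interactive jobs.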

To see the up-to-date specifications of the Phoenix compute nodes, please refer to our website: 

https://docs.pace.gatech.edu/phoenix_cluster/slurm_resources_phnx/

If you have any other questions, please send us a ticket by emailing pace-support@oit.gatech.edu.
