PACE: A Partnership for an Advanced Computing Environment

September 18, 2024

PACE Phoenix Storage Hotfix – Sept 24th, 2024

Filed under: Uncategorized — Eric Coulter @ 3:30 pm

WHAT’S HAPPENING? 

Following a recent period of degraded performance in our Project storage system (coda1), we will be working with our storage vendor to apply updates to the underlying device on Tuesday, September 24th. This should not cause an outage, but some operations may see decreased performance while the patches are deployed. Because there is a non-zero risk of an outage, we will work hand-in-hand with the vendor throughout the operation and monitor performance closely. Please let us know if you observe any impact to your work during that time, and we will refund affected jobs.

WHEN IS IT HAPPENING? 
The update process will begin on Tuesday morning, Sept 24th, 2024. 
We will send an announcement when the update is complete. 

WHY IS IT HAPPENING? 

Our device vendor has recommended patches to the storage devices underlying Phoenix Project storage (coda1) to improve reliability and performance, following recently observed degraded performance of the metadata servers on our Lustre filesystem.

WHO IS AFFECTED? 

Phoenix users *may* experience slower performance of Phoenix Project storage during the update, and there is a low risk of outage. 

WHAT DO YOU NEED TO DO? 

Please let us know if you observe any impact to work using the Phoenix Project filesystem (coda1) during that time, and we will refund affected jobs.

WHO SHOULD YOU CONTACT FOR QUESTIONS? 

For any questions, please contact PACE at pace-support@oit.gatech.edu.

September 8, 2024

PACE-Wide Emergency Shutdown – September 8, 2024

Filed under: Uncategorized — Grigori Yourganov @ 9:11 pm

[Update 9/11/24 2:51 PM]

Dear Hive community, 

The emergency maintenance on the Coda datacenter has been completed and the Hive cluster has passed our tests. The cluster is back in production and is accepting jobs on BOTH the RHEL7 and RHEL9 environments; all jobs that were held by the scheduler have been released. 

[Update 9/11/24 10:52 AM]

Dear Firebird users,

The emergency maintenance on the Coda datacenter has been completed and the Firebird cluster has passed our tests. The cluster is back in production and is accepting jobs on BOTH the RHEL7 and RHEL9 environments; all jobs that were held by the scheduler have been released.

As a reminder:

  • RHEL7 Firebird nodes remain accessible at the usual address, login-<project>.pace.gatech.edu.
  • RHEL9 Firebird nodes can be accessed via ssh at login-<project>-rh9.pace.gatech.edu for testing new software.
  • The majority of our software stack has been rebuilt for the RHEL9 environment. We strongly encourage you to test your software on RHEL9, and please let us know if anything is missing!
  • For more information, please see our Firebird RHEL9 documentation page.

Please take the time to test your software and workflows on the RHEL9 Firebird Environment (accessible via login-<project>-rh9.pace.gatech.edu) and let us know if anything is missing!
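
For example, connecting to the RHEL9 environment from a terminal might look like the sketch below (the project name "myproj" and username "gburdell3" are hypothetical placeholders; substitute your own Firebird project and GT username):

    # Hypothetical project "myproj" and username "gburdell3"; replace with your own
    ssh gburdell3@login-myproj-rh9.pace.gatech.edu

Once connected, your usual modules and Slurm commands should be available for testing your workflows.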

The next Maintenance Period will be January 13-16, 2025.

[Update 9/9/24 6:00 PM]

Due to an emergency with a cooling system at the Research Hall, all PACE clusters have been shut down since the morning of Sunday, September 8, 2024. The datacenter provider, Data Bank, has identified an alternate replacement part, which has been brought onsite and is being deployed and tested. At this time, we estimate that Data Bank will have restored cooling to the Research Hall by close of business on Tuesday, September 10, 2024, at which point PACE will begin powering up and testing infrastructure and bringing services back online. We plan to provide additional updates on the restoration of services by the evening of Wednesday, September 11, 2024.

Please visit https://status.gatech.edu for updates.

Access to head nodes and file systems is available.

[Update 9/9/24 9:00 AM]

Due to an emergency with a cooling system at the Research Hall, all PACE clusters have been shut down since the morning of Sunday, September 8, 2024. While a time frame for resolution is currently unknown, we are actively working with the vendor, Data Bank, to resolve the issue and restore service to the data center as soon as possible. We will provide updates as they are available. Please visit https://status.gatech.edu for updates. 

Access to login nodes and filesystems (via Globus, OpenOnDemand or direct connection to login nodes) is still available.

[Original Post 9/8/24]

WHAT’S HAPPENING?  

Due to an emergency with a cooling system at the Research Hall, all PACE clusters had to be shut down on the morning of Sunday, September 8, 2024. 

WHEN IS IT HAPPENING?  

Sunday, September 8, 2024, starting at 7:30 AM EDT.

WHY IS IT HAPPENING?  

PACE has been notified by IOC that temperatures in the CODA building Research Hall are rising due to the failure of a water pump in the cooling system. An emergency shutdown had to be executed to protect equipment. The physical infrastructure provider for our datacenter is evaluating the situation.

WHO IS AFFECTED?  

All PACE Users. Any running jobs on ALL PACE Clusters (Phoenix, Hive, Firebird, ICE, and Buzzard) had to be stopped at 7:30 AM. For Phoenix and Firebird, we will by default provide refunds for interrupted jobs on paid accounts only. Please let us know if this causes a significant loss of funds that prevents you from continuing work on your free-tier Phoenix allocation!

WHAT DO YOU NEED TO DO?  

Wait patiently; we will communicate as soon as the clusters are ready to resume work.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?  

For any questions, please contact PACE at pace-support@oit.gatech.edu.  

August 26, 2024

PACE-Wide Emergency Shutdown – Sept 3, 2024

Filed under: Uncategorized — Eric Coulter @ 3:36 pm

WHAT’S HAPPENING? 

It is necessary to shut down all PACE clusters next week to make repairs in the datacenter.

The repair and cluster resumption will take up to one day to complete, require shutting down all nodes in the Research Hall, and must be done within the next few days.
 
This shutdown will NOT affect Globus access, login-node access, or access to any storage locations.  

WHEN IS IT HAPPENING? 

Tuesday, September 3rd, 2024, starting at 4 PM EDT. Compute nodes are expected to return to availability on the afternoon of Wednesday, September 4th.  

WHY IS IT HAPPENING? 

Databank, the physical infrastructure provider for our datacenter, detected an issue over the weekend in which multiple cooling doors reported high-temperature alerts. They traced the issue to a high-temperature chiller sensor, which was temporarily bypassed to stop the repeated alerts and needs to be replaced to avoid additional issues.

This outage is necessary to prevent widespread catastrophic failure of the servers in the research hall.  

WHO IS AFFECTED? 

All PACE Users. Any running jobs on ALL PACE Clusters (Phoenix, Hive, Firebird, ICE, and Buzzard) will be stopped at 4 PM EDT on September 3rd, 2024. For Phoenix and Firebird, we will by default provide refunds for interrupted jobs on paid accounts only. Please let us know if this causes a significant loss of funds that prevents you from continuing work on your free-tier Phoenix allocation!

WHAT DO YOU NEED TO DO? 

Wait patiently; we will communicate as soon as the clusters are ready to resume work. 

WHO SHOULD YOU CONTACT FOR QUESTIONS? 

For any questions, please contact PACE at pace-support@oit.gatech.edu.

July 15, 2024

PACE Maintenance Period Aug 06-09 2024

Filed under: Uncategorized — Eric Coulter @ 3:36 pm

[Update 07/31/24 02:23pm]

WHEN IS IT HAPPENING?

PACE’s next Maintenance Period starts at 6:00 AM on Tuesday, August 6th (08/06/2024) and is tentatively scheduled to conclude by 11:59 PM on Friday, August 9th (08/09/2024). An extra day is needed to accommodate the additional testing required for both the RHEL7 and RHEL9 versions of our systems as we migrate to the new operating system. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard, along with their associated RHEL9 environments) as soon as maintenance work and testing are completed. We plan to focus on the largest portion of each system first, to ensure access to data and compute capabilities is restored as soon as possible.

Also, we have CANCELED the November maintenance period for 2024 and do NOT plan to have another maintenance outage until early 2025.

WHAT DO YOU NEED TO DO?   

As usual, the scheduler will hold any jobs whose resource requests would overlap the Maintenance Period until after the maintenance is complete. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, and Buzzard. Please plan accordingly for the projected downtime. CEDAR storage will not be affected.
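
In practice, the scheduler compares a job’s requested wall time against the start of the maintenance window: a job starts only if its time limit allows it to finish before 6:00 AM on August 6th; otherwise it is held until the maintenance ends. A minimal sketch of a submission with an explicit wall time (the job name and application are hypothetical; add your usual partition/account directives):

    #!/bin/bash
    #SBATCH -J pre-maintenance-run     # hypothetical job name
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=4
    #SBATCH -t 12:00:00                # requested wall time; the job starts only if this fits before the window

    cd $SLURM_SUBMIT_DIR
    ./my_analysis                      # hypothetical application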

For Phoenix, we are migrating 427 nodes (~30% of the ~1400 total nodes on Phoenix) from RHEL7 to RHEL9 in August. The new RHEL9 nodes will not be available immediately after the Maintenance Period is completed but will come online the following week (August 12th – 16th). After this migration, about 50% of the Phoenix cluster will be migrated over to RHEL9, including all but 20 GPU nodes. Given this, we strongly encourage Phoenix users who have not migrated their workflows over to RHEL9 to do so as soon as possible.

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [Phoenix and Hive] Continue migrating nodes to the RHEL 9 operating system
  • Migrate 427 nodes to RHEL9 in Phoenix 
  • Migrate 100 nodes to RHEL9 in Hive 
  • [Phoenix, Hive, Firebird, ICE] GPU nodes will receive new versions of the NVIDIA drivers, which *may* impact locally built tools using CUDA (see the quick check sketched after this list). 
  • [Phoenix] H100 GPU users on Phoenix should use the RHEL9 login node to avoid module environment issues.
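
For users with locally built CUDA tools, a quick post-update sanity check is sketched below (the module and binary names are placeholders for whatever you normally use, not PACE-specific values):

    # Report the updated driver version and the maximum CUDA version it supports
    nvidia-smi
    # Reload the toolkit you built against (placeholder module name) and re-run a small test
    module load cuda
    ./my_cuda_tool --help || echo "consider rebuilding against the updated driver/toolkit"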

ITEMS NOT REQUIRING USER ACTION: 

  • [all] Databank cooling loop work, which will require shutdown of all systems 
  • [all] Upgrade to RHEL 9.4 from 9.3 on all RHEL9 nodes – should not impact user-installed software 
  • [all] Research and Enterprise Hall Ethernet switch code upgrade 
  • [all] Upgrade PACE welcome emails 
  • [all] Upgrade Slurm scheduler nodes to RHEL9 
  • [CEDAR] Adding SSSD and IDmap configurations to RHEL7 nodes to allow correct group access across PACE resources 
  • [Phoenix] Updates to Lustre storage to improve stability  
  • File consistency checks across all metadata servers, appliance firmware updates, external metadata server replacement on project storage 
  • [Phoenix] Install additional InfiniBand interfaces to HGX servers 
  • [Phoenix] Migrate OOD Phoenix RHEL9 apps 
  • [Phoenix, Hive] Enable Apptainer self-service 
  • [Phoenix, ICE] Upgrade Phoenix/Hive/ICE subnet managers to RHEL9 
  • [Hive] Upgrade Hive storage for new disk replacement to take effect 
  • [ICE] Updates to Lustre scratch storage to improve stability 
  • File consistency checks and appliance firmware updates 
  • [ICE] Retire ICE enabling rules for ECE 
  • [ICE] Migrate ondemand-ice server to RHEL9 

WHY IS IT HAPPENING?

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.

WHO IS AFFECTED?  

All users across all PACE clusters.

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.

Thank you,

-The PACE Team 

[Update 07/15/24 03:36pm]

WHEN IS IT HAPPENING?  

PACE’s next Maintenance Period starts at 6:00 AM on Tuesday, August 6th (08/06/2024) and is tentatively scheduled to conclude by 11:59 PM on Friday, August 9th (08/09/2024). The additional day is needed to accommodate the extra testing required by the presence of both RHEL7 and RHEL9 versions of our systems as we migrate to the new operating system. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard, along with their associated RHEL9 environments) as soon as maintenance work and testing are completed. We plan to focus on the largest portion of each system first, to ensure access to data and compute capabilities is restored as soon as possible.
 
Additionally, we have cancelled the November maintenance period for 2024 and do not plan to have another maintenance outage until early 2025.

WHAT DO YOU NEED TO DO?   

As usual, the scheduler will hold any jobs whose resource requests would overlap the Maintenance Period until after the maintenance is complete. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, and Buzzard. Please plan accordingly for the projected downtime. CEDAR storage will not be affected.

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [Phoenix and Hive] Continue migrating nodes to the RHEL 9.3 operating system.  

ITEMS NOT REQUIRING USER ACTION: 

  • [all] Databank cooling loop work, which will require shutdown of all systems 
  • [CEDAR] Adding SSSD and IDmap configurations to allow correct group access across PACE resources 
  • [Phoenix] Updates to Lustre storage to improve stability  
  • File consistency checks across all metadata servers, appliance firmware updates, external metadata server replacement on /storage/coda1 

WHY IS IT HAPPENING?  

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.  

WHO IS AFFECTED?  

All users across all PACE clusters.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.  

Thank you,  

-The PACE Team 

July 8, 2024

Phoenix project storage outage

Filed under: Uncategorized — Michael Weiner @ 4:40 pm

[Update 7/9/24 12:00 PM]

Phoenix project storage has been repaired, and the scheduler has resumed. All Phoenix services are now functioning.

We have updated a parameter to throttle the number of operations on the metadata servers to improve stability.

Please contact us at pace-support@oit.gatech.edu if you encounter any remaining issues.

[Original Post 7/8/24 4:40 PM]

Summary: Phoenix project storage is currently inaccessible. We have paused the Phoenix scheduler, so no new jobs will start.

Details: Phoenix Lustre project storage has experienced slowness and been intermittently unresponsive throughout the day today. The PACE team identified a few user jobs causing a high workload on the storage system, but the load remained high on one metadata server, which eventually stopped responding. Our storage vendor recommended a failover to a different metadata server as part of a repair, but this left the system fully unresponsive. PACE and our storage vendor continue to work on restoring full access to project storage.

Impact: The Phoenix scheduler has been paused to prevent new jobs from hanging, so no new jobs can start. Currently-running jobs may not make progress and should be cancelled if stuck. Home and scratch directories remain accessible, but an ls of the full home directory may hang due to the symbolic link to project storage.
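
If a plain ls in your home directory hangs, it is often because the colorized ls alias tries to stat the symlink target on the unresponsive project filesystem; a sketch of a workaround that avoids touching that mount:

    # Bypass the color alias so ls does not stat the project-storage symlink target
    \ls ~
    # Equivalent: disable coloring explicitly
    ls --color=never ~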

Thank you for your patience as we work to restore Phoenix project storage. Please contact us at pace-support@oit.gatech.edu with any questions. You may visit https://status.gatech.edu/ for additional updates.

July 5, 2024

IDEaS storage Maintenance

Filed under: Uncategorized — Deepa Phanish @ 1:04 pm

WHAT’S HAPPENING?

One of the IDEaS IntelliFlash controller cards needs to be reseated. Before reseating the card, we will fail over all resources to controller B, shut down controller A, pull the enclosure out, and reseat the card. The activity will take about 2 hours to complete.

WHEN IS IT HAPPENING?

Monday, July 8th, 2024, starting at 9 AM EDT.

WHY IS IT HAPPENING?

We are working with the vendor to resolve an issue discovered while debugging the controllers and to restore the system to a healthy status.

WHO IS AFFECTED?

Users of the IDEaS storage system will notice decreased performance since all services will be switched over to a single controller. It is possible that access will be interrupted while the switch happens. 

WHAT DO YOU NEED TO DO?

During the maintenance, data access should be preserved, and we do not expect downtime. However, there have been cases in the past where storage has become inaccessible. If storage does become unavailable during the replacement, jobs accessing the IDEaS storage may fail or run without making progress. If you have such a job, please cancel it and resubmit it once storage is accessible again.
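
A minimal example of cancelling and later resubmitting a stuck job (the job ID and script name below are hypothetical):

    squeue -u $USER          # find the ID of the affected job
    scancel 1234567          # cancel it (hypothetical job ID)
    # ...once storage access is restored...
    sbatch my_job.sbatch     # resubmit the job (hypothetical script name)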

WHO SHOULD YOU CONTACT FOR QUESTIONS?

For any questions, please contact PACE at pace-support@oit.gatech.edu.

June 20, 2024

[OUTAGE] Phoenix Project Storage

Filed under: Uncategorized — Eric Coulter @ 1:36 pm

[Update 06/20/2024 04:58pm]

Dear Phoenix Users,

Summary: The Phoenix cluster is back online. The scheduler has been unpaused, jobs that were placed on hold have resumed, and the file system is ready for use.

Details: All the appliance components for Phoenix project storage were restarted, and file system consistency was confirmed. We’ll continue to monitor it and run additional consistency checks over the next few days.

Impact: If you were running jobs on Phoenix and using project storage, please verify that your jobs have not run into any issues. We will be issuing refunds for all impacted jobs, so please reach out to pace-support@oit.gatech.edu if you have encountered any issues.
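
One quick way to review how your recent jobs finished is Slurm’s accounting query; a short sketch (the start date is illustrative):

    # List your jobs since the outage began, with their final states and exit codes
    sacct -u $USER -S 2024-06-20 -o JobID,JobName,State,ExitCode,Elapsed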

Thank you for your patience,

-The PACE Team

[Update 06/20/2024 01:36 pm]

Summary: The metadata servers for Phoenix project storage (/storage/coda1) are currently down due to degraded performance.

Details: During additional testing with the storage vendor as part of the investigation into this morning’s performance issues, it was necessary to take the storage fully offline rather than resume service.

Impact: We have paused the scheduler for now, so you will not be able to start jobs on Phoenix. We will release the scheduler once we have verified that project storage is stable. Access to project storage (/storage/coda1) is currently interrupted; however, scratch storage (/storage/scratch1) is not affected. If you were running jobs on Phoenix and using project storage, please verify that your jobs have not run into any issues. We will be issuing refunds for all impacted jobs as usual.

Only project storage on Phoenix is affected – storage on Hive, ICE, Buzzard, and Firebird works without issues.

Thank you for your patience as we work with our storage vendor to resolve this outage. We will continue to provide updates as work continues.

Please contact us at pace-support@oit.gatech.edu with any questions.

Degraded Phoenix Project Storage Performance

Filed under: Uncategorized — Jeff Valdez @ 10:29 am

Summary: The metadata servers for Phoenix project storage (/storage/coda1) restarted on their own, and one of them is not responding, leading to degraded performance on the project storage file system.

Details: We have restarted the servers to restore access. Performance testing of the file system is ongoing. We will continue to monitor performance and work with the vendor to determine the cause.

Impact: We have paused the scheduler for now, so you will not be able to start jobs on Phoenix. We will release the scheduler once we have verified that storage is stable. Access to project storage (/storage/coda1) might have been interrupted for some users. If you are running jobs on Phoenix and using project storage, please verify that your jobs have not run into any issues. Only storage on Phoenix should be affected; storage on Hive, ICE, Buzzard, and Firebird works without issues.

June 18, 2024

IDEaS Storage Outage Resolved

Filed under: Uncategorized — Michael Weiner @ 10:13 am

Summary: PACE’s IDEaS storage was unreachable early this morning. Access was restored at approximately 9:00 AM.

Details: One controller on the IDEaS IntelliFlash storage became unresponsive, and the resource could not switch to the redundant controller. Rebooting both controllers restored access. PACE is working with our storage vendor to identify the cause.

Impact: IDEaS storage could not be reached from PACE or from external mounts during the outage. Any jobs on Phoenix or Hive using IDEaS storage would have failed. If you had a job on Phoenix running on IDEaS storage that failed, please email pace-support@oit.gatech.edu to request a refund.

Thank you for your patience as we resolved the issue this morning. Please contact us at pace-support@oit.gatech.edu with any questions.

June 7, 2024

Hive Storage Maintenance

Filed under: Uncategorized — Jeff Valdez @ 4:21 pm

WHAT’S HAPPENING?

One of the storage controllers in use for Hive requires a hard drive replacement to restore the high availability of the device. The activity takes about 2 hours to complete. 

WHEN IS IT HAPPENING?

Tuesday, June 11th, 2024, starting at 10 AM EDT.

WHY IS IT HAPPENING?

The failed drive limits the high availability of the controller.

WHO IS AFFECTED?

Users of the Hive storage system will notice decreased performance since all services will be switched over to a single controller. It is possible that access will be interrupted while the switch happens. 

WHAT DO YOU NEED TO DO?

During the hard drive replacement for the Hive cluster, one of the controllers will be shut down, and the redundant controller will take all the traffic. Data access should be preserved, and we do not expect downtime, but there have been cases in the past where storage has become inaccessible. If storage does become unavailable during the replacement, your job may fail or run without making progress. If you have such a job, please cancel it and resubmit it once storage is accessible again.

WHO SHOULD YOU CONTACT FOR QUESTIONS?

For any questions, please contact PACE at pace-support@oit.gatech.edu.

