PACE: A Partnership for an Advanced Computing Environment

March 19, 2025

Phoenix storage performance degraded

Filed under: Uncategorized — Michael Weiner @ 5:01 pm

[Update 3/21/25 12:30 PM]

Following the completion of the rebuild and copyback processes on the impacted redundant storage pool, Phoenix project storage performance has returned to normal. Please contact pace-support@oit.gatech.edu if you encounter any further issues.

[Original post 3/19/25 5:00 PM]

Summary: Performance of Phoenix project storage is currently degraded.

Details: Multiple disks in a redundant storage pool failed yesterday and today, and storage performance is degraded while the pool rebuilds.

Impact: Researchers may experience significant slowness in read and write performance on Phoenix project storage until the rebuild completes. Conda environments located in project storage may be very slow to load (even if the Python script being run is located elsewhere) or may fail to activate, and attempts to view project storage files via the OnDemand web portal may time out.
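For reference, here is a minimal sketch for checking whether your Conda environments live on project storage and may therefore be slow during the rebuild. The mount prefix used below is an assumption; adjust it to match your own allocation path.

```python
# Minimal sketch: flag Conda environments that live on Phoenix project storage.
# The mount prefix below is an assumption; adjust it to your own allocation path.
import json
import subprocess

PROJECT_PREFIX = "/storage/coda1"  # assumed project storage mount prefix

result = subprocess.run(
    ["conda", "env", "list", "--json"],
    capture_output=True, text=True, check=True,
)
for env_path in json.loads(result.stdout)["envs"]:
    note = "on project storage (may be slow during the rebuild)" if env_path.startswith(PROJECT_PREFIX) else "ok"
    print(f"{env_path}: {note}")
```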

Please visit https://status.gatech.edu for updates and contact pace-support@oit.gatech.edu with any questions.

February 19, 2025

[Advance Notice] Planned Spring Break (March 17-21) Downtime

Filed under: Uncategorized — Eric Coulter @ 5:10 pm

[Update 3/6/25]

Summary: All PACE compute nodes will be unavailable from 4:00 PM on Friday, March 14, through Tuesday, March 18, to repair a water leak in the Coda Datacenter Research Hall cooling system. Access to login nodes and data will remain available.  

Details: Due to a water leak discovered last month, a seal in the cooling system will be replaced at the start of Spring Break. A full replacement of the pump is planned for the May maintenance period, which will be extended by one day and is now scheduled for May 6-9, 2025, once all parts are available. Non-compute nodes in the Enterprise Hall will not be impacted by the Spring Break repair.

Impact: During the outage, it will not be possible to run any compute jobs on any PACE cluster (Phoenix, Hive, ICE, Firebird, Buzzard). Login nodes and storage systems will remain available. A reservation has been placed on all schedulers to prevent any jobs from starting if their walltime request extends past 4:00 PM on March 14; these jobs will be held until maintenance is complete.  

Thank you for your patience as we work to restore full functionality of the cooling system. You can read this message on our blog.  

Best, 

-The PACE Team 

[Original Post 2/19/25]

Summary: A pump seal failure has caused a water leak in the CODA Datacenter Research Hall cooling system. As a result, we are planning a two- to three-day outage (pending confirmation from Databank and the mechanical contractor) during the week of March 17-21, which we hope will be less disruptive because it falls over Spring Break. Access to login nodes and data will remain available, as these live in a different part of the datacenter. No compute services (Phoenix, Hive, ICE, Firebird, or Buzzard compute nodes) will be available. We will follow up once the exact days of the outage are finalized.

Details: A pump seal in the CODA research hall cooling system failed on Feb 16th. The leak is not currently impacting operation of any PACE resources. Databank is working on a full pump replacement (“flange-to-flange”) plan to address the issue. Databank is actively sourcing the pump and associated parts and coordinating with a new mechanical contractor. We currently target the pump replacement to occur during Spring Break (March 17-21); however, this target date could change based on supply chain constraints. The mechanical work is estimated to take one to two days, depending on whether additional damage or issues are identified during the pump replacement. Upon completion of the work, the PACE team will need one business day to conduct all necessary testing on the ~2,000 systems and release the five clusters currently hosted in the Research Hall (Phoenix, Hive, ICE/AI Makerspace, Firebird, and OSG Buzzard).

Being able to perform the work during Spring Break represents a best-case scenario. Databank is actively monitoring the leak and the overall health of the cooling system. Should the situation deteriorate quickly or a catastrophic failure occur, Databank will coordinate emergency repair work to replace the pump seal itself using available on-site spare parts. Under this scenario, a complete pump replacement would be coordinated during the planned PACE Maintenance period in May.  
 
We are striving to keep the shutdown as short as possible. A reservation has been placed on the cluster to prevent jobs from being cancelled by the shutdown; as a result, some jobs will be held until the outage is over.

Thank you for your patience as we work to recover from this situation.

Best, 

-The PACE Team 

February 4, 2025

[Notice] Phoenix Scheduler Account Issue

Filed under: Uncategorized — Eric Coulter @ 4:55 pm

Following the Slurm upgrade during the January 2025 maintenance window, the monthly usage reset did not execute as scheduled on February 1. Consequently, reported balances were lower than expected, as the reported usage still included January utilization. Having identified the issue, the PACE team manually reset usage across all accounts at 12:00 PM on February 4. Additionally, the vendor has been notified of the bug so that a patch can be provided before the next monthly cycle.

Any jobs that ran from the beginning of February until now will not count towards February usage, and any overages of pre-set limits will be refunded.  

We sincerely apologize for the inconvenience.  

Thank you and have a great day! 

PACE team 

January 6, 2025

[Complete] PACE Maintenance Period – January 13th-16th 2025

Filed under: Uncategorized — Eric Coulter @ 1:34 pm

WHEN IS IT HAPPENING?  

PACE’s next Maintenance Period starts at 6:00 AM on Monday, January 13, 2025, and is tentatively scheduled to conclude by 11:59 PM on Thursday, January 16, 2025. The extra day is needed to accommodate additional testing due to the presence of both RHEL7 and RHEL9 versions of our systems as we migrate to the new operating system. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard) as soon as maintenance work and testing are completed. We will prioritize ICE to support Spring courses as soon as possible; for the other clusters, we plan to release the largest portion of each system first (for Phoenix and Firebird, where both operating systems are present) to restore access to data and compute capabilities.

 
WHAT DO YOU NEED TO DO?   

As usual, the scheduler will hold any jobs whose resource requests would overlap the Maintenance Period until after maintenance is complete. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, and Buzzard. Please plan accordingly for the projected downtime. CEDAR storage will not be affected.
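As a rough illustration only (the window start time comes from this announcement), a quick way to check whether a walltime request would extend into the maintenance window and therefore be held:

```python
# Illustrative sketch: will a job submitted at a given time, with the given
# walltime, finish before the maintenance window starts at 6:00 AM on
# January 13, 2025? Jobs that would still be running then are held.
from datetime import datetime, timedelta
from typing import Optional

MAINTENANCE_START = datetime(2025, 1, 13, 6, 0)

def fits_before_maintenance(walltime_hours: float,
                            submit_time: Optional[datetime] = None) -> bool:
    """True if a job with this walltime would finish before maintenance begins."""
    start = submit_time or datetime.now()
    return start + timedelta(hours=walltime_hours) <= MAINTENANCE_START

# Example: a 72-hour request submitted at noon on January 9 still fits.
print(fits_before_maintenance(72, datetime(2025, 1, 9, 12, 0)))  # True
```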

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [Phoenix] Continue migrating nodes to the RHEL 9 operating system, which will complete after the maintenance period; after this, Phoenix will be 75% on the RHEL9 OS. 
  • [Hive] COMPLETE migrating nodes to the RHEL 9 operating system. 
  • [Phoenix and Hive] Default login behavior will change so that login-phoenix and login-hive will point to RHEL 9 login nodes rather than RHEL 7 nodes, which WILL trigger SSH warnings. For more information on SSH at PACE, see our documentation. (A sketch for clearing cached host keys follows this list.)
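A minimal sketch for clearing cached host keys so the new RHEL9 login nodes can be accepted cleanly; the fully qualified hostnames below are assumptions, so substitute the exact names you normally connect to:

```python
# Minimal sketch: remove cached SSH host keys for the PACE login aliases so the
# new RHEL9 hosts can be accepted on first connect. The fully qualified
# hostnames are assumptions; use the names you actually ssh to.
import subprocess

hosts = [
    "login-phoenix.pace.gatech.edu",  # assumed FQDN for login-phoenix
    "login-hive.pace.gatech.edu",     # assumed FQDN for login-hive
]
for host in hosts:
    # ssh-keygen -R deletes matching entries from ~/.ssh/known_hosts
    subprocess.run(["ssh-keygen", "-R", host], check=False)
```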
      

ITEMS NOT REQUIRING USER ACTION: 

  • [Phoenix, Hive, Firebird, ICE] Upgrade Slurm to 24.11.10 
  • [all] DataBank will perform cooling tower cleaning requiring all machines in the research hall to be powered off 
  • [all] Upgrade border firewall hardware  
  • [Phoenix, ICE] Upgrade IB (InfiniBand) switch firmware 
  • [Phoenix, Hive] Move Globus endpoints to new network to improve performance 
  • [ICE] Enable self-service container builds 
  • [Phoenix] Upgrade all storage servers to latest version to support performance improvements, covering scratch and project (coda1) storage 
  • [Firebird] Upgrades to underlying storage servers to improve functionality 

WHY IS IT HAPPENING?  

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.  

WHO IS AFFECTED?  

All users across all PACE clusters.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.

Thank you,  

-The PACE Team 

December 16, 2024

[Resolved] Phoenix login nodes outage on Dec 5, 2024

Filed under: Uncategorized — Grigori Yourganov @ 2:26 pm

On the morning of December 5, 2024, the RHEL9 login nodes of the Phoenix cluster became unresponsive. The problems started at 4:37 AM, when one login node (out of two) had a memory problem; at 6:27 AM, it crashed. The other login node crashed at 9:37 AM, rendering the RHEL9 environment on Phoenix inaccessible. Both login nodes were restarted at 11:30 AM, which resolved the issue. The jobs that crashed between 4:37 and 11:30 AM have been refunded.

December 10, 2024

 Message about Storage Performance, Reliability, and Future Plans for Phoenix 

Filed under: Uncategorized — Eric Coulter @ 4:45 pm

Executive summary 

PACE recognizes that the increasing frequency of performance issues on the storage system is causing disruptions to your research on the Phoenix cluster. We are striving to mitigate the impact of these events while taking proactive measures to improve the reliability of our systems for the future. To this end, we are introducing new storage technology, and prioritizing migration of data from the existing project storage to the new system over the next year. We are currently working towards finalizing a seamless migration plan. Once the plan is ready, around late spring, we will follow up with detailed information regarding the timeline and any potential workflow impacts. Our goal is to minimize disruption and ensure that everyone is aligned on key milestones. There will be no changes to the unit price until the new system is fully implemented, data migration is complete, the existing system is reconfigured, and we have sufficient data usage metrics to determine any necessary price adjustments. We estimate this will be no earlier than the end of 2025. We believe this to be the fastest path towards a stable and effective storage solution that can cater to the varied storage needs of our user community. Please find more details below. 

The Phoenix cluster at PACE hosts two major storage systems: scratch and project. Scratch is the temporary file system for storing files used during job execution; it is cleaned up once a month, with files older than 60 days marked for deletion. Project is a long-term file system; it holds 2.3 PB of data and about 1.8 billion files. Aside from environmental factors such as chilled water outages, these two storage systems have been significant contributors to downtime and degraded performance of the Phoenix cluster. Based on our analysis, storage failure or severe degradation accounts for 47% of unplanned downtime so far in calendar year 2024. Addressing this concern has been our primary focus over the past six months. This message aims to share our progress and plans for the next 12 months in this regard.
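For example, here is a minimal sketch that lists files older than 60 days under a scratch directory, i.e. candidates for the next monthly cleanup. The scratch path is an assumption; point it at your own scratch space.

```python
# Minimal sketch: list files older than 60 days under a scratch directory,
# i.e. candidates for the monthly scratch cleanup. The path is an assumption;
# point it at your own scratch space.
import os
import time

SCRATCH = os.path.expanduser("~/scratch")   # assumed per-user scratch location
CUTOFF = time.time() - 60 * 24 * 3600       # 60 days ago, in seconds

for root, _dirs, files in os.walk(SCRATCH):
    for name in files:
        path = os.path.join(root, name)
        try:
            if os.path.getmtime(path) < CUTOFF:
                print(path)
        except OSError:
            pass  # the file may disappear while scanning; skip it
```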

Scratch Space 

The scratch space is supported by a DDN 400NVX2 unit, which includes a mix of flash drives (NVMe) and spinning disks. The system was offlined during August maintenance for a major software upgrade performed by the vendor to improve its stability. Furthermore, software issues and instability on this unit have required us to turn off the use of flash drives for hot pools, which has impacted performance. To address this, a second software upgrade is planned for January 2025 during our scheduled maintenance period, at which point we will configure the flash space to use Progressive File Layout (PFL), keeping small files in flash and progressively increasing the number of stripes across all devices for improved performance.

Project Space 

Project space is supported by a DDN18K system purchased in 2020. This unit primarily uses spinning disks and is nearing its end of life. Over the past two years, we have experienced an increase in issues related to software defects and disk failures. While the system has built-in redundancies to tolerate multiple disk failures, the disk rebuild process, coupled with an increasing number of disk-intensive research jobs on Phoenix, is negatively impacting performance.

We have adopted a two-pronged strategy to address the project storage:  

  1. Perform software and limited hardware upgrades. During the August maintenance period, the hardware subsystem supporting the metadata functionality of the Lustre filesystem was replaced with a new dedicated unit. Due to its complexity, this operation required an additional day of work. The objectives of this upgrade were to a) improve the performance of the metadata functionality and b) provide an upgrade path for future software releases. During the January 2025 maintenance period, the vendor will be able to perform a major software upgrade to standardize Lustre 12.4 on all the storage appliances. This will increase the stability and performance of the project storage while simplifying its management.  
  2. Consult with other research computing sites (including the Texas Advanced Computing Center) to invest in a new storage system to replace or complement our DDN 18K unit. 

Based on feedback from other research computing sites, we made an initial investment in an all-flash storage system from VAST Data. Because it is an all-flash system with a disaggregated architecture, we expect significantly better uptime and performance from this new system. In particular, the VAST system does not require downtime to perform major code upgrades. The vendor has installed the system, and we are in the process of bringing the unit into production to host data, initially in support of improvements to the DDN18K.

Our plan over the next 12 months includes: 

  1. Migrate all data from the current DDN18K project space to the new VAST storage space to support these improvements. Because of the storage outages and performance issues affecting our user community, we want to clarify the following for the migration process over the next 12 months, or for as long as necessary to complete the DDN18K improvements: 
  • All storage credit balances, including the free tier, will be equivalent to those on the existing system. 
  • The unit rate of $5.67 per TB per month for the paid tier will remain the same for both the current project space and the new VAST storage system. 
  2. Work with our vendor to reconfigure the DDN18K appliance to efficiently use all available SSD/NVMe drive space and recreate storage pools to remove performance bottlenecks during disk failure. This will require a complete reformatting of the storage space. 
  3. Gather metrics on storage efficiencies (e.g., capacity reduction after data de-duplication and compression) and operational efficiencies so we can more accurately calculate the rate for VAST storage. 
  4. Leverage the VAST Data analytics to help users archive older data to lower-cost storage options such as CEDAR.  
  5. Publish a comparative analysis of functionality, performance, and resiliency between the different storage services provided by PACE to help users decide which storage service(s) to use based on their type of research and data. 

In the long term, we expect the new VAST Data storage system to be offered as a separate service with its own storage rate. This rate will likely be higher than the current $5.67 per TB per month for the DDN18K system. However, we cannot determine the final rate until we better understand how our data usage and efficiencies are managed by the new system. 

Storage credits that have already been purchased, or are purchased during this transition, will retain their value on the DDN18K system (in terabyte months). Alternatively, they can be converted to storage credits on the VAST system at a ratio to be determined once the transition is complete. At that point, users will have the option to stay on VAST or move back to the reconfigured DDN18K, or choose a different, potentially cheaper storage option. 
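As a simple illustration of the current paid-tier rate in terabyte-months (the figures below are examples only, not actual allocations or quotes):

```python
# Simple illustration of the current paid-tier rate of $5.67 per TB per month.
# The figures below are examples, not actual allocations or quotes.
RATE_PER_TB_MONTH = 5.67

def storage_cost(terabytes: float, months: int) -> float:
    """Dollar cost of holding `terabytes` of paid-tier project storage for `months`."""
    return terabytes * months * RATE_PER_TB_MONTH

print(f"${storage_cost(10, 12):,.2f}")  # 10 TB for 12 months -> $680.40
```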

Our goal is to enhance the reliability of the storage system in PACE while introducing new technologies to meet the diverse needs and budgets of the Georgia Tech Research community. Our aim is to develop migration strategies that minimize disruptions to your workflows and offer more storage options tailored to your requirements. To this end, we are cross-evaluating bulk versus individual group migrations, and we will be engaging with different research groups as necessary. We are committed to providing regular updates during this process. 

Thank you for your understanding and support during this project. If you have any questions or concerns, please feel free to reach out to us at pace-support@oit.gatech.edu. 

October 31, 2024

[Resolved] Firebird ASDL Outage

Filed under: Uncategorized — Grigori Yourganov @ 4:58 pm

On Oct 30, 2024, at 9:20 PM, there was a drive failure on the Firebird ASDL servers (on the ZFS pool dedicated to the ASDL project). The ASDL login nodes were offlined. Several jobs failed, and no new jobs were accepted after 10:09 AM on Oct 31. The NFS server was restarted and tested, and the ASDL nodes were back online at 12:38 PM on Oct 31.

New GPUs for Phoenix, V100s being Replaced 

Filed under: Uncategorized — Michael Weiner @ 9:53 am

[Additional Message 11/7/24]

As we prepare to remove 12 of the V100 servers from Phoenix next week in preparation for the arrival of new GPU nodes in December, we would like to inform you of another set of new GPUs available on the cluster through the embers backfill QOS.

There are 8 nodes, each with 8 L40S GPUs, for a total of 64 GPUs. They have been available exclusively on embers (due to the ownership of this equipment) since late September in the Phoenix RHEL9 environment.

Visit our Phoenix Slurm guide on GPU requests to learn how to request them. Be sure to include a request for the embers QOS when requesting L40S architecture, at least until the additional L40S nodes for general use become available in December on inferno. You must make the request from the RHEL9 environment. Access via Phoenix OnDemand is not yet available.
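As an illustration only, here is a minimal job submission sketch; the gres type string ("L40S") and any account or partition defaults are assumptions, so confirm the exact syntax in the Phoenix Slurm guide before use.

```python
# Minimal sketch: compose and submit a batch job that asks for one L40S GPU via
# the embers QOS. The gres type string ("L40S") and any account/partition
# defaults are assumptions; confirm the exact syntax in the Phoenix Slurm guide.
import subprocess
import tempfile

job_script = """#!/bin/bash
#SBATCH --job-name=l40s-test
#SBATCH --qos=embers
#SBATCH --gres=gpu:L40S:1
#SBATCH --nodes=1
#SBATCH --time=00:30:00

nvidia-smi
"""

with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as handle:
    handle.write(job_script)
    script_path = handle.name

subprocess.run(["sbatch", script_path], check=True)
```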

Please contact pace-support@oit.gatech.edu with any questions.

[Original Post 10/31/24]

We’re happy to announce that there will be 6 new H200 machines coming to Phoenix for general usage, with 8x NVIDIA H200 GPUs each, along with 2 L40S machines, each with 8x NVIDIA L40S GPUs. These will be available on the RHEL 9 operating system on Phoenix, which is required to support the new hardware. 

12 of the existing V100 servers will be REMOVED from the Phoenix RHEL7 environment to make room for the new L40S hardware, as they have reached end-of-life for vendor support. The overall impact will be a large increase in both the number and power of GPUs available on Phoenix: 24 V100 GPUs will be replaced with 16 L40S and 48 H200 GPUs. 
 
This change will begin on Nov. 11th, when the V100 machines will be removed and we will begin installing the new servers, which we hope to release by December 6th.
 
The new machines will be available via both the Inferno QoS and Embers on RHEL9. Jobs using the new H200 machines will be charged at a rate of $0.673 per GPU Hour ($1.4571 for GTRI), matching the current H100 rate. The rate for the new L40S GPUs will be shared prior to their release, as we’re working through approvals. 
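For a rough sense of scale at the quoted rate (the job size and duration below are example numbers only):

```python
# Rough illustration of H200 job cost at the quoted rate of $0.673 per GPU hour.
# The job size and duration below are example numbers only.
H200_RATE_PER_GPU_HOUR = 0.673

def gpu_job_cost(num_gpus: int, hours: float, rate: float = H200_RATE_PER_GPU_HOUR) -> float:
    """Dollar cost of a job using `num_gpus` GPUs for `hours` at the given rate."""
    return num_gpus * hours * rate

print(f"${gpu_job_cost(8, 10):.2f}")  # 8 H200 GPUs for 10 hours -> $53.84
```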

October 24, 2024

Phoenix Project Storage Slowness

Filed under: Uncategorized — Eric Coulter @ 10:57 am

WHAT’S HAPPENING? 

Multiple hard disks failed in a single RAID pool, one of several that make up the filesystem underlying Phoenix project storage. While the array rebuilds to ensure continued resilience against disk failures, read/write performance on the device may be somewhat slower. 
 
In addition to this, as part of a mitigation for a previous storage issue on 9/30, we have temporarily re-configured our storage to rely fully on spinning disk rather than caching parts of files on solid-state drives, which will cause a general decrease in access speeds until we are able to transition back to the prior configuration.  

WHEN IS IT HAPPENING? 

The failed drives were replaced on Oct 23rd, and the pool rebuild will continue automatically. We will post an update when the process is complete. 

WHY IS IT HAPPENING? 

Hard disk failures are a regular part of operating storage at this scale; the devices we support can weather them without data loss. However, striped data must be rewritten onto the replacement disks, which leads to a slight performance slowdown. In this case, 4 of 64 disks failed in one of the several pools making up the coda1 filesystem. We have configured the system to avoid writing new files to that pool in the meantime. These particular disks were in service for over 5 years before failing.  
 
We also had to disable our use of the Lustre Progressive File Layout (PFL) option on this device, which splits files between solid-state and spinning disk to provide faster access, because the solid-state drive pool became completely full on 9/30, causing a temporary outage. We are working to migrate data from the solid-state pool to spinning disk, but this process takes time and depends on the underlying drive pools being fully rebuilt, among other things.  

WHO IS AFFECTED? 

Phoenix users may experience slower performance of Phoenix Project storage during the rebuild, and additionally until we are able to re-enable PFL. 

WHAT DO YOU NEED TO DO? 

Please bear with us and keep an eye out for updates. 

WHO SHOULD YOU CONTACT FOR QUESTIONS? 

For any questions, please contact PACE at pace-support@oit.gatech.edu.

October 23, 2024

Message concerning September-October 2024 Datacenter Outages

Filed under: Uncategorized — Eric Coulter @ 4:00 pm

Dear PACE Community,

Due to a highly unusual series of datacenter-related outages this Fall, we would like to share details about the sequence of events and causes that have impacted the availability of PACE resources, and about the PACE team’s continued work to provide a stable research computing environment to the GT community. We fully understand the significant negative impact these outages have on the research community, including the inability to submit research papers, missed deadlines, and the loss of research time. While this does not make up for the full impact of these outages, we always work to ensure that no paid accounts are charged for computational jobs that fail due to outages, and we have temporarily doubled free-tier account credits for October 2024 in a small effort to alleviate the pain of lost time. 

While many of the details here were communicated in the moment, a unified picture may help clear up certain misconceptions; unfortunately, prompt communication was required before a full understanding of the situation could be gained. 

Background: The CODA datacenter is the sole hosting facility for PACE resources. The datacenter is owned and operated by Databank. PACE resources are spread across two datacenter areas: 

  • The Enterprise Hall (500 kW of provisioned power), which has N+1 redundant cooling, networking, and power (battery-based UPS + generator), where PACE and OIT host critical infrastructure and storage systems. This enables us to maintain access to login nodes and storage during most system and service outages impacting the datacenter. 
  • The Research Hall (2 MW), which was designed without redundant cooling and relies on a combination of flywheel UPS (<1 minute of runtime) and the Georgia Power Microgrid in the case of an electrical utility outage (https://research.gatech.edu/georgia-tech-celebrates-opening-new-energy-project-midtown-atlanta). This design choice allowed for significantly more research compute capacity and performance and greatly reduced facilities and operational costs. The design and operational model included elements to minimize single points of failure and to support faster recovery times. 

For the calendar year 2024, the following power and cooling datacenter outages have impacted PACE services: 

  • 9/3/2024: On August 27th, Databank identified a failed chilled water flow sensor on the High-Temp Chiller loop providing cooling to the research hall. Databank requested downtime before the next PACE maintenance period (January 2025) for emergency replacement.  
  • 9/8/2024: On September 8th, the High-Temp Chiller system providing cooling to the research hall failed due to the failure of the condenser pump variable-frequency drive (VFD). Due to supply chain constraints, a unit was not available as part of the on-site inventory, and a different brand/model VFD had to be sourced and installed. During the repair, Databank identified that the VFD failure had damaged the condenser pump internal bearing. The condenser pump was replaced with the on-site spare. 
  • 10/1/2024: On October 1st, the data center experienced a short loss of utility power, which impacted the High-Temperature Chiller system providing cooling to the research hall. The new condenser pump variable frequency drive was unable to properly auto-reset because of a previously unknown parameter. Note: during this incident, PACE only shut off idle nodes and prevented new jobs from being launched. No running jobs were impacted. 
  • 10/2/2024: On October 2nd, at approximately 11:33am, the datacenter experienced a rapid sequence of utility power loss (8 events in less than two minutes). The Research Hall electrical load was transferred to the UPS/Flywheels for backup power. However, the load was unable to be transferred back to the microgrid as intended due to a network breaker that tripped in the electrical vault during the October 1st event. Only Georgia Power can reset this breaker. As a result, power was lost entirely to the Research Hall once the flywheels were depleted. 

What are we doing to prevent these failures in the future?  

OIT, in partnership with the GT Real Estate Office, has engaged Databank to review outages over the past few years. Specifically, we are: 

  • Evaluating the 2017-2018 datacenter design requirements for the research hall, and how these requirements align with the needs for a reliable research computing infrastructure. 
  • Reviewing, evaluating, and improving operational procedures between DataBank, Georgia Power, and Georgia Tech. 
  • Reviewing and evaluating the list of critical spare parts maintained on-site by DataBank. 
  • Engaging stakeholders to review reliability and resilience requirements for research computing. 
  • Exploring potential options to improve the cooling and power redundancy of the research hall. 
  • Analyzing the feasibility and pros and cons of hosting a small portion of the PACE computational capabilities in the high-availability enterprise side or leveraging cloud resources during outages. 

Long term, we plan to explore the use of additional datacenter locations to host research computing resources. 

Please feel free to reach out with any questions or concerns,

Didier Contis
Executive Director of Academic Technology, Innovation, Research Computing for the Office of Information Technology

