
Archive for category Uncategorized

FAQs after user migration to the Phoenix cluster in CODA

Posted on Tuesday, 17 November, 2020

Dear PACE research community,

After completing our second wave of user migration last week, we received several common questions about the new cost model announced on September 29 and about the new Phoenix cluster in general. We address them below for the benefit of the community:

  • The Phoenix scheduler has been redesigned. Unlike previous PACE-managed clusters, there are only two queues on the Phoenix cluster: inferno and embers. To submit a job, you will need to specify a charge account (i.e., a MAM account) that was or will be provided to you in the “welcome email” after your migration to the Phoenix cluster in Coda (see the sample batch script after this list). You may have access to multiple MAM accounts; for example, a PI and their user group may have access to an Institute-sponsored account (GT-gburdell3, $68/mo), an account for a refreshed PI cluster (e.g., GT-gburdell3-CODA20, $43,011.32), or an account for a recent FY20 purchase (e.g., GT-gburdell3-FY20Phase2, $17,860.75). For further details on submitting jobs on the Phoenix cluster, please refer to the documentation at http://docs.pace.gatech.edu/phoenix_cluster/submit_jobs_phnx/ .
  • Access to departmental PACE resources (e.g., CoC, CEE, Biology, …) has been restructured based on departmental preferences. As with the rest of PACE, access is now managed at the group level, with each group owned by a specific PI, although the distribution of available departmental credits may vary from one department to another.
  • We are in the process of providing PIs further details regarding their cluster(s) from the Rich datacenter that were refreshed and converted into credits/MAM accounts under the new cost model. Additionally, PIs who participated in the FY20 purchases will receive further details about the conversion from purchased equipment to credits/MAM accounts.
  • As mentioned in our initial announcement on September 29, users will not be charged for their usage of compute resources until at least January 1, 2021. Until that time, all jobs run on Phoenix are free while we work to migrate all users to the cluster and give them time to become familiar with the new environment. Please note that your credit balance will decline as you run jobs, but we will reset your totals before billing begins.
  • All of your data has been migrated to Phoenix, but the directory structure has changed. Your data now resides in your project storage under a different directory name, and symbolic links pointing to the old locations are broken as a result. Please visit our documentation for information on locating your group’s shared directory and on recreating symbolic links at http://docs.pace.gatech.edu/phoenix_cluster/where_is_my_rich_data/ (a short example follows this list). For further details, please refer to the storage documentation at http://docs.pace.gatech.edu/phoenix_cluster/storage_phnx/ .
  • The pace-vnc-job command is functional; however, you will need to set up VNC for the Phoenix cluster. To set up VNC, remove the ~/.vnc directory, then run vncpasswd to set a new VNC password for the Phoenix cluster. After this, you will be able to submit pace-vnc-job, passing the additional MAM account that the command requires (see the sketch after this list).
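
For reference, here is a minimal sketch of a Phoenix batch script that specifies a charge account and a queue. It assumes the Torque/Moab-style directives described in the Phoenix job-submission documentation linked above; the account name, resource requests, module, and workload script are placeholders modeled on the examples in this FAQ, so substitute your own values.

    #!/bin/bash
    #PBS -N example-job          # job name
    #PBS -A GT-gburdell3         # MAM charge account from your welcome email (placeholder)
    #PBS -q inferno              # Phoenix queue: inferno or embers
    #PBS -l nodes=1:ppn=4        # example resource request: 1 node, 4 cores
    #PBS -l walltime=1:00:00     # 1-hour wall-clock limit
    #PBS -l pmem=4gb             # example per-core memory request

    cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
    module load anaconda3        # example module; load whatever your workflow needs
    python my_analysis.py        # placeholder for your actual workload

Assuming the same Torque semantics, the script would be submitted with qsub (e.g., qsub phoenix_job.pbs), and the job’s usage would be charged against the account named on the -A line.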
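
Similarly, recreating a broken symbolic link is a one-line operation once you have located your group’s new project-storage path. The paths below are placeholders only; your actual directory is described in the “where is my Rich data” documentation above.

    rm ~/shared_data                                              # remove the stale link that still points at the old Rich location
    ln -s /storage/coda1/p-gburdell3/0/shared_data ~/shared_data  # placeholder path: use your group's actual project-storage directory
    ls -l ~/shared_data                                           # confirm the new link resolves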
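
Finally, the VNC reset described in the last item amounts to the following commands on a Phoenix login node; the option for passing your MAM charge account to pace-vnc-job is described in the Phoenix documentation and is not repeated here.

    rm -rf ~/.vnc        # remove the old VNC configuration carried over from Rich
    vncpasswd            # set a new VNC password for the Phoenix cluster
    # then resubmit your VNC session with pace-vnc-job, supplying your MAM
    # charge account as described in the Phoenix documentation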

If you have any questions, concerns, or comments about your recent or upcoming migration to Phoenix or about the new cost model, please direct them to pace-support@oit.gatech.edu.

Best,

The PACE Team

[RESOLVED] PACE-archive storage – scheduled migration – November 17

Posted on Friday, 13 November, 2020

[Update – November 18, 10:08am] 

We are following up to inform you that the migration of pace-archive storage from the Rich datacenter to BCDC is complete. The service is fully operational. You may now access your archived data via the Globus PACE Phoenix endpoint if you have migrated to the Phoenix cluster, or via the PACE Internal endpoint if you are still in the Rich datacenter.

If you have any questions, please don’t hesitate to contact us at pace-support@oit.gatech.edu .

Thank you for your patience during this brief outage while we migrated pace-archive.

 

[Update – November 17, 7:01am] 

At this time, the migration of pace-archive storage has started. During the migration, you will not have access to pace-archive. The migration is anticipated to last one day. We will keep you posted on the progress of the archive storage migration, and you may check our blog post for further updates: http://blog.pace.gatech.edu/?p=6990

If you have any questions, please don’t hesitate to contact us at pace-support@oit.gatech.edu.

Thank you for your attention to this notice.

 

[Update – November 16, 8:08pm] 

Dear PACE Users,

This is a reminder that the migration of pace-archive storage will begin tomorrow as scheduled. The migration is anticipated to last one day. Please note that during the archive storage migration from Rich to BCDC, you will not have access to pace-archive. Please make the necessary arrangements to access your data prior to this scheduled outage so that the impact to your research is minimized.

What is happening: Tomorrow, PACE users will not be able to access pace-archive storage during the scheduled migration of the storage servers from the Rich datacenter to BCDC. The PACE team plans to restore access to archive storage by November 18, 2020. During this outage, users will not be able to access their data; for example, they will not be able to use the Globus pace-internal endpoint to access, retrieve, or upload data to or from pace-archive.

Who does this message impact and what should you do: This outage impacts all PACE users who have access to pace-archive storage. Please use this notice to plan your data access around this scheduled outage so that the impact to your research is minimal.

What will PACE do: We will keep users updated on the progress of the archive storage migration, and you may check this blog post for further updates.

If you have any questions, please don’t hesitate to contact us at pace-support@oit.gatech.edu.

Thank you for your attention to this notice.

 

[Original Post – November 13, 7:20pm]

Dear PACE Users,

We are reaching out to inform you about the upcoming migration of the pace-archive storage servers, scheduled for November 17. The migration is anticipated to last one day. During the archive storage migration from the Rich datacenter to BCDC, you will not have access to pace-archive. Please make the necessary arrangements to access your data prior to this scheduled outage so that the impact to your research is minimized.

What is happening: On November 17, 2020, PACE users will not be able to access pace-archive storage during the scheduled migration of the storage servers from the Rich datacenter to BCDC. The PACE team plans to restore access to archive storage by November 18, 2020. During this outage, users will not be able to access their data; for example, they will not be able to use the Globus pace-internal endpoint to access, retrieve, or upload data to or from pace-archive.
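
If you need to copy data off pace-archive before the migration begins, the transfer can be driven from the Globus CLI as well as from the web interface. The snippet below is only a sketch: the endpoint UUIDs and paths are placeholders, and you will need to look up the actual pace-internal endpoint (and your destination endpoint) in the Globus web app or via the search command.

    # locate the source endpoint ID (the name searched here is the one used in this notice)
    globus endpoint search "pace-internal"

    # placeholders: substitute the real endpoint UUIDs and paths
    SRC="<pace-internal-endpoint-uuid>"
    DST="<destination-endpoint-uuid>"
    globus transfer --recursive "$SRC:/path/to/archive/data" "$DST:/path/to/copy"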

Who does this message impact and what should you do: This outage impacts all PACE users who have access to pace-archive storage. Please use this notice to plan your data access around this scheduled outage so that the impact to your research is minimal.

What will PACE do: We will keep users updated on the progress of the archive storage migration, and you may check this blog post for further updates.

If you have any questions, please don’t hesitate to contact us at pace-support@oit.gatech.edu.

Thank you for your attention to this notice.

December 1, 2020 – PACE User Access to the Rich Datacenter Will Be Disabled (This does not apply to users accessing CUI resources in Rich)

Posted on Monday, 9 November, 2020

Dear PACE Users,

In the past couple of months, we have reached out to research groups regarding the required user migrations from the Rich datacenter to CODA. At this time we are actively migrating users into CODA, and we have another migration of research groups scheduled for December 1. Out of an abundance of caution: if you have not received an email about your migration to the CODA datacenter, please contact PACE at your earliest convenience.

What is happening: On December 1, the remaining (non-CUI) PACE users in the Rich datacenter will have their access disabled as part of the final migration to the CODA datacenter, which begins that day. Please note that this does not apply to CUI resources and their user migrations at this time.

Who does this message impact, and what should I do: If you have NOT already migrated to CODA, are NOT in the process of migrating to CODA, and have NOT received an email from a PACE research scientist about your planned migration to CODA, then please contact pace-support@oit.gatech.edu so that we may arrange your migration and prevent interruption to your research as we disable access to the Rich datacenter.

This message is being sent out of an abundance of caution to ensure that no user is left behind in the Rich datacenter as we disable access to all non-CUI resources there on December 1, 2020. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Best,

The PACE Team

 

 

 

[Resolved] Phoenix Storage (Lustre) slowness that’s impacting data and scratch

Posted on Monday, 2 November, 2020

[Update – 11/03/2020 – 11:01am]

As of late last night, the slowness experienced on Phoenix storage was resolved.   Thank you for your patience and understanding while we worked to address this issue.

What happened and what we have done: In response to user reports of slowness in accessing files on Phoenix’s Lustre storage, the PACE team was able to replicate the issue during our investigation, and through troubleshooting that included reboots of the Lustre Metadata Server (MDS), we resolved the slowness. Phoenix Lustre storage is stable at this time, and there was no loss of user data during this incident.

What we will continue to do: PACE will continue to monitor Phoenix storage out of an abundance of caution, and we will provide updates as needed.

Again, this issue did not impact any other resources in the Coda or Rich datacenters.

Thank you for your attention to this message, and we apologize for this inconvenience.

 

[Original Post – 11/02/2020 – 1:03pm]

Dear PACE Users,

PACE is aware of the slowness experienced on Phoenix storage. At this time, we are able to replicate the issue, and we are investigating its root cause.

What is happening and what we have done: We have received a couple of reports from users about slowness in accessing files in the ‘data’ and ‘scratch’ directories on Phoenix’s Lustre storage. Some users are experiencing slowness in accessing their files, and running commands such as ‘ls’ or opening a file with ‘vim’ may be very slow. During our investigation, the PACE team was able to replicate the issue, and we are investigating the root cause of the storage slowness.

What we will continue to do: This is an active situation, and we will follow up with updates as they become available.

This issue does not impact any other resources in the Coda or Rich datacenters.

Thank you for your attention to this message, and we apologize for this inconvenience.

The PACE Team

Hive Scratch Storage Update

Posted on Tuesday, 27 October, 2020

We would like to remind you of the scratch storage policy on Hive. Scratch is designed for temporary storage and is never backed up. Each week, files that have not been modified for more than 60 days are automatically deleted from your scratch directory. As part of Hive’s start-up, regular cleanup of scratch has now been implemented. Each week, users with files scheduled for deletion receive a warning email listing the files to be deleted in the coming week, along with additional information. Those of you who use the main PACE system are already familiar with this workflow.
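
If you would like to check ahead of the warning emails, a standard find command will list the scratch files that have gone unmodified for more than 60 days and are therefore candidates for deletion. This is only a convenience sketch, not a PACE tool, and it assumes the usual ~/scratch link in your Hive home directory.

    # list scratch files not modified in the last 60 days, with their modification dates
    find ~/scratch -type f -mtime +60 -printf '%TY-%Tm-%Td  %p\n' | sort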

Some of you received such an email yesterday. As always, if you need additional time to migrate valuable data off of scratch, please respond to the email as directed to request a delay.

Please contact us at pace-support@oit.gatech.edu with any questions about how to manage your data stored on Hive.

CoE HPC Cost Model Listening Session

Posted on Monday, 26 October, 2020

Over the past few months, a team from the EVPR, OIT, EVP-A&F, and GTRC has been working with Institute leadership to develop a more sustainable and flexible way to support research cyberinfrastructure. This new model is described in more detail below and will affect researchers who leverage PACE services. The model enjoys strong support, but it is not yet fully approved.  We are communicating at this stage because we wanted you to be aware of the upcoming changes and we welcome your feedback. Please submit comments to the PACE Team <pace-support@oit.gatech.edu> or to Lew Lefton <lew.lefton@gatech.edu>. This listening session is organized for the College of Engineering.

Date:           11/02/2020, 4:00pm – 5:00pm

Location:   BlueJeans (link provided via email)

Host:           EVPR/PACE

In a nutshell, PACE will transition from a service that purchases nodes with equipment funds to a service that operates as a Cost Center. This means that major research cyberinfrastructure (including compute and storage services) will be treated like other core facilities. This new model will begin as the transition to the new equipment in the CODA data center happens. We recognize that this represents a shift in how we think about research computing, but, as shown below, the data indicates that the long-term benefits are worth the change. When researchers pay only for actual consumption – similar to commercial cloud offerings from AWS, Azure, and GCP – there are several advantages:

  • Researchers have more flexibility to leverage new hardware releases instead of being restricted to hardware purchased at a specific point in time.
  • The PACE team can use capacity and usage planning to make compute cycles available to faculty in days or weeks, as opposed to waiting months due to procurement bottlenecks.
  • We have secured an Indirect Cost Waiver on both PACE services and commercial cloud offerings for two years to allow us to collect data on the model and see how it is working.
  • Note that a similar consumption model has been used successfully at other institutions such as Univ. Washington and UCSD, and this approach is also being developed by key sponsors (e.g. NSF’s cloudbank.org).
  • A free tier provides any PI the equivalent of 10,000 CPU-hours on a 192GB compute node and 1 TB of project storage at no cost.

For further details on the new cost model, please visit our web page.

CoS HPC cost model listening session

Posted on Tuesday, 13 October, 2020

Over the past few months, a team from the EVPR, OIT, EVP-A&F, and GTRC has been working with Institute leadership to develop a more sustainable and flexible way to support research cyberinfrastructure. This new model is described in more detail below and will affect researchers who leverage PACE services. The model enjoys strong support, but it is not yet fully approved.  We are communicating at this stage because we wanted you to be aware of the upcoming changes and we welcome your feedback. Please submit comments to the PACE Team <pace-support@oit.gatech.edu> or to Lew Lefton <lew.lefton@gatech.edu>. This listening session is organized for the College of Sciences.

Date:           10/13/2020, 10:00am – 11:00am

Location:   BlueJeans (link provided via email)

Host:           EVPR/PACE

In a nutshell, PACE will transition from a service that purchases nodes with equipment funds to a service that operates as a Cost Center. This means that major research cyberinfrastructure (including compute and storage services) will be treated like other core facilities. This new model will begin as the transition to the new equipment in the CODA data center happens. We recognize that this represents a shift in how we think about research computing, but, as shown below, the data indicates that the long-term benefits are worth the change. When researchers pay only for actual consumption – similar to commercial cloud offerings from AWS, Azure, and GCP – there are several advantages:

  • Researchers have more flexibility to leverage new hardware releases instead of being restricted to hardware purchased at a specific point in time.
  • The PACE team can use capacity and usage planning to make compute cycles available to faculty in days or weeks, as opposed to waiting months due to procurement bottlenecks.
  • We have secured an Indirect Cost Waiver on both PACE services and commercial cloud offerings for two years to allow us to collect data on the model and see how it is working.
  • Note that a similar consumption model has been used successfully at other institutions such as Univ. Washington and UCSD, and this approach is also being developed by key sponsors (e.g. NSF’s cloudbank.org).
  • A free tier provides any PI the equivalent of 10,000 CPU-hours on a 192GB compute node and 1 TB of project storage at no cost.

For further details on the new cost model, please visit our web page.

[Resolved] Power Outage at Rich Datacenter

Posted on Tuesday, 6 October, 2020

[Update – 10/07/2020 – 8:02]

After nearly 28 hours since the initial power outage in the Rich datacenter, which caused further complications and failures with networks and systems, we are pleased to report that we have restored the PACE resources in the Rich datacenter and released user jobs. We understand the impact this has had on your research, and we are very grateful for your patience and understanding as we worked through this emergency. During this outage, the PACE clusters in the Coda datacenter (Hive, Testflight-Coda, CoC-ICE, PACE-ICE, and Phoenix) were not impacted.

What we have done: Since the network repairs were completed last night, we have been closely monitoring the network fabric, and we have gradually brought the infrastructure back up. We conducted application and fabric testing across the systems to ensure they are operational, and we addressed problematic nodes and scheduler issues. Power and fabric are stable. We have identified the users whose jobs were interrupted by yesterday's power outage, and we will reach out to them directly. We have released the user jobs that were queued prior to the power outage, when we paused the schedulers, and jobs are currently running.

What we will continue to do: The PACE team will continue to monitor the systems, and we will report as needed. A few straggler nodes will remain offline, and we will work to bring them back up in the coming days.

Please don't hesitate to contact us at pace-support@oit.gatech.edu if you have any questions or if you encounter any issues on the clusters. Thank you again for your patience.

[Update – 10/06/2020 – 11:20]

We are following up to update you on the current status of the Rich datacenter. After a tireless evening, the PACE team, in collaboration with OIT, successfully restored the network at approximately 11:00pm. We replaced a failed management module on the core InfiniBand switch, and the switch is now operational. Preliminary spot checks indicate that the fabric is stable. Out of an abundance of caution, we will monitor the network overnight. In the morning, we aim to conduct additional testing and bring the compute resources in the Rich datacenter back online, followed by releasing the user jobs that are currently paused. Power remains stable after the repairs, and the UPS is back at nearly full charge.

As always, thank you for your patience and understanding during this outage as we know how critical these resources are to your research.   

If you have any questions or concerns, please do not hesitate to contact us at pace-support@oit.gatech.edu.

[Update – 10/06/2020 – 6:30]

This is a brief update on the current power outage. Power has been restored in the Rich datacenter, and recovery is underway. Some Ethernet network switches failed, and replacements and reconfigurations are underway to restore services. Our core InfiniBand switch has not yet restarted. We will continue to update you as we have more information; for the most up-to-date information, please check the OIT status page and this blog.

Again, this emergency work does not impact any of the resources in the CODA datacenter.

Thank you for your continued patience and understanding as we work through this emergency.

[Original Post – 10/06/2020 – 4:54] 
We have a power outage in a section of campus that includes the Rich datacenter’s 133 computer room. We are urgently shutting down the schedulers and remaining servers in Rich 133. Storage and login nodes in Rich are currently on generator power and will remain safe.
What is happening and what we have done: At 3:45pm, a problem with the campus power distribution (not Georgia Power) caused an outage, and at 4:05pm the Rich 133 UPS went out. Power to the chillers and to two-thirds of the computer room in the Rich datacenter is out. Facilities is on site investigating the situation, and a high-voltage contractor is en route. We have initiated an urgent shutdown of the schedulers and remaining servers in the Rich datacenter’s 133 computer room. Storage and login nodes are running on generators, but most running user jobs will have been interrupted by this power outage.

What we will continue to do: This is an active situation, and we will follow up with updates as they become available. For the most up-to-date information, please check the OIT status page and this blog.

This emergency work does not impact any of the resources in the CODA datacenter.

Thank you for your attention to this urgent message, and we apologize for this inconvenience.

The PACE Team

[RESOLVED] URGENT – CODA datacenter research hall emergency shutdown

Posted on Monday, 5 October, 2020

[Update – 10/05/2020 8:18]

Thank you for your patience as we worked through this emergency to restore cooling in the CODA datacenter’s Research Hall. At this time, the Hive, COC-ICE, PACE-ICE, Testflight-CODA, and Phoenix clusters are back online, and users’ previously queued jobs have started.

What has happened and what we did: At 4:30pm today, the main chiller for research computing on the Research Hall side of the CODA datacenter failed completely. PACE urgently shut down the compute nodes for the Hive, COC-ICE, PACE-ICE, Testflight-CODA, and Phoenix clusters. Storage and login nodes were not impacted during this outage. Working with DataBank, we were able to restore sufficient cooling using the economizer module, which can handle all cooling in the Research Hall. At 6:30pm, we brought the Hive cluster back online, and since then we have continued to bring back the remaining compute nodes for the COC-ICE, PACE-ICE, Testflight-CODA, and Phoenix clusters while maintaining normal operating temperatures. At about 7:00pm, the vendor arrived and began working on the chiller; no interruption should occur when the repaired chiller is brought back online. Our storage did not experience data loss, but users’ running jobs were interrupted by this emergency shutdown. We encourage users to check on their jobs and resubmit any that were interrupted. Previously queued user jobs are now running on the clusters.

What we will continue to do: The PACE team will continue to monitor the situation and report as needed.

For your reference, we are including the OIT status page and blog post links:

Status page:  https://status.gatech.edu/pages/incident/5be9af0e5638b904c2030699/5f7b9062cb294e04bbe8cbda

Blog post: http://blog.pace.gatech.edu/?p=6931

If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Thank you for your patience and attention to this emergency.

 

[Original Post – 10/05/2020 6:16]

Cooling has failed in the CODA datacenter’s Research Hall. We have initiated and completed an emergency shutdown of all resources in the CODA Research Hall, which includes the Hive, COC-ICE, PACE-ICE, Testflight-CODA, and Phoenix clusters.

What is happening and what we have done: We have completed an emergency shutdown of all the clusters in the CODA datacenter. Research data and cluster headnodes are fine, but all running user jobs will have been interrupted by this outage. At this time, we are using the economizer module to provide some cooling, and we are beginning to bring the Hive cluster back up while closely monitoring temperatures.

What we will continue to do: This is an active situation, and we will follow up with updates as they become available.

Also, please follow the updates on OIT’s status page: https://status.gatech.edu/pages/incident/5be9af0e5638b904c2030699/5f7b9062cb294e04bbe8cbda

Additionally, we are tracking the updates in our blog at: http://blog.pace.gatech.edu/?p=6931

This emergency work does not impact any of the resources in the Rich datacenter.

Thank you for your attention to this urgent message.

 

 

[Resolved] TestFlight-Coda, COC-ICE, and PACE-ICE Schedulers Down

Posted on Saturday, 3 October, 2020

[Update – 10/05/2020 – 10:20am]

PACE completed testing across the resources in the Coda datacenter over the weekend. These tests did not impact the Hive cluster or PACE resources in the Rich datacenter. We have brought the schedulers for the coc-ice, pace-ice, and testflight-coda clusters back online, and users’ queued jobs have resumed.

If you have any questions, please don’t hesitate to contact us at pace-support@oit.gatech.edu.

Thank you again for your patience during this testing.

[Original Post – 10/03/2020 – 1:43pm]

In order to complete preparations for bringing PACE’s new Coda resources on the research cluster into production, we had to urgently take the testflight-coda, coc-ice, and pace-ice schedulers offline on Saturday at about 10:30am; they will remain offline until 8am on Monday. We had a job reservation in place to prevent interruptions to user jobs, and at the time the schedulers were taken offline, no user jobs were running on the system. We apologize for this inconvenience. You can still access the login node over the weekend, but you will receive an error message if you attempt to submit a job. Your files are all accessible via the login node. All jobs queued prior to the offlining of the schedulers will resume on Monday.

Hive and all PACE resources in the Rich datacenter are not affected.

Again, we apologize for the late notice. Please contact us at pace-support@oit.gatech.edu with questions.