[Reopened] Network (InfiniBand Subnet Manager) Issues in Rich

This entry was posted on Friday, 14 August, 2020.

[ Update 8/14/20 7:00 PM ]

After an additional nearly-48-hour outage in the Rich datacenter due to network/InfiniBand issues, we have brought the PACE resources on the affected systems back up and released user jobs. We thank you for your patience and understanding during this unprecedented outage; we understand the significant impact it has continued to have on your research throughout the week. Please note that PACE clusters in the Coda datacenter (Hive and testflight-coda) and CUI clusters in Rich have not been impacted.

While new jobs have not begun over the past two days, already-running jobs have continued. Please check the output of any jobs that are still running; if they are failing or not producing output, please cancel them and resubmit them. Some running user jobs were killed in the process of repairing the network, and those should also be resubmitted to the queue.

In addition to previously reported repairs, we removed a problematic spine module from a network switch this morning and further adjusted connections. This module appeared to be causing intermittent failures when under heavy load.

Currently, our network is running at reduced capacity. We have ordered a spare spine module to replace the part we removed. Today we conducted extensive stress tests of the network and storage, going far beyond the tests run earlier in the week, and the results indicate that the system is healthy. We will continue to monitor the systems for any further network abnormalities.
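To illustrate the kind of sustained, fabric-wide load such testing involves, the sketch below drives repeated all-to-all exchanges among all ranks and verifies the data each rank receives; a marginal link or spine module tends to show up as corruption or a hang under this sort of pressure even when lighter, deterministic checks pass. This is a hypothetical example for illustration only, not PACE's actual test harness, and the message sizes and iteration counts are assumptions.

```c
/* alltoall_stress.c - illustrative all-to-all fabric stress loop.
 * Not PACE's actual test harness; sizes and iteration counts are assumptions.
 * Build: mpicc -O2 alltoall_stress.c -o alltoall_stress
 * Run:   mpirun -np <many ranks spread across nodes> ./alltoall_stress
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 1 << 16;   /* ints sent to each peer per iteration (assumed) */
    const int iters = 1000;      /* sustained load rather than a one-shot probe */
    int *sendbuf = malloc((size_t)size * chunk * sizeof(int));
    int *recvbuf = malloc((size_t)size * chunk * sizeof(int));

    /* Fill every outgoing chunk with the sender's rank so receivers can verify it. */
    for (int p = 0; p < size; p++)
        for (int i = 0; i < chunk; i++)
            sendbuf[(size_t)p * chunk + i] = rank;

    for (int it = 0; it < iters; it++) {
        MPI_Alltoall(sendbuf, chunk, MPI_INT,
                     recvbuf, chunk, MPI_INT, MPI_COMM_WORLD);

        /* Verify: the chunk received from peer p must be filled with the value p. */
        for (int p = 0; p < size; p++)
            for (int i = 0; i < chunk; i++)
                if (recvbuf[(size_t)p * chunk + i] != p) {
                    fprintf(stderr, "rank %d: corrupt data from rank %d at iter %d\n",
                            rank, p, it);
                    MPI_Abort(MPI_COMM_WORLD, 1);
                }
    }

    if (rank == 0) printf("completed %d all-to-all iterations cleanly\n", iters);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

Spreading the ranks across many nodes and racks matters here: it forces traffic through the spine modules of the fabric rather than letting it stay within a single leaf switch.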

Again, thank you for your patience and understanding this week while we addressed one of the most significant outages in the history of PACE.

Please contact us at pace-support@oit.gatech.edu with any questions or if you observe unexpected behavior on the cluster.

[ Update 8/13/20 8:30 PM ]

We continue to work on the network issues impacting the Rich datacenter. We have partitioned the network and adjusted connections in an effort to isolate the problem. As mentioned this morning, we have ordered parts to address potentially problematic switches as we continue systematic troubleshooting. We continue to run tests on the InfiniBand fabric, and we are running an overnight stress test on the network to monitor for recurrence of errors. The schedulers remain paused to prevent further jobs from being launched on the cluster. We will follow up tomorrow with an update on the Rich cluster network.

Thank you for your continued patience and understanding during this outage.

[ Update 8/13/20 10:10 AM ]

Unfortunately, after the nearly-80-hour outage earlier this week, we must report another network outage. We apologize for the inconvenience, as we understand the impact this has on your research. The network/InfiniBand issues in the Rich datacenter began recurring late yesterday evening. We are currently working to resolve them, and we have ordered replacements for the network switch parts that appear problematic. The issue was not detected by our deterministic testing methods and occurred only after restarting user production jobs produced very heavy network utilization. We will provide further updates as more information becomes available. As before, you may experience slowness in accessing storage (home, project, and/or scratch) and/or issues with communication within MPI jobs.

We have paused all the schedulers for clusters in the Rich datacenter that are accessed by the following headnodes/login nodes: login-s, login-d, login7-d, novazohar, gryphon, and testflight-login. This pause prevents additional jobs from starting, but already-running jobs have not been stopped. However, there is a chance they will be killed as we continue working to resolve the network issues.

Please note that this network issue does not impact the Coda datacenter (Hive and testflight-coda) or CUI clusters in the Rich datacenter.

Thank you for your continued patience as we work to resolve this issue.

Please contact us with any questions or concerns at pace-support@oit.gatech.edu.

[ Update 8/12/20 6:20 PM ]

After nearly 80 hours of outage in the Rich datacenter due to network/InfiniBand issues, we have been able to bring up the PACE compute nodes, and user jobs have begun to run again. We thank you for your patience during this period, and we understand the significant impact of this outage on your research this week.

For any user jobs that were killed by the restarts yesterday, please resubmit them to the queue at this time. Please check the output of any recent jobs and resubmit any that did not succeed.

As noted yesterday evening, we carefully brought nodes back into production in small groups to identify issues, and we turned off nodes that we identified as having network difficulties. Our findings point to multiple hardware faults that caused InfiniBand connectivity problems between nodes. We addressed these issues, and after extensive testing we are no longer observing the errors. We will continue to monitor the systems, but please contact us immediately at pace-support@oit.gatech.edu if you notice your job running slowly or failing to produce output.

Please note that we will continue to work on problematic nodes that are currently offline in order to restore compute access to all PACE users, and we will contact affected users as needed.

Again, thank you for your patience and understanding this week while we addressed one of the most impactful outages in the history of PACE.

Please contact us at pace-support@oit.gatech.edu with any questions.

[ Update 8/12/20 12:30 AM ]

We continue to work to bring PACE nodes back into production. After turning off all the compute nodes and reseating the faulty network connections we identified, we have been bringing nodes back up slowly to avoid overwhelming the network fabric, which has remained clean so far. We are carefully testing each group to ensure full functionality, and we continue to identify problematic nodes and repair them where possible. At this time, the schedulers remain paused while we turn on and test nodes. We will provide additional updates as more progress is made.

[ Update 8/11/20 5:15 PM ]

We continue to troubleshoot the network issues in the Rich datacenter. Unfortunately, our efforts to avoid disturbing running jobs have complicated the troubleshooting, which has not yet led to a resolution. At this time, we need to begin systematically rebooting many nodes, which will kill some running user jobs. We will contact users whose jobs are currently running to alert them directly to the effect on their jobs.

Our troubleshooting today has included reseating multiple spine modules in the main datacenter switch, adjusting the uplinks between the two main switches to isolate problems, and rebooting switches and a number of nodes.

We will continue to provide updates as more information becomes available. Thank you for your patience during this outage.

[ Update 8/10/20 11:35 PM ]

We have made several changes to create a more stable InfiniBand network, including deploying an updated subnet manager, bypassing bad switch links, and repairing GPFS filesystem errors. However, we have not yet been able to uncover all of the issues affecting the network, so the affected schedulers remain paused for now to ensure that new jobs do not begin when they cannot produce results.

We will provide an update on Tuesday as more information becomes available. We greatly appreciate your patience as we continue to troubleshoot.

[ Update 8/10/20 6:20 PM ]

We are continuing to troubleshoot network issues in Rich. At this time, we are working to deploy an older backup subnet manager, and we will test the network again to determine if communication has been restored after that step.

The schedulers on the affected clusters remain paused, to ensure that new jobs do not begin when they cannot produce results.

We recognize that this outage has a significant impact on your research, and we are working to restore functionality in Rich as soon as possible. We will provide an update when more information becomes available.

[ Update 8/9/20 11:55 PM ]

We have restarted PACE’s Subnet Manager in Rich, but some network slowness remains. We are continuing to troubleshoot the problem. At this time, we plan to leave the Rich schedulers paused overnight to ensure that the issue is fully resolved before additional jobs begin, so that they can run successfully.

We will provide further updates on Monday.

[ Original Post ]

At approximately noon today, we began experiencing issues with our primary InfiniBand Subnet Manager in the Rich datacenter. PACE is investigating this issue, and we will provide an update when additional information or a resolution is available. At this time, you may experience slowness in accessing storage (home, project, or scratch) or issues with communication within MPI jobs.
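If you want a quick way to tell fabric trouble apart from a problem in your own application, a minimal two-rank ping-pong test along the following lines can help. This is an illustrative sketch, not a PACE-provided tool; compile it with your MPI compiler wrapper and place the two ranks on different nodes so the traffic actually crosses the InfiniBand fabric.

```c
/* pingpong.c - minimal two-rank MPI ping-pong sanity check.
 * Illustrative sketch only, not a PACE-provided tool.
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong   (place the two ranks on different nodes)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int count = 1 << 20;    /* ~4 MiB per message; size is an assumption */
    const int rounds = 100;
    int *buf = calloc(count, sizeof(int));

    double t0 = MPI_Wtime();
    for (int r = 0; r < rounds; r++) {
        if (rank == 0) {
            MPI_Send(buf, count, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, count, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Total bytes moved: count ints each way, every round. */
        double bytes = 2.0 * rounds * count * sizeof(int);
        printf("ping-pong completed: %.1f MB/s effective\n", bytes / (t1 - t0) / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

If a run like this hangs or reports very low bandwidth between nodes while a single-node run is fine, the problem is likely in the fabric rather than in your code.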

In order to minimize the impact on jobs, we have paused all schedulers on the affected clusters (accessed via the login-s, login-d, login7-d, novazohar, gryphon, and testflight-login headnodes). This prevents additional jobs from starting; jobs that are already running will not be stopped, although they may fail to produce results due to the network issues.

This issue does not impact the Coda datacenter (Hive and testflight-coda clusters) or CUI clusters in the Rich datacenter.

Please contact us with any questions or concerns at pace-support@oit.gatech.edu.
