The storage problem has been fixed, and the nodes are available for use. Thanks for your patience.
We’ve got the switch back. The outage appears to have caused our virtual machine farm to reboot, so connections to the head nodes will have been dropped.
This also affected the network path between compute nodes and the file servers. With a little luck, the NFS traffic should resume, but you may want to check on any running jobs to make sure.
Word from the network team is that they were following the switch vendor’s published instructions for integrating the two switches when the failure occurred. We’ll be looking into this pretty intensely, as these switches are seeing a lot of deployments in other OIT functions.
Hi folks,
In an attempt to restore the network redundancy lost in the switch failure on 10/31, the Campus Network team has run into trouble connecting the new switch. At this point, the core of our HPC network is non-functional. Senior experts from the network team are working to restore connectivity as soon as possible.
This morning, we found the hp8, hp10, hp12, hp14, hp16, hp18, hp20, hp22, hp24, and hp26 filesystems full. All of these filesystems reside on the same fileserver and share capacity. The root cause was an oversight on our part: a lack of quota enforcement on a particular user's home directory. The proper 5GB home directory quotas have been reinstated, and we are working with this user to move their data to their project directory. We've managed to free up a little space for the moment, but it will take some time to move a couple of TB of data. We're also doing an audit to ensure that all appropriate storage quotas are in place.
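If you'd like to see how close your own home directory is to the 5GB limit, a minimal sketch along these lines can give a rough number (the 5GB figure comes from this notice; the script itself is illustrative, not an official PACE tool, and won't account for filesystem overhead the way the real quota accounting does):

#!/usr/bin/env python3
"""Rough check of home-directory usage against the 5GB quota (illustrative only)."""
import os

QUOTA_BYTES = 5 * 1024**3  # assumed 5GB home-directory quota from the announcement

def directory_size(path):
    """Sum the sizes of all regular files under `path`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # skip files that vanish or are unreadable
    return total

if __name__ == "__main__":
    home = os.path.expanduser("~")
    used = directory_size(home)
    print(f"{home}: {used / 1024**3:.2f} GB used of {QUOTA_BYTES / 1024**3:.0f} GB")
    if used > QUOTA_BYTES:
        print("Over quota: consider moving large data to your project directory.")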
This would have affected users on the following clusters:
Greetings all,
As I’m sure some of you are aware, next week is the annual Supercomputing ’11 conference in Seattle. Many of the PACE staff will be attending, but Brian MacLeod and Andre McNeill have graciously agreed to hold the fort here. The rest of us will be focused on conference activities but will have connectivity and can assist with urgent matters should the need arise.
The disk array rebuild has completed. We brought some nodes back up during the rebuild to start taking jobs, and all nodes should now be online.