
Archive for category Uncategorized

Storage (GPFS) and datacenter problems resolved

Posted on Monday, 19 June, 2017

All node and GPFS filesystem issues caused by the power failure should be resolved as of late Friday evening (June 16). If you are still experiencing problems, please let us know at pace-support@oit.gatech.edu.

PACE is experiencing storage (GPFS) problems

Posted on Friday, 16 June, 2017

We are experiencing intermittent problems with the GPFS storage system that hosts most of the project directories.

We are working with the vendor to investigate the ongoing issues. At this moment we don’t know whether they are related to yesterday’s power/cooling failures, but we will update the PACE community as we learn more.

This issue may impact running jobs, and we are sorry for the inconvenience.

PACE datacenter experienced a power/cooling failure

Posted on Friday, 16 June, 2017
What happened: We had a brief power failure in our datacenter, which took out cooling in racks cooled by chilled water. This impacted about 160 nodes across various queues, with potential impact on running jobs.
Current Situation: Some cooling has been restored; however, we had to shut down a couple of the highest-temperature racks that were not cooling down (p41, k30, h43, c29, c42). In coordination with the Operations team, we are keeping a close eye on the remaining racks in the risk area as their temperatures continue to be monitored.
We will begin bringing the downed nodes back online once the cooling issue is fully resolved.
What can you do: Please resubmit any failed jobs if you were using one of the queues listed below. As always, contact pace-support@oit.gatech.edu for any assistance you may need.
Thank you for your patience, and sorry for the inconvenience.

Impacted Queues:

—————————
apurimac-6
apurimacforce-6
atlas-6
atlas-debug
b5force-6
biobot
biobotforce-6
bioforce-6
breakfix
cee
ceeforce
chemprot
chowforce-6
cnsforce-6
critcel
critcel-burnup
critcelforce-6
critcel-prv
cygnus
cygnus-6
cygnus64-6
cygnusforce-6
cygnus-hp
davenprtforce-6
dimerforce-6
ece
eceforce-6
enveomics-6
faceoff
faceoffforce-6
force-6
ggate-6
granulous
gryphon
gryphon-debug
gryphon-prio
gryphon-tmp
hygeneforce-6
isabella-prv
isblforce-6
iw-shared-6
martini
mathforce-6
mayorlab_force-6
mday-test
medprint-6
medprintfrc-6
megatron
megatronforce-6
microcluster
micro-largedata
monkeys_gpu
mps
njordforce-6
optimusforce-6
prometforce-6
prometheus
radiance
rombergforce
semap-6
skadi
sonarforce-6
spartacusfrc-6
threshold-6
try-6
uranus-6

Large Scale Problem

Posted on Wednesday, 7 June, 2017

Update (6/7/2017, 1:20pm): The network issues are now addressed and systems are back in normal operation. Please check your jobs and resubmit failed jobs as needed. If you continue to experience any problems, or need our assistance with anything else, please contact us at pace-support@oit.gatech.edu. We are sorry for this inconvenience and thank you once again for your patience.

Original message: We are experiencing a large-scale network problem impacting multiple storage servers and the software repository, with a potential impact on running jobs. We are actively working to resolve it and will provide updates as often as possible. We appreciate your patience and understanding, and are committed to resolving the issue as soon as we possibly can.

Infiniband switch failure causing partial network and storage unavailability

Posted on Thursday, 25 May, 2017
We experienced an InfiniBand (IB) switch failure, which impacted several racks of nodes connected to this switch. The failure caused MPI job crashes and GPFS unavailability.

The switch is now back online and it’s safe to submit new jobs.

If you are using one or more of the queues listed below, please check your jobs and re-submit them if necessary. One indication of this issue is a “Stale file handle” error message in the job output or logs.
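As a quick way to spot affected jobs, you can grep your job output files for the error. This is only a sketch: it assumes Torque-style output files (named like jobname.oJOBID) sitting in your submission directory, so adjust the pattern for your own naming scheme.

```shell
# List job output files in the current directory that contain the
# "Stale file handle" error; the corresponding jobs likely need to
# be resubmitted. The ./*.o* glob assumes Torque's default naming.
grep -l "Stale file handle" ./*.o* 2>/dev/null
```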

Impacted Queues:
=============
athena-intel
atlantis
atlas-6-sunge
atlas-intel
joe-6-intel
test85
apurimacforce-6
b5force-6
bioforce-6
ceeforce
chemprot
cnsforce-6
critcelforce-6
cygnusforce-6
dimerforce-6
eceforce-6
faceoffforce-6
force-6
hygeneforce-6
isblforce-6
iw-shared-6
mathforce-6
mayorlab_force-6
medprint-6
nvidia-gpu
optimusforce-6
prometforce-6
rombergforce
sonarforce-6
spartacusfrc-6
try-6
testflight
novazohar

PACE clusters ready for research

Posted on Friday, 12 May, 2017

Our May 2017 maintenance period is now complete, far ahead of schedule. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and data is available. As usual, there are some straggling nodes that we will address over the coming days.

Our next maintenance period is scheduled for Thursday, August 10 through Saturday, August 12, 2017.

New operating system kernel

  • All compute, interactive, and head nodes have received the updated kernel. No user action needed.

DDN firmware updates

  • This update brought low-level firmware on drives up to date, per DDN's recommendation. No user action needed.

Networking

  • DNS/DHCP and firewall updates per vendor recommendation applied by OIT Network Engineering.
  • IP address reassignments for some clusters completed. No user action needed.

Electrical

  • Power distribution repairs completed by OIT Operations. No user action needed.

PACE quarterly maintenance – May 11, 2017

Posted on Monday, 8 May, 2017

PACE clusters and systems will be taken offline at 6am this Thursday (May 11) through the end of Saturday (May 13). Jobs with long walltimes will be held by the scheduler to prevent them from being killed when we power off the nodes. These jobs will be released as soon as the maintenance activities are complete.

Planned improvements are mostly transparent to users, requiring no user action before or after the maintenance.

Systems

  • We will deploy a recompiled kernel that’s identical to the current version except for a patch that addresses the Dirty COW vulnerability. Currently, we have a mitigation in place that prevents the use of debuggers and profilers (e.g. gdb, strace, Allinea DDT). Once the patched kernel is deployed, these tools will once again be available on all nodes. Please let us know if you continue to have problems debugging or profiling your codes after the maintenance day.
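After the maintenance, one quick way to confirm that ptrace-based tools are working again is to trace a trivial command. This is just a sketch using strace; gdb or DDT would serve equally well, and strace may not be installed on every node.

```shell
# Post-maintenance sanity check: tracing a trivial command should
# succeed once the patched kernel replaces the current mitigation.
if command -v strace >/dev/null 2>&1; then
  if strace -c true >/dev/null 2>&1; then
    echo "strace ok"
  else
    echo "strace blocked"
  fi
else
  echo "strace not installed"
fi
```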

Storage

  • Firmware updates on all of the DDN GPFS storage (scratch and most of the project storage)

Network

  • Upgrades to DNS servers, as recommended and performed by OIT Network Engineering
  • Software upgrades to the PACE firewall appliance to address a known bug
  • New subnets and re-assignment of IP addresses for some of the clusters

Power

  • PDU fixes impacting 3 nodes in the c29 rack

The date for the next maintenance day is not certain yet, but we will announce it as soon as we have it.

College of Engineering (COE) license servers available starting 5:10 pm yesterday

Posted on Wednesday, 12 April, 2017

As of 5:10 pm on 11 April 2017, COE license servers are available again.

Multiple power outages across Georgia are plaguing several license servers on campus. All efforts have been made to keep systems available. If your jobs report missing or unavailable licenses, please check http://licensewatcher.ecs.gatech.edu/ for the most up-to-date information.

College of Engineering license servers going dark at 3:35 pm

Posted on Tuesday, 11 April, 2017

College of Engineering (COE) license servers will go dark at 3:35pm. Research and instruction will be impacted.

COE system engineers have stated that UPS run time is running out. Ansys, Comsol, Abaqus, Solidworks, and other software will go dark. Matlab, Autocad, and NX should still be up (they run in a different location).

Please test the new patched kernel on TestFlight nodes

Posted on Wednesday, 1 March, 2017

As some of you are already aware, the Dirty COW exploit was a source of great concern for PACE. This exploit can allow a local user to gain elevated privileges. For more details, please see https://access.redhat.com/blogs/766093/posts/2757141.

In response, PACE has applied a mitigation on all of the nodes. While this mitigation is effective in protecting the systems, it has the downside of causing debugging tools (e.g. strace, gdb, and DDT) to stop working. Unfortunately, none of the new (and patched) kernel versions made available by Red Hat support our InfiniBand network drivers (OFED), so we had to leave the mitigation running for a while. This caused inconvenience, particularly for users who actively develop code and rely on these debuggers.

As a long term solution, we patched the source code of the kernel and recompiled it, without changing anything else. Our initial tests were successful, so we deployed it on three of the four online nodes in the testflight queue:

rich133-k43-34-l recompiled kernel
rich133-k43-34-r recompiled kernel
rich133-k43-35-l original kernel
rich133-k43-35-r recompiled kernel

Please test your codes on this queue. Our plan is to deploy this recompiled kernel to all PACE nodes, including headnodes and compute nodes, and we would like to make sure that your codes will continue to run unchanged after this deployment.

The deployment will be a rolling update: we will opportunistically patch nodes, starting with idle ones. Until the deployment is complete, queues will contain a mix of nodes with old and recompiled kernels. For this reason, we strongly recommend testing multi-node parallel applications with a hostlist that includes the node still running the original kernel (rich133-k43-35-l), to verify how your code behaves with mixed hostlists.
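As a sketch, a Torque-style job script could request the original-kernel node together with a recompiled node in a single allocation. The hostnames come from the list above; the job name, walltime, and ppn values are illustrative, so adjust them for your own application.

```shell
#PBS -N mixed-kernel-test
#PBS -q testflight
# Request the node still on the original kernel (rich133-k43-35-l)
# plus one recompiled node, so the run spans both kernel versions.
#PBS -l nodes=rich133-k43-35-l:ppn=1+rich133-k43-34-l:ppn=1
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR
# Print the running kernel on each allocated node, then substitute
# a short run of your own MPI application here.
mpirun -np 2 uname -r
```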

As always, please keep your testflight runs short to allow other users to test their own codes. Please report any problems to pace-support@oit.gatech.edu and we will be happy to help. Hopefully, this deployment will be completely transparent to most users, if not all.