
Archive for November, 2018

Brief Interruption to VPN During Urgent VPN Service Maintenance

Posted on Wednesday, 28 November, 2018

On November 29, 2018, from 10:00pm to 11:00pm (EST), OIT will be conducting maintenance on our VPN service. During this period, users connected to our clusters via the VPN will be disconnected and will need to reconnect to the VPN and then to the cluster. This maintenance will not impact running batch jobs, but it may impact running interactive jobs during this period. For additional details on the maintenance, please visit the following site:

Thank you for your attention to this urgent maintenance that OIT is conducting.

[Resolved] CoC-ICE Cluster: Multi-node job problem

Posted on Wednesday, 21 November, 2018

[Update – November 26, 2018] We’ve identified the issue and resolved the configuration error.  Users are now able to submit multi-node jobs on the CoC-ICE cluster.

[Original Post – November 21, 2018]

We are investigating an issue in which multi-node jobs hang after submission on the CoC-ICE cluster. This issue does not affect jobs submitted to a single node, nor does it affect the PACE-ICE cluster.
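For context, the affected workloads are those requesting more than one node in their submission script. A minimal Torque/Moab-style script of the kind used on PACE clusters at the time might look like the following sketch (the queue name, resource counts, and job name are illustrative assumptions, not taken from this announcement):

```shell
#PBS -N multinode-test           # job name (illustrative)
#PBS -l nodes=2:ppn=4            # two nodes, four processors per node -- the affected case
#PBS -l walltime=00:10:00
#PBS -q coc-ice                  # queue name is an assumption; use your course's queue

cd "$PBS_O_WORKDIR"
# Single-node jobs (nodes=1:ppn=...) were not affected by this issue.
echo "Running on nodes:"
cat "$PBS_NODEFILE"
```

Jobs requesting `nodes=1` continued to run normally while this issue was open.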

Thank you for your patience, and we apologize for this inconvenience while we resolve this issue.

[Resolved] ICE Clusters – Intermittent account problems

Posted on Thursday, 8 November, 2018

We received multiple reports of jobs crashing after being allocated on the instructional clusters (CoC-ICE and PACE-ICE). We've determined that intermittent account problems are causing these crashes, and we are working toward a solution.

Thank you for your patience, and we apologize for the inconvenience.


[RESOLVED] Scratch storage problems

Posted on Wednesday, 7 November, 2018

We received multiple reports of jobs crashing due to insufficient scratch storage, even though physical usage of the scratch file system is only at 41%.

We've identified the cause: a threshold process that should have been restarted after maintenance day was not, so the disk pools were unable to migrate data internally to other pools. We have now started this process and are migrating data to the appropriate pools, which should resolve the job crashes caused by insufficient scratch storage.

We will continue to monitor the scratch storage to ensure it is operating optimally. If you experience any further issues, please contact
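If you want to check scratch usage from your own session, a quick sketch is below. The path is illustrative (PACE conventionally links scratch at `~/scratch`; substitute your actual scratch path), and the GPFS quota command is shown only as a commented-out option since it requires the GPFS client tools:

```shell
# Report usage of the file system holding your scratch directory.
# SCRATCH_DIR is an assumption; adjust to your actual scratch path.
SCRATCH_DIR=${SCRATCH_DIR:-$HOME/scratch}
df -h "$SCRATCH_DIR" 2>/dev/null || df -h "$HOME"

# On GPFS file systems, per-user quotas can also be inspected where the
# client tools are installed (device name is site-specific):
# mmlsquota -u "$USER" <gpfs-device>
```

The `Use%` column in the `df` output reflects overall file-system usage, which is how the 41% figure above was measured; individual pools can still fill up even when the aggregate number looks healthy.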

Thank you for your patience, and we apologize for the inconvenience.

PACE clusters ready for research

Posted on Saturday, 3 November, 2018

Our November 2018 maintenance is complete on schedule. We have brought compute nodes back online and released previously submitted jobs. Login nodes are accessible, and your data are available. As usual, a small number of straggler nodes remain, which we will address over the coming days; these include nodes that need their PCIe connectors replaced as a preventative measure.

Completed Tasks


  • Complete (no user action needed) Replace power components in a rack in Rich 133
  • Complete (no user action needed) Replace defective PCIe connectors on multiple servers
      • As a precaution, additional identified nodes will have their PCIe connectors replaced when parts are delivered. No user action will be needed.
  • Complete (no user action needed) Stress test new InfiniBand subnet managers, to prepare for the move to Coda
  • Complete (no user action needed) Change uplink connections from management switches
  • Complete (no user action needed) Verify integrity of GPFS file systems
  • Complete (no user action needed) Upgrade firmware on DDN / GPFS storage systems
  • Complete (no user action needed) Upgrade firmware on TruNAS storage systems
  • Complete (some user action needed) Replaced PACE ICE schedulers with a physical server, to increase capacity and reliability. Some jobs on the PACE ICE cluster need to be resubmitted; we have contacted the affected users individually.