
Archive for category Uncategorized

PACE Ready for Research

Posted by on Saturday, 18 May, 2019

Our May 2019 maintenance is complete one day ahead of schedule! We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and your data are available. We are postponing the replacement of CMOS batteries on the servers due to a scheduling conflict with the vendor. As usual, there are a small number of straggling nodes we will address over the coming days.


  • (Complete) Upgrade testflight cluster to RHEL 7.6
  • (Complete) Upgrade gemini-gpu and gemini-cpu clusters to RHEL7, which will require user action (only for gemini-cpu/gpu clusters' users)
  • (Complete) Switch nodes between chemx and gemini-cpu queues
  • (Postponed) Replace CMOS batteries on multiple servers


  • (Complete) Replace a faulty InfiniBand switch, which affects a single rack with no impact to the complete fabric
  • (Complete) Migrate Rich-to-campus connections to 10Gbps


  • (Complete) Reboot ICE storage servers to correct issues with backup application
  • (Complete) Perform detailed performance analysis of the GPFS environment, in order to fine-tune parameters to improve performance


  • (Postponed) Updates to the submit filters in the schedulers
  • (Complete) Update salt master and minions


If you have any questions or concerns, please contact

[Complete] PACE quarterly maintenance – May 16-18, 2019

Posted by on Tuesday, 7 May, 2019

[Update – 05/09/2019] Our final quarterly maintenance schedule will include the following list of tasks:


  • (no user action needed) Replace CMOS batteries on multiple servers
  • (no user action needed) Upgrade testflight cluster to RHEL 7.6
  • (some user action needed) Upgrade gemini-gpu and gemini-cpu clusters to RHEL7, which will require user action (only for gemini-cpu/gpu clusters' users)
  • (no user action needed) Switch nodes between chemx and gemini-cpu queues


  • (no user action needed) Replace a faulty InfiniBand switch, which affects a single rack with no impact to the complete fabric
  • (no user action needed) Migrate Rich-to-campus connections to 10Gbps


  • (no user action needed) Reboot ICE storage servers to correct issues with backup application
  • (no user action needed) Perform detailed performance analysis of the GPFS environment, in order to fine-tune parameters to improve performance


  • (no user action needed) Updates to the submit filters in the schedulers
  • (no user action needed) Update salt master and minions


[Original Post – May 7, 2019 – 12:32pm] We are preparing for our quarterly maintenance beginning on May 16, 2019. This maintenance is planned for three days, starting Thursday, May 16 and running through Saturday, May 18.

As usual, jobs with long walltimes will be held by the scheduler to ensure that no active jobs will be running when systems are powered off. These jobs will be released as soon as the maintenance activities are complete.
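The hold logic described above amounts to a simple comparison: a job is held if its requested walltime could extend past the start of the maintenance window. A minimal sketch (the 6:00am start time and function name are illustrative assumptions, not PACE's actual scheduler code):

```python
from datetime import datetime, timedelta

# Illustrative maintenance window start (date from the post; hour assumed)
MAINTENANCE_START = datetime(2019, 5, 16, 6, 0)

def would_be_held(now, requested_walltime):
    """Return True if a job submitted 'now' with the given walltime
    could still be running when maintenance begins."""
    return now + requested_walltime > MAINTENANCE_START

# A 12-hour job submitted the evening before would be held...
print(would_be_held(datetime(2019, 5, 15, 20, 0), timedelta(hours=12)))  # True
# ...while a 4-hour job finishes before the window and can run.
print(would_be_held(datetime(2019, 5, 15, 20, 0), timedelta(hours=4)))   # False
```

Requesting the shortest walltime your job actually needs therefore lets it keep running right up to a maintenance window.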

In general, we will perform maintenance on the PACE network and migrate connections from 10Gbps to 40Gbps, conduct a GPFS storage performance analysis, upgrade schedulers, replace CMOS batteries, and upgrade the testflight cluster to the latest RHEL 7 kernel, 3.10.0-957.12.1, i.e., RHEL 7.6.

While we are still working on finalizing the task list and details, none of these tasks are expected to require any user actions.

Brief Interruption to PACE VPN During Service Maintenance

Posted by on Friday, 3 May, 2019

[May 3, 2019 – 4:53pm] On May 7, 2019, from 8:00pm to 9:00pm (EST), GT IT will be conducting maintenance of our VPN service. During this period, users connected to our ITAR/ASDL/CUI clusters via the VPN will be disconnected. This interruption will be brief (about 5 minutes), after which you may reconnect to the VPN and then the cluster. This service maintenance will not impact any running batch jobs, but it may impact running interactive jobs during this period. For additional details on the maintenance taking place, please visit the following link.

Thank you for your attention to this maintenance that GT IT is conducting.


PACE Procurement Timeline Adjustments

Posted by on Friday, 29 March, 2019

PACE staff have completed our move to the CODA building and are settling in. We've also added a couple of new faces to the team; announcements will be forthcoming shortly.

As the year-end purchasing deadlines approach, we wanted to update the community on some changes to our procurement calendar. We’re doing our best to advocate for the research community and navigate some tough realities. We’ve nearly exhausted our space in the Rich Computer Center, and are very limited in our ability to deploy new equipment in that space. The CODA datacenter will be our new home (more on that below) but is not quite ready yet.

As such, we have cancelled the previously planned FY19-Phase3 and will need to shift some dates for our last order in FY19, FY19-Phase4. This shift results in FY19-Phase4 and FY20-Phase1 essentially being deployed concurrently around October of 2019. For this reason, we strongly encourage faculty to participate in FY20-Phase1 and reserve FY19-Phase4 for those who need to use funds expiring in FY19.

We will also adjust configurations and pricing for FY19-Phase4 and FY20-Phase1 based on upcoming processing technology and market conditions once that pricing is available to the public.

Finally, planning is in progress for PACE to migrate existing research cyberinfrastructure from the Rich data center to CODA, and all efforts will be made to minimize disruption to research efforts during this move. The execution phase will not begin until at least October 2019.

To view the published schedule online or for more information, visit or email

Best Regards,

-PACE Team

[Resolved] PACE VM Migration – impacting various services

Posted by on Wednesday, 6 March, 2019

[March 7, 2019 – 12:33pm] We completed migrating our virtual servers and restored access to the testflight and novazohar clusters. If you encounter any issues, please let us know at

Tasks completed:

  • (Complete) Migrate two license servers
  • (Complete) Migrate testflight headnode
  • (Complete) Migrate novazohar headnode
  • (Complete) Migrate testflight scheduler


[March 6, 2019 – 10:44am] PACE will be migrating two license servers, the testflight headnode, the testflight scheduler, and the novazohar headnode. This migration will be very brief, taking only about as long as rebooting the systems. We are reserving 30 minutes for this service on Thursday, March 7 at 12:00pm. The impact will be brief: you will be unable to connect to the designated login/headnodes (i.e., novazohar and testflight), and you may be unable to submit jobs whose applications require a license. This service should not impact any running jobs.

If you have any questions, please don’t hesitate to contact us at

Campus network experiencing intermittent network latency

Posted by on Monday, 4 March, 2019

The Office of Information Technology (OIT) reported intermittent network latency impacting parts of the campus network. This would present as occasional slowness and timeouts when accessing PACE-managed resources, and in access from PACE to non-PACE license servers. It may have caused new jobs to fail during attempts to check out software licenses that are not managed by PACE. OIT has installed additional capacity and has isolated and neutralized part of the cause of the issue, which is currently being monitored for any further network traffic issues.
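Jobs that fail on a transient license checkout can often be made resilient with a short retry-and-backoff wrapper around the checkout step. A minimal sketch (the `checkout` callable is a stand-in for whatever license client a given application uses; this is not PACE-provided tooling):

```python
import time

def checkout_with_retry(checkout, attempts=5, base_delay=1.0):
    """Call 'checkout' until it succeeds, backing off exponentially
    between attempts; re-raise the last error if all attempts fail."""
    for attempt in range(attempts):
        try:
            return checkout()
        except OSError:  # e.g. timeout reaching the license server
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Example: a flaky checkout that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("license server timeout")
    return "license granted"

print(checkout_with_retry(flaky, base_delay=0.01))  # license granted
```

During intermittent network trouble like the incident above, a wrapper of this kind lets a job ride out brief outages instead of failing on the first timeout.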

For details and updates to this incident, please refer to OIT’s status page detailing this incident.

If you have any questions, please don’t hesitate to contact


[Complete] PACE staff is moving to Coda building

Posted by on Monday, 4 March, 2019

[March 20, 2019] This is a friendly note to confirm that PACE staff have moved over to the Coda building. While we have moved out of the Rich Building, we continue to monitor the Rich data center as we have in the past. If you have any questions or concerns, please contact us at

[Original Post – March 4, 2019] As you may already know, the PACE team will be moving to CODA during the weeks of March 11 and March 18; more specifically, our offices will be in transition on March 15 and March 18. Please note, this move is only for the staff members and not the data center. The data center will continue to operate as usual, but our team's responses may be delayed during this period, especially on March 15 and 18.

If you have any questions, please don’t hesitate to contact us at

[Resolved] Storage problem impacting applications and login

Posted by on Thursday, 28 February, 2019

At about 2:30pm, during a routine storage server procedure, we experienced a problem related to a service not starting properly. We resolved the issue within 15 minutes. This incident caused temporary unavailability of some applications and home directories; symptoms included hanging commands, codes, and login attempts.

We believe most jobs resumed operation after the issue was resolved, but we can't be sure. Please check your jobs to identify any that crashed, and report any problems you may notice to

Thank you for your attention, and apologies for this inconvenience.

[Resolved] Expected Network Interruptions Due to Campus Network Maintenance – Intermittent delays or disruption to major campus IT services

Posted by on Monday, 18 February, 2019

[Original Post – February 18, 2019] On Sunday, Feb. 24, OIT will perform a series of data center upgrades and migrations. This service window includes intermittent delays or disruption to major campus IT services between 7 a.m. and 8 p.m. as well as occasional interruptions in wireless connectivity between 9 a.m. and 12 p.m.

During this service upgrade, the intermittent service interruptions will result in periods when users may not be able to connect to PACE managed resources or they may be disconnected from their sessions, which may  interrupt interactive jobs that rely on an active SSH connection to a given cluster.   However, these upgrades will not impact running or queued batch jobs.  OIT anticipates all the service upgrades and migrations to conclude by 8 p.m., and PACE users should resume their work as usual.
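Batch jobs survive such interruptions because they do not depend on a live SSH session; an interactive workflow can often be converted into a short batch script. A sketch of a minimal PBS-style job script (the queue name, resources, and workload script are hypothetical placeholders, not PACE-specific values):

```shell
#PBS -N my_analysis          # hypothetical job name
#PBS -l nodes=1:ppn=4        # resources: 1 node, 4 cores
#PBS -l walltime=02:00:00    # requested walltime
#PBS -j oe                   # merge stdout/stderr into one log file

cd "$PBS_O_WORKDIR"          # run from the submission directory
./run_analysis.sh            # hypothetical workload script
```

Once submitted with `qsub`, a job like this runs to completion on the compute nodes even if the SSH session that submitted it is disconnected.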

For additional information and details on the services that OIT will be upgrading and migrating, please refer to the status page link at

PACE clusters ready for research

Posted by on Saturday, 16 February, 2019
Our February 2019 maintenance is complete on schedule. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and your data are available. As usual, there are a small number of straggling nodes we will address over the coming days.
Please let us know any problems you may notice:

  • (COMPLETE) Vendor will replace defective components on groups of servers
  • (COMPLETE) Ethernet network reconfiguration
  • (COMPLETE) GPFS / DDN enclosure reset
  • (COMPLETE) NAS maintenance and reconfiguration
  • (COMPLETE) PACE VMWare reconfiguration to remove out-of-support hosts
  • (COMPLETE) Migration of Megatron cluster to RHEL7