
[Complete] PACE Maintenance Period: May 11 – 13, 2022

Posted on Tuesday, 3 May, 2022

[Update 5/16/22 9:20 AM]

All PACE clusters, including Phoenix, are now ready for research and learning. We have restored stability of the Phoenix Lustre storage system and released jobs on Phoenix.

Thank you for your patience as we worked to restore Lustre project & scratch storage on the Phoenix cluster. Working with our support vendor, we identified a scanning tool that was causing instability on the scratch filesystem and impacting the entire storage system. The tool has been disabled pending further investigation.

Due to these complications, we will not proceed with tomorrow's monthly deletion of old files on the Phoenix & Hive scratch filesystems. Although only Phoenix was impacted, we will also delay Hive to avoid confusion. Files for which researchers were notified this month will not be deleted at this time, and you will receive another notification prior to any future deletion. Researchers are still encouraged to delete unneeded scratch files to preserve space on the system; one way to find candidates is sketched below.
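
For researchers looking for cleanup candidates, a minimal sketch using find is shown below; the scratch path is illustrative and may differ for your account, and the delete step is commented out so you can review the list first:

    # List scratch files untouched for 60+ days (path is illustrative)
    find ~/scratch -type f -atime +60 -ls

    # After reviewing the output, remove files you no longer need, e.g.:
    # find ~/scratch -type f -atime +60 -delete   # irreversible; use with care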

Campus network disaster recovery testing will disable Phoenix, Hive, PACE-ICE, and COC-ICE from 5:00 PM on Friday, June 10, through 12:00 noon on Monday, June 13. The next maintenance period for all PACE clusters is August 10, 2022, at 6:00 AM through August 12, 2022, at 11:59 PM. An additional maintenance period is tentatively scheduled for November 2-4.

Status of activities:

ITEMS REQUIRING USER ACTION:

  • None expected on research clusters

ITEMS NOT REQUIRING USER ACTION:

  • [Complete][ICE only][System] PACE-ICE and COC-ICE instructional clusters will receive an operating system upgrade to RHEL7.9, to match the research clusters. Visit our documentation for a guide on potential impacts. A testflight environment is not available for ICE.
  • [Postponed][Phoenix, Hive][Open OnDemand] Deploy R 8.3 on Open OnDemand
  • [Complete][Phoenix][Storage] multiple upgrades to Lustre project and scratch storage
  • [Complete][Hive][Storage] replace cable connecting GPFS project and scratch storage
  • [Complete][Network] Upgrade interfaces to 100 GbE on Globus Vapor endpoint and border storage
  • [Complete][Network] Add redundant 100GbE switch to storage servers, increasing capacity
  • [Complete][System] Install operating system patches
  • [Complete][System] Update operating system on administrative servers
  • [Complete][Network] Move BCDC DNS appliance to new IP address
  • [Complete][Hive][System] Upgrade CUDA and NVIDIA drivers on Hive to match other clusters with CUDA 11.5 (a quick check is shown below the list)
  • [Complete][System] Remove unused nouveau graphics driver kernel module from GPU nodes
  • [Complete][Network] Set static IP addresses on schedulers to improve reliability
  • [Complete][Datacenter] Cooling loop maintenance
  • [Complete][Datacenter] Georgia Power Microgrid testing
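
As a quick way to confirm the driver upgrade noted in the Hive item above, nvidia-smi reports the installed driver and the highest CUDA version it supports; run it on a GPU node, for example inside an interactive job:

    # On a GPU compute node: print driver and supported CUDA version
    nvidia-smi
    # After the upgrade, the header should report CUDA Version: 11.5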

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 

[Update 5/13/22 3:25 PM]

The PACE team and our support vendor’s engineers continue working to restore functionality of the Phoenix Lustre filesystem following the upgrade. Testing and remediation will continue today and through the weekend. At this time, we hope to be able to open Phoenix for research on Monday. We appreciate your patience as our maintenance period is extended. If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 

[Update 5/13/22 2:00 PM]

PACE maintenance continues on Phoenix, while the Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard clusters are now ready for research and learning.

Phoenix remains under maintenance, as complications arose following the upgrade of Lustre project and scratch storage. PACE and our storage vendor are working to resolve the issue at this time. We will update you when Phoenix is ready for research.

Jobs on the Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard clusters have been released.

Campus network disaster recovery testing will disable Phoenix, Hive, PACE-ICE, and COC-ICE from 5:00 PM on Friday, June 10, through 12:00 noon on Monday, June 13. The next maintenance period for all PACE clusters is August 10, 2022, at 6:00 AM through August 12, 2022, at 11:59 PM. An additional maintenance period is tentatively scheduled for November 2-4.

Status of activities:

ITEMS REQUIRING USER ACTION:

  • None expected on research clusters

ITEMS NOT REQUIRING USER ACTION:

  • [Complete][ICE only][System] PACE-ICE and COC-ICE instructional clusters will receive an operating system upgrade to RHEL7.9, to match the research clusters. Visit our documentation for a guide on potential impacts. A testflight environment is not available for ICE.
  • [Postponed][Phoenix, Hive][Open OnDemand] Deploy R 8.3 on Open OnDemand
  • [In progress][Phoenix][Storage] multiple upgrades to Lustre project and scratch storage
  • [Complete][Hive][Storage] replace cable connecting GPFS project and scratch storage
  • [Complete][Network] Upgrade interfaces to 100 GbE on Globus Vapor endpoint and border storage
  • [Complete][Network] Add redundant 100GbE switch to storage servers, increasing capacity
  • [Complete][System] Install operating system patches
  • [Complete][System] Update operating system on administrative servers
  • [Complete][Network] Move BCDC DNS appliance to new IP address
  • [Complete][Hive][System] Upgrade CUDA and NVIDIA drivers on Hive to match other clusters with CUDA 11.5
  • [Complete][System] Remove unused nouveau graphics driver kernel module from GPU nodes
  • [Complete][Network] Set static IP addresses on schedulers to improve reliability
  • [Complete][Datacenter] Cooling loop maintenance
  • [Complete][Datacenter] Georgia Power Microgrid testing

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 


[Detailed announcement 5/3/22]

As previously announced, our next PACE maintenance period is scheduled to begin at 6:00 AM on Wednesday, May 11, and end at 11:59 PM on Friday, May 13. As usual, jobs that request durations that would extend into the maintenance period will be held by the scheduler to run after maintenance is complete. During the maintenance window, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard.
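
In practice, the hold works through the requested walltime: a job whose walltime would cross 6:00 AM on May 11 stays queued until maintenance ends. A minimal sketch for the Torque/Moab scheduler used on these clusters (job name and resource values are illustrative):

    #PBS -N my_job                 # job name (illustrative)
    #PBS -l nodes=1:ppn=4          # resources (illustrative)
    #PBS -l walltime=48:00:00      # a 48-hour request submitted on May 10
                                   # crosses the outage, so the job is held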

Tentative list of activities:

ITEMS REQUIRING USER ACTION:

  • None expected on research clusters

ITEMS NOT REQUIRING USER ACTION:

  • [ICE only][System] PACE-ICE and COC-ICE instructional clusters will receive an operating system upgrade to RHEL7.9, to match the research clusters. Visit our documentation for a guide on potential impacts. A testflight environment is not available for ICE.
  • [Phoenix, Hive][Open OnDemand] Deploy R 8.3 on Open OnDemand
  • [Phoenix][Storage] multiple upgrades to Lustre project and scratch storage
  • [Hive][Storage] replace cable connecting GPFS project and scratch storage
  • [Network] Upgrade interfaces to 100 GbE on Globus Vapor endpoint and border storage
  • [Network] Add redundant 100GbE switch to storage servers, increasing capacity
  • [System] Install operating system patches
  • [System] Update operating system on administrative servers
  • [Network] Move BCDC DNS appliance to new IP address
  • [Hive][System] Upgrade CUDA and NVIDIA drivers on Hive to match other clusters with CUDA 11.5
  • [System] Remove unused nouveau graphics driver kernel module from GPU nodes
  • [Network] Set static IP addresses on schedulers to improve reliability
  • [Datacenter] Cooling loop maintenance
  • [Datacenter] Georgia Power Microgrid testing

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

[Early announcement]

Dear PACE Users,

This is a friendly reminder that our next maintenance period is scheduled to begin at 6:00 AM on Wednesday, 05/11/2022, and is tentatively scheduled to conclude by 11:59 PM on Friday, 05/13/2022. As usual, jobs with resource requests that would run during the maintenance period will be held by the scheduler until after the maintenance. During the maintenance period, access to all PACE-managed computational and storage resources will be unavailable.

If you have any questions or concerns, please do not hesitate to contact us at pace-support@oit.gatech.edu.

Best,

The PACE Team

Hive Gateway Resource Now Available to Campus Champions

Posted on Monday, 25 April, 2022

Dear Campus Champion Community,

We are pleased to announce the official release of the Hive Gateway at Georgia Tech’s Partnership for an Advanced Computing Environment (PACE) to the Campus Champion community. The Hive Gateway is powered by Apache Airavata and provides access to a portion of the Hive cluster at GT, an NSF MRI-funded supercomputer that delivers nearly 1 Linpack petaflop of computing power. For more hardware details, see the following link: https://docs.pace.gatech.edu/hive/resources/.

The Hive Gateway is available to *any* XSEDE researcher via federated login (i.e., CILogon) and has a variety of applications available, including Abinit, Psi4, NAMD, and a Python environment with TensorFlow and Keras installed, among others.

The Hive Gateway is accessible at https://gateway.hive.pace.gatech.edu

Our user guide is available at https://docs.pace.gatech.edu/hiveGateway/gettingStarted/ and contains details on the process of getting access. Briefly, go to “Log In” on the site and select your XSEDE credentials via CILogon; this should log you into the gateway and generate a request to our team to approve your gateway access and enable job submissions on the resource.

Please feel free to stop by the Hive gateway site, try it out, and/or direct your researchers to it.

Cheers!

– The PACE Team

Launch of Open OnDemand Portal for PACE’s Phoenix and Hive Clusters

Posted on Tuesday, 19 April, 2022

Dear PACE Researchers, 

We are pleased to announce the official release of the Open OnDemand (OOD) portal for PACE’s Phoenix and Hive clusters! The OOD portal allows you to access PACE compute resources through your browser and provides a seamless interface to several interactive applications, including Jupyter, MATLAB, and a general interactive desktop environment. Each PACE cluster has its own portal, allowing access to all your data as usual through the web interface.

In-depth documentation on OOD at PACE is available at https://docs.pace.gatech.edu/ood/guide, and links to the portal for each PACE cluster are listed below: 

Please note that you will need to be on the GT VPN in order to access the OOD portals.

Thursday’s PACE clusters orientation will feature a demo using OOD. To register for an upcoming PACE clusters orientation, visit https://b.gatech.edu/3w6ifqO.

Please direct any questions about Open OnDemand to our ticketing system via email to pace-support@oit.gatech.edu or by filling out a help request form.  

Cheers! 

– The PACE Team 

[Complete PACE Maintenance Period – February 9 – 11, 2022] PACE Clusters Ready for Research!

Posted on Thursday, 10 February, 2022

Dear PACE Users,

All PACE clusters, including Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard, are ready for research. As usual, we have released all user jobs that were held by the scheduler.

Due to complications with the RHEL7.9 upgrade, 36% of Phoenix compute nodes remain under maintenance. We will work to return the cluster to full strength in the coming days. All node classes and queues have nodes available, and all storage is accessible.

Researchers who did not complete workflow testing on our Testflight environments on Phoenix and Hive, and Firebird users for whom a testing environment was not available, could experience errors related to the upgrade (see blog post). Please submit a support ticket to pace-support@oit.gatech.edu for assistance if you encounter any issues.

Our next maintenance period is tentatively scheduled to begin at 6:00 AM on Wednesday, May 11, 2022, and conclude by 11:59 PM on Friday, May 13, 2022. Additional maintenance periods are tentatively scheduled for August 10-12 and November 2-4.

The following tasks were part of this maintenance period:

ITEMS REQUIRING USER ACTION:

  • [Complete on most nodes][System] Phoenix, Hive and Firebird clusters’ operating system will be upgraded to RHEL7.9.

ITEMS NOT REQUIRING USER ACTION:

  • [Deferred][Datacenter] Databank will repair/replace the DCR, requiring that all PACE compute nodes be powered off.
  • [Complete][Storage/Hive] Upgrade GPFS controller firmware
  • [Complete][Storage/Phoenix] Reintegrate storage previously borrowed for scratch into project storage
  • [Complete][Storage/Phoenix] Replace redundant storage controller and cables
  • [Complete][System] System configuration management updates
  • [Complete][Network] Upgrade IB switch firmware

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

Best,

The PACE Team

 

Join Us Today for GT’s Virtual ARC Symposium & Poster Session @ SC21, Wednesday (11/17), 6:00pm – 8:00pm

Posted on Wednesday, 17 November, 2021

This is a friendly reminder that the ARC Symposium and Poster Session is today from 6:00pm to 8:00pm (EST). Join us for this exciting virtual event featuring invited talks and more than 20 poster presenters who will highlight GT’s efforts in research computing. Relax for the evening and engage with our community and guests; a number are joining from outside GT, including Microsoft, AMD, Columbia, and UCAR, to name a few. We hope you can join us.

Links to Join the Event:

To join the ARC Symposium invited talks session (6:00 – 7:00pm EST), please use this BlueJeans link: https://primetime.bluejeans.com/a2m/live-event/jxzvgwub

To join the ARC Symposium poster session (7:00pm – 8:15pm EST), use the following link:
https://gtsc21.event.gatherly.io/

ARC Symposium Agenda:

5:45 PM EST – Floor Opens

6:00 PM EST – Opening Remarks and Welcome 

Prof. Srinivas Aluru, Executive Director of IDEaS

6:05 PM EST –  “Exploring the Cosmic Graveyard with LIGO and Advanced Research Computing”

Prof. Laura Cadonati, Associate Dean for Research, College of Sciences

6:25 PM EST – “Life after Moore’s Law: HPC is Dead, Long Live HPC!”

Prof. Rich Vuduc, Director of CRNCH

6:45 PM EST –  “PACE Update on Advanced Research Computing at Georgia Tech”

Pam Buffington, Interim Associate Director of Research Cyberinfrastructure, PACE/OIT, and Director of Faculty & External Engagement, Center for 21st Century Universities

7:00 PM EST – Poster Session Opens (more than 20 poster presenters!)

8:15 PM EST – Event Closes

[Complete – PACE Maintenance Period: November 3 – 5, 2021] PACE Clusters Ready for Research!

Posted on Friday, 5 November, 2021

Dear PACE researchers,

Our scheduled maintenance has completed ahead of schedule! All PACE clusters, including Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard, are ready for research. As usual, we have released all user jobs that were held by the scheduler. We appreciate everyone’s patience as we worked through these maintenance activities.

Our next maintenance period is tentatively scheduled to begin at 6:00AM on Wednesday, February 9, 2022, and conclude by 11:59PM on Friday, February 11, 2022. We have also tentatively scheduled the remaining maintenance periods for 2022 for May 11-13, August 10-12, and November 2-4.

The following tasks were part of this maintenance period:

ITEMS REQUIRING USER ACTION:

  • [Complete] TensorFlow upgrade due to security vulnerability. PACE will retire older versions of TensorFlow, and researchers should shift to using the new module. We also request that you replace any self-installed TensorFlow packages (a sketch of switching modules appears below the activity list). Additional details are available on our blog.

ITEMS NOT REQUIRING USER ACTION:

  • [Complete][Datacenter] Databank will clean the water cooling tower, requiring that all PACE compute nodes be powered off.
  • [Complete][System] Operating system patch installs
  • [Complete][Storage/Phoenix] Lustre controller firmware and other upgrades
  • [Complete][Storage/Phoenix] Lustre scratch upgrade and expansion
  • [Postponed][Storage] Hive GPFS storage upgrade
  • [Complete][System] System configuration management updates
  • [Complete][System] Updates to NVIDIA drivers and libraries
  • [Complete][System] Upgrade some PACE infrastructure nodes to RHEL 7.9
  • [Complete][System] Reorder group file
  • [Complete][Headnode/ICE] Configure c-group controls on COC-ICE and PACE-ICE headnodes
  • [Complete][Scheduler/Hive] separate Torque & Moab servers to improve scheduler reliability
  • [Complete][Network] update ethernet switch firmware
  • [Complete][Network] update IP addresses of switches in BCDC
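
As referenced in the user-action item above, a minimal sketch of moving to the patched TensorFlow module is shown here; the module name and version are illustrative, so consult module avail for what is actually installed on your cluster:

    # See which TensorFlow modules are available after maintenance
    module avail tensorflow

    # Clear previously loaded modules, then load the patched version
    # (version string is illustrative; pick one from module avail)
    module purge
    module load tensorflow-gpu/2.6.0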

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu. You may read this message and prior updates related to this maintenance period on our blog.

Best,

-The PACE Team

 

Announcing the PACE OSG Orientation Class

Posted on Thursday, 7 October, 2021

Dear PACE Researchers, 

PACE is pleased to announce the launch of the PACE Open Science Grid (OSG) Orientation class, which introduces Georgia Tech’s research community to OSG and the distributed high-throughput computing resources available via OSG Connect. Join us for this virtual orientation to learn about OSG and how it may benefit your research needs.

Please see below for the session dates and the registration form:

Dates and times: October 15, 10:30am – 12:15pm
                 November 11, 1:30pm – 3:15pm

Registration: https://b.gatech.edu/3Bi4Yie

This class is based in part on work supported by NSF CC* award 1925541, “Integrating Georgia Tech into the Open Science Grid for Multi-Messenger Astrophysics”. With this award, PACE, in collaboration with the Center for Relativistic Astrophysics, added CPU, GPU, and storage capacity to the existing OSG pool, as well as the first regional StashCache service, which benefits all OSG institutions in the Southeast region, not just Georgia Tech.

This orientation is the first step in PACE’s longer-term plans to support OSG initiatives on campus. Please be on the lookout for more exciting announcements from our team in the near future.

We look forward to seeing you at the OSG orientation.

Best,

The PACE Team

Hive and Phoenix Scheduler Configuration Change

Posted on Wednesday, 22 September, 2021

Dear PACE Researchers, 

We would like to announce an upcoming change to the scheduler configuration on the Phoenix and Hive clusters at 9:00 AM on Thursday, September 23rd. This change should improve scheduler performance given the large number of jobs executed by our users.

What will PACE be doing: PACE will reduce the retention time for job-specific logs from 24 hours to 6 hours after job completion. Reducing the amount of job information the scheduler needs to process regularly should provide a more stable and faster job submission environment. Additionally, the downtime associated with scheduler restarts should decrease, as job ingestion time will be reduced accordingly.

Who does this message impact: Any user who attempts to use qstat for a job more than 6 hours after completion will be unable to do so moving forward. In addition to the scheduler job STDOUT/STDERR files, job statistics for completed jobs on Phoenix and Hive can be queried at https://pbstools-coda.pace.gatech.edu; an example follows.
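
For example (the job ID below is hypothetical), a recently completed job can still be inspected with qstat inside the new 6-hour window:

    # Within 6 hours of completion, this still returns full job details
    qstat -f 1234567

    # Beyond 6 hours the record is purged; consult the job's
    # STDOUT/STDERR files (e.g., my_job.o1234567) or query
    # https://pbstools-coda.pace.gatech.edu instead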

What PACE will continue to do: We will monitor the clusters for issues during and after the configuration change to assess any immediate impacts from the update. We will continue to assess the scheduler health to ensure a stable job submission environment. 

As always, please contact us at pace-support@oit.gatech.edu with any questions or concerns regarding this change. 

Best Regards, 
The PACE Team

[Complete] PACE Maintenance Period (November 3 – 5, 2021)

Posted on Monday, 13 September, 2021

[Complete 11/5/21 3:15 PM]

Our scheduled maintenance has completed ahead of schedule! All PACE clusters, including Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard, are ready for research. As usual, we have released all user jobs that were held by the scheduler. We appreciate everyone’s patience as we worked through these maintenance activities.

Our next maintenance period is tentatively scheduled to begin at 6:00AM on Wednesday, February 9, 2022, and conclude by 11:59PM on Friday, February 11, 2022. We have also tentatively scheduled the remaining maintenance periods for 2022 for May 11-13, August 10-12, and November 2-4.

The following tasks were part of this maintenance period:

ITEMS REQUIRING USER ACTION:

  • [Complete] TensorFlow upgrade due to security vulnerability. PACE will retire older versions of TensorFlow, and researchers should shift to using the new module. We also request that you replace any self-installed TensorFlow packages. Additional details are available on our blog.

ITEMS NOT REQUIRING USER ACTION:

  • [Complete][Datacenter] Databank will clean the water cooling tower, requiring that all PACE compute nodes be powered off.
  • [Complete][System] Operating system patch installs
  • [Complete][Storage/Phoenix] Lustre controller firmware and other upgrades
  • [Complete][Storage/Phoenix] Lustre scratch upgrade and expansion
  • [Postponed][Storage] Hive GPFS storage upgrade
  • [Complete][System] System configuration management updates
  • [Complete][System] Updates to NVIDIA drivers and libraries
  • [Complete][System] Upgrade some PACE infrastructure nodes to RHEL 7.9
  • [Complete][System] Reorder group file
  • [Complete][Headnode/ICE] Configure c-group controls on COC-ICE and PACE-ICE headnodes
  • [Complete][Scheduler/Hive] separate Torque & Moab servers to improve scheduler reliability
  • [Complete][Network] update ethernet switch firmware
  • [Complete][Network] update IP addresses of switches in BCDC

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 

[Update 11/1/21 2:00 PM]

C-group controls will be configured on the login nodes for both COC-ICE and PACE-ICE during the maintenance period this week. This should help mitigate overuse of the login node by students running heavy computations, which has slowed the node for others.

Please use compute nodes for all computational work and avoid resource-intensive processes on the login nodes. Students who need an interactive environment are requested to submit an interactive job (an example appears below). Students who are uncertain about how to use the ICE schedulers to work on compute nodes should contact their course’s instructor or TA for assistance; they can help you with workflows on the cluster. PACE will stop processes that overuse the login nodes in order to restore functionality for all students.
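
As an illustration, an interactive session on a compute node can be requested as shown below; the queue name and resource values are hypothetical, so use the settings your instructor or TA recommends for your course:

    # Request an interactive session on a compute node instead of
    # running heavy computations on the login node
    qsub -I -q pace-ice -l nodes=1:ppn=4 -l walltime=01:00:00

    # When the job starts, the prompt moves to a compute node;
    # run your work there, then exit to end the session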

Thank you for your efforts to ensure ICE clusters are an available resource for all students in participating courses.

[Reminder 10/26/21 4:30 PM]

Additional details and instructions for the TensorFlow upgrade are available in another blog post.

[Full announcement 10/20/21 10:30 AM]

As previously announced, our next PACE maintenance period is scheduled to begin at 6:00 AM on Wednesday, November 3, and end at 11:59 PM on Friday, November 5. As usual, jobs that request durations that would extend into the maintenance period will be held by the scheduler to run after maintenance is complete. During the maintenance window, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard.

Please see below for a tentative list of activities:

ITEMS REQUIRING USER ACTION:

  • TensorFlow upgrade due to security vulnerability. PACE will retire older versions of TensorFlow, and researchers should shift to using the new module. We also request that you replace any self-installed TensorFlow packages. Additional details and instructions will follow in a separate message.

ITEMS NOT REQUIRING USER ACTION:

  • [Datacenter] Databank will clean the water cooling tower, requiring that all PACE compute nodes be powered off.
  • [System] Operating system patch installs
  • [Storage/Phoenix] Lustre controller firmware and other upgrades
  • [Storage/Phoenix] Lustre scratch upgrade and expansion
  • [System] System configuration management updates
  • [System] Updates to NVIDIA drivers and libraries
  • [System] Upgrade some PACE infrastructure nodes to RHEL 7.9
  • [System] Reorder group file
  • [Headnode/COC-ICE] Configure c-group controls on COC-ICE headnode
  • [Scheduler/Hive] separate Torque & Moab servers to improve scheduler reliability
  • [Network] update ethernet switch firmware
  • [Network] update IP addresses of switches in BCDC

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 

[Early announcement]

Dear PACE Users,

This is a friendly reminder that our next maintenance period is tentatively scheduled to begin at 6:00 AM on Wednesday, 11/03/2021, and to conclude by 11:59 PM on Friday, 11/05/2021. As usual, jobs with resource requests that would run during the maintenance period will be held by the scheduler until after the maintenance period. During the maintenance period, access to all PACE-managed computational and storage resources will be unavailable.

As we get closer to the maintenance period, we will communicate the list of activities to be completed and update this blog post.

If you have any questions or concerns, please do not hesitate to contact us at pace-support@oit.gatech.edu.

Best,

The PACE Team

[Complete] PACE is transitioning from our current ticketing system, FootPrints, to ServiceNow

Posted on Wednesday, 1 September, 2021

[Update – September 3]

Dear PACE Users,

PACE has successfully transitioned to ServiceNow, and we have begun receiving user tickets there as expected.

As previously mentioned, you may continue to use the pace-support@oit.gatech.edu email to reach PACE support. For your reference, the three links listed below are direct links to the ServiceNow forms that you may use going forward to request help, request new software for the PACE Apps software repository, and request access to the ICE cluster.

The PACE team will continue to work on the remaining support requests in the FootPrints system. Thank you all for your attention and patience through this transition.

If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Best, 

The PACE Team 

 

[Original Message – September 1]

Dear PACE Users,  

We are reaching out to inform you that PACE is transitioning from our current ticketing system, FootPrints, to ServiceNow.

What’s happening and what we are doing: The PACE team is transitioning from our current ticketing system, FootPrints, to ServiceNow. Beginning September 3, all new PACE support requests will be processed in ServiceNow. PACE will continue to work on any existing support requests that are in FootPrints. As part of this transition, we have created two new request forms that replace our existing Software Request Form and PACE ICE Instructional Cluster Request Form.

How does this impact me: For most users, the transition is seamless; the exception is that the links to our software and ICE request forms are changing. On Friday, September 3rd, the PACE support email address, pace-support@oit.gatech.edu, will redirect users’ emails/requests to ServiceNow, and the new software and ICE request form links will be available on our website. Please use those new forms if you would like to request new software for the PACE Apps software repository or if you are a course instructor interested in using PACE-ICE for your students. Users who submitted ticket requests via FootPrints directly may use ServiceNow at https://services.gatech.edu (navigate to “Technology” and then the “PACE” tile) and submit their requests from the available forms.

The following direct links to ServiceNow forms will be live and available to users on September 3: 

What we will continue to do:   We will continue to work on the existing tickets that are in FootPrints, and you may check the status of this transition on this blog post.   

If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Best, 

The PACE Team