

PACE clusters ready for research

Posted by on Saturday, 10 February, 2018

Our February 2018 maintenance (http://blog.pace.gatech.edu/?p=6158) is complete ahead of schedule. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and data are available. As usual, there are some straggling nodes that we will address over the coming days.

Our next maintenance period is scheduled for Thursday, May 10 through Saturday, May 12, 2018.

Storage
– Both the pace1 and pace2 GPFS systems now apply a limit of 2 million files/directories per user. Please contact us if you have problems creating new files or updating existing ones, or if you see messages saying that your quota is exceeded.
– We performed several maintenance tasks for both pace1 and pace2 systems to improve reliability and performance. This included rebalancing data on the drives as recommended by the vendor.
– Temporary links pointing to storage migrated in the previous maintenance window (November 2017) have now been removed. All direct references to the old paths will fail. We strongly recommend that Math and ECE users (whose repositories were relocated as part of the storage migration) run tests. Please let us know if you see ‘file not found’ type errors referencing old paths starting with “/nv/…”
– Deletion of the old copies of bio-konstantinidis and bio-soojinyi is currently pending; we will start deletions sometime after the maintenance day.
– CNS users have been migrated to their new home and project directories.
Power
– We completed all power work as planned.
Rack/Node maintenance
– To rebalance power utilization, a few ASDL nodes were moved and renamed. Users of this cluster should not notice any differences other than the hostnames.
– VM servers received a memory upgrade, allowing for more capacity.
Network
– Recabling and reconfiguration of the IB network is complete
– All planned Ethernet network improvements are complete
As always, please contact us (pace-support@oit.gatech.edu) if you notice any problems.

 

PACE quarterly maintenance – (Feb 8-10, 2018)

Posted by on Monday, 5 February, 2018

PACE maintenance activities are scheduled to start at 6am this Thursday (2/8) and may continue until Saturday (2/10). As usual, jobs with long walltimes are being held by the scheduler to prevent them from getting killed when we power off the systems. These jobs will be released as soon as the maintenance activities are complete.

Some of the planned improvements, new storage quotas in particular, require user action. Please read on for more details and action items.

Storage

* (Requires user action) The “2 Million files/directories per user” limitation on the GPFS system (as initially announced at http://blog.pace.gatech.edu/?p=6103) will take effect on both the pace1 and pace2 storage systems, which constitute almost all of the project space with the exception of the ASDL cluster. We have been sending weekly reminders since the November maintenance to users who exceed this limit. If you have been receiving these notifications and haven’t reduced your usage yet, please contact pace-support urgently to prevent interruptions to your research.
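If you are unsure how close you are to the limit, a rough (if slow) check is to count everything under your project space. This is only a sketch; “~/data” is an example symlink name, so substitute your own project-directory link if it differs:

# Count files and directories under your project space and compare the
# total against the 2 million limit ("~/data" is an example link name).
find ~/data/ | wc -l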

* (Requires user action) As the last step to conclude the storage migration performed during the November maintenance, PACE will remove the redirection links left at the old storage locations as a temporary precaution. The best way to tell whether your codes/scripts will be impacted is to test them on the testflight cluster, which doesn’t have these links, as described in http://blog.pace.gatech.edu/?p=6153. If your codes/scripts work on testflight, they will continue to work on any other PACE cluster after the links are removed.
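Another quick check is to scan your scripts for hardcoded references to the old locations before the links disappear. This is only a sketch; “~/jobs” is a placeholder for wherever your submission scripts actually live:

# List every line in your job scripts that still mentions an old-style
# "/nv/..." path ("~/jobs" is a placeholder for your script directory).
grep -rn "/nv/" ~/jobs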

We have been working with the ECE and Math departments, which maintain their own software repositories, to ensure that the existing software will continue to run in the new locations. We have been strongly encouraging users of these repositories to run tests on the testflight cluster to identify potential problems. If you haven’t had a chance to try your codes yet, please do so before the maintenance day and contact pace-support urgently if you notice any problems.

* (Requires user action) The two storage locations that had been migrated between two GPFS systems, namely bio-konstantinidis and bio-soojinyi, will be deleted from the old (pace1) location. If you need any data from the old location, please contact pace-support urgently to retrieve them before the maintenance day.

* (May require user action) We will complete the migration of CNS cluster users to their new home (hcns1) and project storage (phy-grigoriev). We will replace the symbolic links (e.g. ~/data) accordingly to make this migration as transparent to users as possible. If some of your codes/scripts include hardwired references to the old locations, they will need to be updated with the new locations. We strongly recommend using the available symbolic links such as “~/data” rather than absolute paths such as “/gpfs/pace2/project/pf1” to ensure that your codes/scripts will not be impacted by future changes we may need to make.
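As a sketch of the recommended style (the project subdirectory name below is made up for illustration), prefer the symlink over the absolute path in job scripts:

# Portable: go through the home-directory symlink, which PACE keeps updated
cd ~/data/my_project
# Fragile: a hardwired absolute path that may change in a future migration
# cd /gpfs/pace2/project/pf1/username3/my_project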

* (No user action needed) We will perform some maintenance (disk striping) on the pace1 GPFS system. We are also exploring the possibility of updating some components in pace2, but the final decision is waiting on the vendor’s recommendation. None of this work requires any user action.

Power Work

* (No user action needed) We will install new power distribution units (PDUs) and reconfigure some connections on some racks to achieve better power distribution and increase redundancy.

Rack/Node maintenance

* (No user action needed) We will physically move some of the ASDL nodes to a different rack. While this requires renaming those nodes, there will be no difference in the way users submit jobs via the scheduler. The one exception is the unlikely scenario of users explicitly requesting nodes by their hostnames in PBS scripts.
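If you do pin jobs to specific hosts, that is the one place the rename would show up. The hostname below is purely hypothetical, but this is the pattern to look for in your PBS scripts:

# Pinned to a specific node by hostname (hypothetical name); lines like
# this would need updating if that node is renamed.
#PBS -l nodes=asdl-node-01:ppn=8
# A generic request is unaffected by node renames:
#PBS -l nodes=1:ppn=8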

* (No user action needed) We will increase the memory capacity of the virtual machine servers, which host most of the headnodes, from 64GB to 256GB. The memory available per VM, however, will not change.

Network

* (No user action needed) We will do some recabling and reconfiguration on the Infiniband (IB) network to achieve more efficient connectivity, which will also allow us to retire an old switch.

* (No user action needed) We will install a new Ethernet switch and replace some others to optimize the network.

Instructional Cluster

The instructional cluster (a.k.a. PACE/COC ICE) will be offlined as a part of this maintenance. This is a brand-new resource that has not yet been officially made available to any classes, but we have noticed logins by some users. Please refrain from using these resources for any classes until we release them following a training session that we will schedule next week.

 

Please test your codes on Testflight if your storage had been migrated in November

Posted by on Wednesday, 24 January, 2018

As you may recall, our November 2017 maintenance included the consolidation of multiple different filesystems into a single system (pace2), as announced here: http://blog.pace.gatech.edu/?p=6103. All of the files should have been successfully migrated by now, with links replaced to point to the new locations.

We created links in the old locations as a temporary measure to prevent immediate job crashes, as explained in the post above (please see the “What if I don’t fix existing references to the old locations after my data are migrated?” section). Our plan is to remove these temporary links as part of the next maintenance day (Feb 8, 2018). If your codes/scripts are still referencing the old locations, they will most certainly crash after that day.

We removed these temporary links on the testflight cluster (mimicking the environment you’d expect to see after the February maintenance) and strongly encourage you to try your codes/scripts there to ensure an uneventful transition. Some locally compiled codes with hardcoded references to the old paths may require recompilation if they fail to run on testflight.

As always, please contact pace-support@oit.gatech.edu if you need any assistance.

[Resolved] All PACE nodes temporarily offline due to storage trouble

Posted by on Saturday, 30 December, 2017

Update (12/31/2017, 10:15am): We have addressed the issue, and the majority of nodes have started running jobs again. As far as we can tell, this was caused by a network-related “event” internal to the system. We are working with the vendor to identify the exact root cause.

Original post: One of the primary storage systems (pace2) went offline today, potentially impacting running jobs that reference that system.

Our automated scripts offlined PACE nodes to prevent new jobs from starting. The nodes will be brought back online once the storage issues are addressed.

PACE team is currently investigating the problems and we will keep you updated.

We are sorry for any delays caused by limited staff availability over the holidays.

Systematic offlining of PACE nodes to address storage slowness

Posted by on Tuesday, 21 November, 2017

We identified a problem with the way some nodes are mounting our main (GPFS) storage server, causing slow storage performance. The fix requires restarting the storage services on affected nodes individually, when they are not running any jobs. For this reason, we started draining (offlining) all affected nodes and systematically bringing them back online as soon as their jobs are complete and the fix is applied.

This issue does not impact running jobs beyond the storage slowness, but you will notice offline nodes in your queues until we have addressed all affected nodes.

It’s safe to continue submitting jobs and there is no risk of data loss.

We are sorry for this inconvenience and thank you for your cooperation.

PACE clusters ready for research

Posted by on Saturday, 4 November, 2017

Our November 2017 maintenance period is now complete, far ahead of schedule. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and data are available. As usual, there are some straggling nodes that we will address over the coming days.

Our next maintenance period is scheduled for Thursday, February 8 through Saturday, February 10, 2018.

Storage
– Nearly a petabyte of data was migrated to the new DDN/GPFS storage device. While this provides a more performant, expandable, and supportable storage platform, it requires changes to path names. We have adjusted the symbolic links in home directories (e.g. ~/data) to point to the new locations; please continue to use these names wherever possible. In order to minimize disruption, we have also put a temporary redirection in place so that the old names continue to work. We intend to remove this redirection during our next maintenance period, and will proactively identify and assist users who are using the deprecated path names.
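If you would like to confirm where your links now point, a quick check on the headnode (output will vary by user) is:

# Show the targets of the data symlinks in your home directory; the paths
# printed on the right-hand side are the new locations.
ls -ld ~/data*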

Schedulers
– The nvidia-gpu and gpu-recent queues have been consolidated into a new force-gpu queue. Please use the new queue name going forward (see the sketch after this list). PACE staff will proactively identify and assist users still using the deprecated queue names.
– The semap-6 queue has been moved to an alternate scheduler server.  No user action is required.
– The Joe cluster has been moved into the shared partition. Its users now have access to idle cycles in the shared partition, and in turn offer their cluster’s idle cycles for use by others.
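For the queue consolidation above, the change in a submission script is a one-line edit; this is a sketch rather than a full script:

# Old (deprecated) queue names:
#   #PBS -q nvidia-gpu
#   #PBS -q gpu-recent
# New queue name:
#PBS -q force-gpu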

ITAR / NIST800-171 environment
– Planned tasks are complete; no user action is required.

Power and Network
– Planned tasks are complete; no user action is required.

PACE quarterly maintenance – (Nov 2-4, 2017)

Posted by on Monday, 23 October, 2017

 

Dear PACE users,

PACE clusters and systems will be taken offline at 6am on Thursday, Nov 2 through the end of Saturday (Nov 4). Jobs with long walltimes will be held by the scheduler to prevent them from getting killed when we power off the nodes. These jobs will be released as soon as the maintenance activities are complete.

Some of the planned improvements, the storage migrations in particular, require the attention of a large number of users. Please read on for more details and action items.

Storage (Requires user action)

PACE is retiring old NFS storage servers, which have been actively serving project directories for a large number of users. All of the data they contain will be consolidated into a new GPFS storage (pace2) purchased recently. GPFS is a high performance parallel filesystem, which offers improved reliability (and in many cases performance) compared to NFS.

Important: PACE will also start enforcing a limit of 2 million files/directories per user on this GPFS system, regardless of file size. We have identified the users who currently exceed this limit and will contact them separately to prevent interruptions to research.

Here’s a full list of storage locations that will be migrated to ‘pace2’:

pg1, pc5, pe11, pe14, pe15, pe3, pe5, pe9, pe10, pe12, pe4, pe6, pe8, pa1, pbi1, pcc1, pcee2, pmse1, psur1, pc4, pase1, pmart1, pchpro1, pska1, pbiobot1, pf2, pggate1, ptml1, pc6, py1, py2, pc2, pz2, pe1, pe7, pe13, pe2, pb2, pface1, pas1, pf1, pb1, hp3, pj1, pb3, pc1, pz1, ps1, pec1, pma1

In addition to these NFS shares, we will also migrate these two filesystems from our current GPFS system (pace1) to the new GPFS system (pace2), due to limited space availability:

bio-konstantinidis, bio-soojinyi

 

How can I tell if my project directory will be migrated?

Copy and run this command on the headnode:

find ~/data* -maxdepth 1 -type l -exec ls -ld {} \;

This command will return one or more lines similar to:

lrwxrwxrwx 1 root pace-admins 16 Jun 16 2015 /nv/hp16/username3/data -> /nv/pf2/username3
lrwxrwxrwx 1 root pace-admins 19 Jan 6 2017 /nv/hp16/username3/data2 -> /gpfs/pace1/project/pf1/username3

Please note the right-hand side of the arrow “->”. If the arrow points to a path starting with “/nv/…” followed by a storage name included in the list above, then your data will be migrated. In this example, the location linked as “data” will be migrated (/nv/pf2/username3), but “data2” will not (/gpfs/pace1/project/pf1/username3).

As an exception, all references to “bio-konstantinidis” and “bio-soojinyi” will be migrated, even though their paths start with “/gpfs” and not “/nv”. E.g.:

lrwxrwxrwx 1 root bio-konstantinidis 43 Oct 11 2015 /nv/hp1/username3/data3 -> /gpfs/pace1/project/bio-konstantinidis/username3

What do I do if my project storage is being migrated?

No action is needed for users who have been using the symbolic link names to access the storage (e.g. data, data2, etc.), because PACE will replace these links to point to the new locations.

If you have been referencing your storage using its absolute path (e.g. /nv/pf2/username), which is not recommended, then you need to replace all mentions of “/nv” with “/gpfs/pace1/project” in your codes, scripts, and job submissions. E.g., “/nv/pf2/username3” should be replaced with “/gpfs/pace1/project/pf2/username3”.

Users of bio-konstantinidis and bio-soojinyi only need to replace “pace1” with “pace2”. E.g., “/gpfs/pace1/project/bio-konstantinidis/username3” should be replaced with “/gpfs/pace2/project/bio-konstantinidis/username3”.
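If you prefer to update scripts in place, a sed one-liner per rule will do it. This is only a sketch: the filename and the pf2 share below are examples, so back up first and review the result before relying on it.

# Back up, then rewrite old NFS-style paths in a job script (example name).
cp myscript.pbs myscript.pbs.bak
sed -i 's|/nv/pf2/|/gpfs/pace1/project/pf2/|g' myscript.pbs
# For bio-konstantinidis and bio-soojinyi, only the GPFS system name changes:
sed -i 's|/gpfs/pace1/project/bio-konstantinidis/|/gpfs/pace2/project/bio-konstantinidis/|g' myscript.pbs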

NOTE: PACE strongly encourages all users to reference their project directories using their symbolic links (e.g. data, data2, …), rather than absolute paths, which are always subject to change. Doing so will minimize the user action needed when we make changes in the systems and configurations.

What if I don’t fix existing references to the old locations after my data are migrated?

The PACE team will replace the existing directories with links pointing to their new locations to minimize user impact. This way, scripts/codes that point to the old paths can continue to run without needing any changes. However, this temporary failsafe measure will only remain in place for approximately 3 more months (until the next maintenance day). We strongly encourage all users to check whether their data is being migrated, then fix their scripts/codes accordingly within this 3-month grace period. Please contact the PACE team if you need any assistance with this process.

The PACE team will also monitor jobs during this period and proactively reach out to users whose jobs are still using the old paths.

 

Schedulers (Requires some user action)

  • Consolidation of the nvidia-gpu and gpu-recent queues into a new queue named “force-gpu”: users of these queues will need to change the queue name to “force-gpu” in their submission scripts.
  • Clean up and improve PBSTools configurations and data
  • Migration of semap-6 queue to the dedicated-sched scheduler
  • [NEW] Migration of all joe queues to the shared-sched scheduler

ASDL / ITAR cluster (no user action needed)

These planned maintenance tasks are completely transparent to users:

  • Redistribute power connections to additional circuit in the rack
  • Replace CMOS batteries on compute nodes
  • Replace the motherboard on the file server, to use all the available memory slots

Power and network (no user action needed)

These planned maintenance tasks are completely transparent to users:

  • Update power distribution units on 2 racks
  • Move compute nodes to balance power utilization
  • Replace old, out of support switches
  • Update DNS appliances in Rich 116, 133 and BDCD
  • Increase redundancy to Infiniband connections between Rich 116 and 133

Campus preparedness and Hurricane Irma

Posted by on Friday, 8 September, 2017

Greetings PACE community,

As Hurricane Irma makes its way along the projected path through Florida and into Georgia, I’d like to let you know what PACE is doing to prepare.

OIT Operations will be closely monitoring the path of the storm and any impacts it might have on the functionality of the computer rooms in the Rich Computer Center and our backup facility on Marietta Street. In the event that either of these facilities loses power, they will enact emergency procedures and respond as best they can.

What does this mean for PACE?

The room where we keep the compute nodes has only a few minutes of battery-protected power, which is enough to ride through momentary glitches but not an extended outage. In the event of a power loss, compute nodes will power down and terminate whatever jobs are running. The rooms where we keep our servers, storage, and backups have additional generator power, which can keep them running longer, but this too is a finite resource. In the event of power loss, PACE will begin an orderly shutdown of servers and storage to reduce the chance of data corruption or loss.

The bottom line is that our priority will be to protect critical research data and to enable a successful resumption of research once power is restored.

Where to get further updates?

Our primary communications channels remain our mailing list, pace-availability@lists.gatech.edu, and the PACE blog (http://blog.pace.gatech.edu). However, substantial portions of the IT infrastructure required for these to operate are also located in campus data centers. Additionally, OIT employs a cloud-based service to publish status updates. In the event that our blog is unreachable, please visit https://status.gatech.edu.

GPFS problem (resolved)

Posted by on Saturday, 2 September, 2017

This was much ado about nothing.  Running jobs continued to execute normally through this event, and no data was at risk.  What did happen is that jobs that could potentially have started were delayed.

A longer explanation –

We have monitoring agents that prevent jobs from starting if they detect a potential problem with the system. The idea is to avoid starting a job if there’s a known reason it would crash. During our last maintenance period, we brought a new DDN storage system online and configured these agents to watch it for issues. It did develop an issue; the monitoring agents flagged it and took nodes offline so no new jobs would start. However, we have yet to put any production workloads on this new storage, so no running jobs were affected.

At the moment, we’re pushing out a change to the monitoring agents to ignore the new storage.  As this finishes rolling out, compute nodes will come online and resume normal processing.  We’re also working with DDN to address the issue on the new storage system.

GPFS Problem

Posted by on Friday, 1 September, 2017

We are actively debugging a GPFS storage problem on our systems that has unfortunately brought many queues offline. We do not yet fully know the cause or the solution, but we will post updates as soon as possible.

We apologize for the inconvenience and are actively working on a solution.