PACE – A Partnership for an Advanced Computing Environment

July 17, 2013

PACE maintenance – complete

Filed under: tech support — admin @ 7:21 am

We’ve finished.  Feel free to log in and compute.  Previously submitted jobs are running in the queues.  As always, if you see any odd issues, please send a note to pace-support@oit.gatech.edu.

We were able to complete our transition to the database-driven configuration and apply the Panasas code upgrade.  Some of you will see warning messages about your usage of the scratch space; please remember that this is a shared, and limited, resource.  The RHEL5 side of the FoRCE cluster was also retired and reincorporated into the RHEL6 side.

We were able to complete some of the network redundancy work, but it took substantially longer than planned and we didn’t get as far as we would have liked.  We’ll finish this work during future maintenance windows.

We spent a lot of time today trying to address the storage problems, but time was just too short to fully implement the fixes.  We were able to do some work on the storage behind the virtual machine infrastructure (which hosts the head/login nodes).  Over the coming days and weeks, we will work on a robust way to deploy these updates to our storage servers and come up with a more feasible implementation schedule.

Among the less time-consuming items, we also increased the amount of memory the InfiniBand cards are able to allocate.  This should help those of you with codes that send very large messages.  We also increased the size of the /nv/pz2 filesystem – for those of you on the Athena cluster, that filesystem is now nearly 150TB.  We found some InfiniBand cards with outdated firmware and brought those in line with what is in use elsewhere in PACE.  We also added a significant amount of capacity to one of our backup servers, added some redundant links to our InfiniBand fabric, and added more 10-gigabit ports for our growing server & storage infrastructure.
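
If you’d like to verify a couple of these changes from your own sessions, the commands below are one way to do it.  This is only a sketch: it assumes the node you are on has the standard OFED ibstat tool installed, and /nv/pz2 is the filesystem path mentioned above.

  # Show the size and free space of the enlarged Athena filesystem
  df -h /nv/pz2

  # Report the InfiniBand card's firmware version (requires the OFED ibstat tool)
  ibstat | grep -i 'firmware version'

  # Show the locked-memory limit available for InfiniBand buffer registration;
  # 'unlimited' is typical on compute nodes
  ulimit -l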

In all of this, we have been reminded that PACE has grown quite a lot over the last few years – from only a few thousand cores to upwards of 25,000.  As we’ve grown, it has become more difficult to complete our maintenance in four days a year.  Part of our post-mortem discussion will be about ways we can use our maintenance time more efficiently, and possibly about increasing the amount of scheduled downtime.  If you have thoughts along these lines, I’d really appreciate hearing from you.

Thanks,

Neil Bright

July 11, 2013

Filed under: tech support — admin @ 9:42 pm

Hi folks,

Just a quick reminder of our maintenance activities coming up on Tuesday of next week.  All PACE-managed clusters will be down for the day.  For further details, please see our July 3 blog post below.

Thanks!

Neil Bright

July 3, 2013

PACE maintenance day – July 16

Filed under: tech support — admin @ 11:51 pm

Dear PACE cluster users,

The time has come again for our quarterly maintenance day, and we would like to remind you that all systems will be powered off starting at 6:00am on Tuesday, July 16, and will be down for the entire day.

None of your jobs will be killed: the job scheduler knows about the planned downtime and will not start any job that would still be running when it begins. If possible, please check the walltimes of the jobs you will be submitting and adjust them so that they can complete before the maintenance day. Submitting jobs with longer walltimes is still OK, but those jobs will be held by the scheduler and released right after the maintenance day.
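
For example, a hypothetical submission a few days before the window might look like the sketch below.  This assumes a PBS/Torque-style qsub; my_job.pbs is a placeholder for your own submission script.

  # Request 72 hours of walltime so the job can finish before 6:00am on July 16.
  # A longer request is fine too; the scheduler will simply hold the job and
  # release it after maintenance.
  qsub -l walltime=72:00:00 my_job.pbs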

We have many tasks to complete; here are the highlights:

  1. transition to a new method of managing our configuration files – We’ve referred to this in the past as ‘database-based configuration makers’. We’ve been doing a lot of testing on this over the last few months and have things ready to go. I don’t expect this to cause any visible change to your experience; it just gives us a greater capability to manage more and more equipment.
  2. network redundancy – we’re beefing up our Ethernet network core for compute nodes. Again, this is not an item I expect to change your experience, just an improvement to the infrastructure.
  3. Panasas code upgrade – This work will complete the series of bug fixes from Panasas and allow us to reinstate the quotas on scratch space. We’ve been testing this code for many weeks and have not observed any detrimental behavior. This is potentially a visible change to you: we will reinstate the 10TB soft and 20TB hard quotas. If you are using more than 20TB of our 215TB scratch space, you will not be able to add new files or modify existing files in scratch (see the usage check after this list).
  4. decommissioning of the RHEL5 version of the FoRCE cluster – This will allow us to add 240 CPU cores to the RHEL6 side of the FoRCE cluster, pushing force-6 over 2,000 CPU cores. We’ve been winding this resource down for some time now; this just finishes it off. Users with access to FoRCE currently have access to both the RHEL5 and RHEL6 sides; access to RHEL6 via the force-6 head node will not change as part of this process.
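
If you would like to see where you stand against those scratch quotas before the 16th, a quick check like the one below should do it.  This is only a sketch: ~/scratch is a placeholder, so substitute the scratch path you normally use.

  # Summarize the total size of everything under your scratch directory
  # (~/scratch is a placeholder path)
  du -sh ~/scratch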

As always, please contact us via pace-support@oit.gatech.edu for any questions/concerns you may have.

July 1, 2013

Login Node Storage Server Problems

Filed under: Uncategorized — Semir Sarajlic @ 11:50 am

Last night (2013/06/30), one of the storage servers responsible for many of the cluster login nodes encountered major problems.
These issues are preventing users from logging in to the affected login nodes or using them at all.
The affected login nodes are:
cee
chemprot
cns
cygnus-6
force-6
force
math
mokeys
optimus
testflight-6

We are aware of the problem and we are working as quickly as possible to fix this.
Please let us know of any problems you are having that may be related to this.
We will keep you posted about our progress.
