Archive for August, 2013

COMSOL Workshop in Atlanta (9/10)

Posted by on Monday, 26 August, 2013

Here’s a note from Siva Hariharan of COMSOL, Inc., which we thought you might be interested in:

You’re invited to a free workshop focusing on the simulation capabilities of COMSOL Multiphysics. Two identical workshops will take place on Tuesday, September 10th in Atlanta, GA. There will be one AM session and one PM session. All attendees will receive a free two-week trial of the software.

During the workshop you will:

– Learn the fundamental modeling steps in COMSOL Multiphysics

– Experience a live multiphysics simulation example

– Set up and solve a simulation through a hands-on exercise

– Learn about the capabilities of COMSOL within your application area

 

Programs:

AM Session

9:30am – 10:45am An Overview of the Software

10:45am – 11:00am Coffee Break

11:00am – 12:30pm Hands-on Tutorial

 

PM Session

1:30pm – 2:45pm An Overview of the Software

2:45pm – 3:00pm Coffee Break

3:00pm – 4:30pm Hands-on Tutorial

 

Event details and registration: http://comsol.com/c/tt1

 

Seating is limited, so advance registration is recommended. 

Feel free to contact me with any questions.

 

Best regards,

Siva Hariharan

COMSOL, Inc.
1 New England Executive Park
Suite 350
Burlington, MA 01803
781-273-3322
siva@comsol.com

PB1 bad news, good news

Posted by on Thursday, 8 August, 2013

This is not a repeat from yesterday. Well, it is, just a different server :-)

UPDATE 2013-08-08 2:23pm

/pb1 is now online, and should not fall over under heavy loads any more.

Have at it folks. Sorry it has taken this long to get to the final
resolution of this problem.

---- Earlier Post ----
Bad news:

If you haven’t been able to tell, the /pb1 filesystem has failed again.

Good news:

We’ve been working on a new OS load for all storage boxes, which we
had hoped to roll out on the last maintenance day (July 17), but we
ran out of time to verify whether it:

  • was deployable
  • resolved the actual issue

Memo (Mehmet Belgin) greatly assisted me in testing this issue by finding some of the cases we’ve known to cause failures and replicating them against our test installs. Many of those loads broke under the old image, confirming our suspicions, and also confirmed the new image: it will take heavy loads a LOT better than before.

With verification done, we had been planning to switch all Solaris-based
storage to this new image by the end of the next maintenance day (October 15).

However, given the need, this will be going onto the PB1 fileserver in
just a little bit. We’ve verified the process for doing this without
impacting any data stored on the server, so we anticipate having this
fileserver back up and running at 2:30PM, and the bugs which have been
causing this problem since April will have been removed.

I’ll follow up with progress messages.

PC1 bad news, good news

Posted by on Wednesday, 7 August, 2013

UPDATE: 2013-08-07, 13:34 –

BEST NEWS OF ALL: /pc1 is now online, and should not fall over under heavy loads anymore.

Have at it folks. Sorry it has taken this long to get to the final
resolution of this problem.

Earlier Status:
Bad news:

If you haven’t been able to tell, the /pc1 filesystem has failed again.

Good news:

We’ve been working on a new OS load for all storage boxes, which we
had hoped to roll out on the last maintenance day (July 17), but we
ran out of time to verify whether it:

  • was deployable
  • resolved the actual issue

Memo (Mehmet Belgin) greatly assisted me in testing this issue by finding some of the cases we’ve known to cause failures and replicating them against our test installs. Many of those loads broke under the old image, confirming our suspicions, and also confirmed the new image: it will take heavy loads a LOT better than before.

With verification done, we had been planning to switch all Solaris-based
storage to this new image by the end of the next maintenance day (October 15).

However, given the need, this will be going onto the PC1 fileserver in
just a little bit. We’ve verified the process for doing this without
impacting any data stored on the server, so we anticipate having this
fileserver back up and running at 1:30pm, and the bugs which have been
causing this problem since April will have been removed.

I’ll follow up with progress messages.

Head node problems

Posted by on Friday, 2 August, 2013

Head nodes for many PACE clusters are currently down due to problems with our virtual machines.  This should not affect running jobs, but users are unable to log in.  PACE staff are actively working to restore services as soon as possible.

The head nodes affected are:

  • apurimac-6 – BACK ONLINE 2013/08/03 00:30
  • aryabhata-6 – BACK ONLINE 2013/08/03 00:30
  • ase1-6 – BACK ONLINE 2013/08/03 03:10
  • athena – BACK ONLINE 2013/08/03 04:45
  • atlantis – BACK ONLINE 2013/08/03 04:45
  • atlas-6 – BACK ONLINE 2013/08/03 00:40
  • cee – BACK ONLINE 2013/08/03 01:40
  • chemprot – BACK ONLINE 2013/08/03 01:40
  • complexity – BACK ONLINE 2013/08/03 01:40
  • critcel – BACK ONLINE 2013/08/03 02:00
  • ece – BACK ONLINE 2013/08/03 02:00
  • emory-6 – BACK ONLINE 2013/08/03 02:20
  • faceoff – BACK ONLINE 2013/08/03 03:10
  • granulous – BACK ONLINE 2013/08/03 03:10
  • isabella – BACK ONLINE 2013/08/03 03:10
  • kian – BACK ONLINE 2013/08/03 03:10
  • math – BACK ONLINE 2013/08/03 03:10
  • megatron – BACK ONLINE 2013/08/03 03:10
  • microcluster – BACK ONLINE 2013/08/03 03:10
  • optimus-6 – BACK ONLINE 2013/08/03 00:30
  • testflight-6 – BACK ONLINE 2013/08/03 03:10
  • uranus-6 – BACK ONLINE 2013/08/03 00:30

The following nodes will likely generate SSH key errors upon
connection, as the key-saving processes had not run on them. Please
edit your ~/.ssh/known_hosts file (Linux/Mac/Unix), remove any host
entries with these names, and accept and save the new keys on your
next connection.

  • ase1-6
  • chemprot
  • faceoff
  • kian
  • microcluster
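If you have OpenSSH, one way to clear the stale entries without hand-editing the file is `ssh-keygen -R`, which removes matching entries from known_hosts and leaves a backup in known_hosts.old. A sketch (note: if you connect by a fully-qualified name, remove that entry as well; the loop below assumes the short names shown above):

```shell
#!/bin/sh
# Remove stale host keys for the affected head nodes (OpenSSH).
# ssh-keygen -R deletes matching entries from the known_hosts file
# and keeps a backup copy in known_hosts.old.
KNOWN_HOSTS="$HOME/.ssh/known_hosts"
if [ -f "$KNOWN_HOSTS" ]; then
    for host in ase1-6 chemprot faceoff kian microcluster; do
        ssh-keygen -R "$host" -f "$KNOWN_HOSTS"
    done
fi
```

The new keys will be offered and saved automatically the next time you connect.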

Additionally, the following user-facing web services are also offline:

  • galaxy