Maintenance window Wednesday 2017-01-04 -- FINISHED
We start at 0900 hours.
This time we will:
- Upgrade Slurm and other system software on Milou, Fysast1 and Tintin
- Upgrade firmware on Milou and Fysast1
The firmware upgrade requires power cycling, so the Slurm queues are stopped. Queued jobs will start after the maintenance.
Login nodes on Fysast1, Milou, and Tintin will be rebooted once during the day (we will warn an hour ahead). Slurm commands, such as sbatch and jobinfo, may be unavailable at times.
We will not stop Slurm queues on Tintin, Irma and Bianca. Maintenance on Irma and Lupus will be done next week, January 11th.
This page will be updated during the maintenance, to keep you informed about our progress.
We plan to finish before evening (today, Wednesday).
Update at 1120 hours
Maintenance work continues.
Slurm has already been upgraded. Login nodes will probably be restarted at 1300 hours.
Update at 14:20 hours
Login nodes are upgraded and have restarted successfully.
Firmware upgrade continues.
Update at 17:00 hours
Most nodes on Milou and Fysast1 are successfully upgraded and back in production. The remaining nodes will be released later, when their upgrades are completed.
Update on Thursday at 1020 hours
We need to make a second change to the Slurm installation, meaning that Slurm commands may be unavailable at times. Jobs will keep running.
We now expect to finish the maintenance sometime this afternoon.
Update on Thursday at 1530 hours
We are still working on the Slurm upgrade and still expect to finish before evening.
Update on Thursday at 1640 hours
Slurm has been upgraded on Tintin, Milou and Fysast1. Nodes are still rebooting and are expected to be back in production within two hours.
The firmware upgrade failed on 4 (out of 26) chassis. This has been reported to the manufacturer for further troubleshooting. As a consequence, 32 Milou nodes are out of production until this is solved. For the remaining chassis, the firmware upgrade was successful.
Old System News
milou2 rebooted August 28
milou2 rebooted on Monday 2017-08-28 at 19:51
milou2 rebooted August 19
milou2 rebooted on Saturday 2017-08-19.
Intel MPI performance issues
Issues with X11 on milou (X11Forwarding) -- SOLVED
We have observed, and several users have reported, issues with running X11 applications on Milou. We are investigating.
milou2 and milou-b rebooted
The login nodes milou2.uppmax.uu.se and milou-b.uppmax.uu.se were rebooted at 15:00 today (29th of May) due to issues with the kernel NFS module.
Cooling stop at 17:00 hours on the 23rd of May -- CANCELLED
Issues with certain project volumes for milou/pica, 2017-05-15 and onwards
Some project volumes on pica are very heavily loaded and slow or next to unusable for interactive use. We are doing what we can to resolve this but cannot promise a set time for when things will behave normally again.
UPDATE: We have had some continuing issues because some nodes do not notice when the resources recover. We are working on this, but it may have caused disturbances such as failed jobs or missing output.
Support may be slow May 11th and 12th due to conference
The UPPMAX system group hosts the spring 'SONC' conference, where administrators from all SNIC centers meet to discuss how to improve our centers. With many UPPMAX administrators out of office during the conference (Thursday 11th and Friday 12th), support will likely be less responsive.
Slurm disturbance on milou 2017-05-10
Due to a misconfiguration active on a number of nodes around 12 AM today, some jobs launched on milou could not start.
If your jobs were affected, they will likely show up as completed, although with a very short run time (a few seconds).
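One way to scan for possibly affected jobs is the minimal Python sketch below. It is our suggestion rather than an official UPPMAX tool, and it assumes sacct is available on a login node; the 10-second threshold is a guess at "a few seconds".

    # Sketch: flag suspiciously short COMPLETED jobs from 2017-05-10.
    import subprocess

    out = subprocess.run(
        ["sacct", "-X", "-n", "-P",               # allocations only, no header, '|'-separated
         "-S", "2017-05-10", "-E", "2017-05-11",  # the affected day
         "--format=JobID,JobName,State,ElapsedRaw"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        jobid, name, state, elapsed = line.split("|")
        # ElapsedRaw is the run time in seconds; a handful of seconds is suspicious.
        if state == "COMPLETED" and int(elapsed) < 10:
            print(f"possible victim: {jobid} ({name}), ran for {elapsed}s")

If a job shows up here, verify that its expected output files exist before trusting the COMPLETED state.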
Disturbances in Slurm today Tuesday -- finished
Maintenance window Wednesday 2017-05-03 -- finished
Slurm problems on Rackham -- fixed
Intel license server not responding -- fixed
We have received reports that the Intel license server is not responding. We are investigating. This might manifest itself as hangs or freezes during compilation.
Problem "Invalid account or account/partition..." -- solved
We have identified a problem with the Slurm account database. If you were just added to a project, or just created a new one, you might get the following message when scheduling jobs: "Invalid account or account/partition...". It primarily affects Rackham and Milou.
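To check whether a new project has actually reached the Slurm account database, you can list your account associations. The sketch below is a hypothetical helper, not an UPPMAX tool; it only wraps the standard sacctmgr command.

    # Sketch: list the Slurm accounts the current user may submit to.
    import getpass
    import subprocess

    user = getpass.getuser()
    out = subprocess.run(
        ["sacctmgr", "-n", "-P", "show", "associations",
         f"user={user}", "format=Cluster,Account,User"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        cluster, account, username = line.split("|")
        print(f"{username}: account {account} on cluster {cluster}")

If the project you expect is missing from this list, sbatch will keep rejecting jobs with the message above until the database catches up.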
Problem with Slurm on Milou -- fixed
Interrupts in Slurm service on Rackham -- fixed
Bianca's storage system Castor has problems -- fixed
Resetting your password from the homepage is not working -- fixed
Resetting your password from this page is currently not working. If you need to reset your password, please contact firstname.lastname@example.org
Update 2017-04-18: This issue should now be fixed.
Funk-accounts and new certificates
Some of the shared funk-accounts used on Irma and Milou might stop working due to the IP address change.
Maintenance window Wednesday 2017-04-05 -- finished
Smog will be decommissioned on Wednesday 5th of April
Smog will be decommissioned on Wednesday, the 5th of April. As previously mentioned, the SNIC Cloud Team is currently working on bringing up a new cloud to replace Smog and join the other two regions in the SNIC Science Cloud project.
For questions, please contact email@example.com (and not the UPPMAX support queues).
Rackham2, one of Rackham's login nodes, ran into problems -- now fixed
Maintenance window for Bianca Wednesday 2017-03-22 -- finished
Problem with file permissions in certain projects
Poor performance using Intel MPI on Rackham
We have identified performance issues when using Intel MPI on Rackham. In some cases you see a 10x slowdown (or worse) using Intel MPI compared to Open MPI. We are investigating this issue and hope to have it solved soon. For now, please use Open MPI.
Fixed: "Project p123456 may not run jobs on this cluster (rackham)"
An issue exists on Rackham affecting projects of the form "p123456". These projects are not allowed to run because their monthly core allocation is incorrectly set to 0 hours. We are investigating why this happens.
Update 2017-03-10: The issue should now be fixed.
Rackham is now open for all users
All active Tintin projects (except Tintin-Fysast1; please see below) have been migrated to Rackham. All UPPMAX users should now have access to Rackham.
Rackham will soon be open for all users
Many Tintin users have missed that Rackham will replace Tintin. We are currently migrating all projects from Tintin to Rackham, and when this is done all users will get access to Rackham. We will announce this by email and on our homepage.
Maintenance window Wednesday 2017-03-01 -- finished
Today we decommission Tintin
The 1st of March 2017 is the day we decommission Tintin. It is being replaced by our new Rackham cluster, and all Tintin projects will be moved there.
Creation of new UPPMAX user accounts will be delayed
Delayed approval of Account Requests -- fixed
We have identified a problem with the UPPMAX account request process which unfortunately causes some delay before you can log in to UPPMAX. We hope to complete the registrations this week. You do not have to resubmit your account requests.
Problem sending in support tickets using firstname.lastname@example.org -- fixed
There is currently a problem sending in support tickets to email@example.com. We are investigating and hope to have it fixed soon.
Rackham is now available
We are happy to announce that UPPMAX's cluster Rackham is now available!
Downtime due to power outage
Milou, Tintin, and Fysast1 are back in production. Bianca is back in test production. We are still working on Smog.
Milou2 now back again
The degraded RAID is now fixed
Milou-f rebooted Tuesday afternoon
Lustre file system problem
Milou1 rebooted (Tuesday 14:00)
Totally unresponsive due to a problem with the Lustre file system (which will be decommissioned tomorrow).
Milou1 rebooted -- now with a limited number of inodes on /scratch (/tmp)
There is now a quota on the number of files in /scratch (/tmp): the maximum is 100000 per user.
If you need more, you have to use a compute node.
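As a rough self-check, the minimal sketch below counts the directory entries under a path, since each file, directory, and symlink consumes one inode. It is a hypothetical helper; the quota system's exact accounting may differ, and the quota is per user, so point it at your own files.

    # Sketch: count directory entries under a path and compare with the
    # per-user limit of 100000 files in /scratch (/tmp).
    import os
    import sys

    def count_entries(path: str) -> int:
        total = 1  # the top-level directory itself
        for _, dirs, files in os.walk(path, onerror=lambda err: None):
            total += len(dirs) + len(files)
        return total

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: count_entries.py <directory>")
        n = count_entries(sys.argv[1])
        print(f"{n} entries under {sys.argv[1]} (per-user limit: 100000)")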
Gulo (including glob directory) decommissioned January 18
Milou2 down for reinstallation 13:50 (now waiting for spare parts)
Milou2 hasn't worked well for a while. We will give it a fresh restart.
Milou1 rebooted Thursday at 11:00
Milou1 rebooted Thursday at 11:00 due to problems with the Lustre file system.
Fysast1 down Wednesday
Fysast1 down Wednesday before lunch.
One power supply broke, and the fuses for half the cluster were blown.
Milou1 rebooted Wednesday at 11:00
Milou1 rebooted Wednesday at 11:00 due to problems with the Lustre file system.
Maintenance window on Irma Wednesday 2017-01-11 -- FINISHED
Maintenance window on Mosler/topolino Wednesday Jan 11 -- FINISHED
We have a maintenance window coming up on January 11, starting at 9:00. Due to physical work, we need to shut down the system during this maintenance window, so jobs will not run.
We will also likely be required to rebuild virtual nodes and will probably lose information about queued jobs.
Update 21:10: Maintenance is now finished and the system should be available again.
Poor performance on Milou and Tintin
Maintenance window Wednesday 2017-01-04 -- FINISHED
Milou2 rebooted Friday morning
Milou2 rebooted at 06:01 due to a problem with the Lustre filesystem.
Reduced staff availability over the coming holidays