Rackham is now available
We are happy to announce that UPPMAX's cluster Rackham is now available!
Rackham is now available for all local projects with names starting with "snic2017", and for all new course projects.
On the first of March we will decommission Tintin and begin moving all Tintin projects to Rackham. The migration should be finished within a few days.
Rackham consists of four login nodes and 304 compute nodes. Each compute node contains two 10-core Intel Xeon CPUs together with 128GB ("thin") or 256GB ("fat") of memory. Your project data will be stored on Crex, Rackham's storage system, currently capable of storing 1PB of data. Crex is a high-performance file storage system from DDN that uses the Lustre filesystem.
If you are used to Tintin, we ask you to pay attention to the following:
* More nodes!
Rackham has 304 nodes (with more on their way!), while Tintin in the end had only 150 nodes. Do not, however, assume that Rackham's nodes are identical to Tintin's; they are not. You will find that fewer nodes are needed on Rackham to perform the same work, and you will need to adjust your job scripts accordingly.
* More cores!
Rackham has 20 cores per node, a 25% increase over Tintin's 16 cores per node. Remember that when scheduling your node jobs! For the tech-interested: each Rackham CPU is an Intel Xeon E5-2630 v4 running at 2.2GHz, with 25MB of shared cache and a maximum turbo frequency of 3.1GHz. If the previous sentence means nothing to you, don't worry -- the only thing you need to know is that your core jobs will finish much faster thanks to the newer generation of CPUs.
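Since a node job is billed for every core on the node, the 16-to-20 core change also affects your project's core-hour accounting. A minimal sketch of the arithmetic, assuming the usual SNIC convention that one node-hour is billed as cores-per-node core hours (the job size and duration below are made up):

```shell
# Illustrative: core hours billed for a hypothetical 2-node, 24-hour job
# on Tintin (16 cores/node) vs. Rackham (20 cores/node).
nodes=2
hours=24
echo "Tintin:  $((nodes * 16 * hours)) core hours"
echo "Rackham: $((nodes * 20 * hours)) core hours"
```

The flip side is that the faster cores should finish the same work in fewer hours, so total billing typically goes down, not up.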
Note that if you have built your own applications tailored for Tintin's AMD Bulldozer CPUs, you will need to recompile on Rackham to take advantage of the Intel CPUs. Tip: try compiling with the Intel compilers and tools from "module load intel" and you will likely see a jump in performance. Remember: faster code means less compute time and fewer billed core hours for your project.
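Such a recompile might look like the sketch below. The module name comes from the tip above; the source file names are placeholders, and `-xHost` simply tells the Intel compiler to optimize for the CPU it is building on:

```shell
# Illustrative recompile on Rackham using the Intel toolchain.
module load intel                          # Intel compilers and tools (per the tip above)
icc   -O2 -xHost mycode.c   -o mycode     # C source (file name is a placeholder)
ifort -O2 -xHost mysolver.f90 -o mysolver # Fortran source (file name is a placeholder)
# Binaries built for Tintin's AMD CPUs should be rebuilt like this,
# not copied over, to get code tuned for the new Intel CPUs.
```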
* More memory!
Each node comes with 128GB of memory (6.4GB per core) vs. Tintin's 64GB (4GB per core). For the most memory-intensive applications you may also request up to 32 fat nodes, each containing 256GB of memory.
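A fat node is typically requested with a Slurm feature constraint. The sketch below assumes Rackham follows the common UPPMAX convention of a `fat` feature tag and a whole-node partition named `node`; the project ID and program name are placeholders, so check the cluster documentation for the exact flags:

```shell
#!/bin/bash
# Illustrative batch script requesting one 256GB fat node on Rackham.
#SBATCH -A snic2017-x-yyy      # your project ID (placeholder)
#SBATCH -p node                # whole-node partition (assumed name)
#SBATCH -N 1                   # one node
#SBATCH -C fat                 # assumed feature name for the 256GB nodes
#SBATCH -t 02:00:00            # two-hour time limit
./my_memory_hungry_app         # placeholder program
```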
The biggest differences you will find having your project directory on Crex instead of Pica are:
* No .snapshot directory. The .snapshot directory previously found inside every directory of your project is no longer supported (for your home directory, .snapshot is still available). If you have lost a file, contact email@example.com and we will retrieve it from the backups.
* Smaller initial storage for your project data. The default size of the project and nobackup areas will be 128GB in total. It will be possible to apply for more storage if needed.
* We no longer support Webexport.
For Fysast1, Milou, and Tintin, UPPMAX provides a webexport service. That service is based on storage space on Pica, which will not be available on Rackham. Rackham has no space set aside for webexport, so the service will not be provided there.
Lastly, how do you get access to Rackham? If you already have a project on Tintin, UPPMAX will migrate it to Rackham at the beginning of March.
If you don't have a Tintin project and are interested in working on Rackham and Crex, you may apply for a SNIC-project on https://supr.snic.se/round/2017smalluppmax/.
* A note on Software
After logging in, you can list and search the currently installed software with the module system. As on Milou, you can search for modules with the "module spider" command:
module spider name-of-software
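A typical session might look like the following; the module name and version are purely illustrative, so check the "module spider" output for what is actually installed:

```shell
module spider gromacs        # search: which versions exist and what they require
module load gromacs/2016.1   # load a specific version (name/version assumed)
module list                  # confirm which modules are now loaded
```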
The list of available software will be updated in the coming weeks. At this time we have installed most of the compilers (icc, mpicc, gcc, gfortran and javac), interpreters (Python, Perl, R) and applications (MATLAB, GAUSSIAN, COMSOL, RStudio). OpenFOAM, VASP and GROMACS are scheduled for installation and will soon be available. If you are missing software and are unable to install it yourself, you may ask for support at firstname.lastname@example.org.
We look forward to hearing your thoughts and feedback on Rackham!
Singularity is available
Urgent kernel upgrade -- FINISHED
Today we are performing an urgent kernel upgrade on Milou, Fysast1, Rackham, Irma, and Bianca. Login nodes will be restarted during the day. No running jobs will be stopped and no queues will be paused. We will post progress updates here in System News during the day.
UPDATE 16:00 - Update completed.
Intelmpi performance issues
Bianca graphical login now working
Graphical login uses ThinLinc Web Access, not X forwarding.
Bianca's storage system Castor has problems -- FIXED
Maintenance window Wednesday 2017-06-07 -- FINISHED
Issues with X11 on milou (X11Forwarding) -- SOLVED
Several users have reported, and we have observed, issues with running X11 applications on Milou. We are investigating.
milou2 and milou-b rebooted
The login nodes milou2.uppmax.uu.se and milou-b.uppmax.uu.se were rebooted 15:00 today (29th of May) due to some issues with the kernel NFS module.
Cooling stop at 17.00 hours the 23rd of May -- CANCELLED
Issues with certain project volumes for milou/pica 20170515 and onwards.
Some project volumes on Pica are very heavily loaded and slow or next to unusable for interactive use. We are doing what we can to resolve this, but cannot promise a set time for when things will return to normal.
UPDATE: We have had continuing issues because some nodes do not notice when the resources recover. We are working on this, but it may have caused disturbances such as failed jobs or missing output.
Support may be slow May 11th and 12th due to conference
The UPPMAX system group is hosting the spring 'SONC' conference, where administrators from all SNIC centers meet to discuss how to improve our centers. With many UPPMAX administrators out of the office during the conference (Thursday the 11th and Friday the 12th), support will likely be less responsive.
slurm disturbance on milou 2017-05-10
Due to a misconfiguration active on a number of nodes around 12AM today, some jobs launched on Milou could not start.
If your jobs were affected, they will likely show up as completed, but with a very short run time (a few seconds).
Disturbances in Slurm today Tuesday -- finished
Maintenance window Wednesday 2017-05-03 -- finished
Slurm problems on Rackham -- fixed
Intel license server not responding -- fixed
We have received reports that the Intel license server is not responding. We are investigating. The problem may manifest as hangs or freezes during compilation.
Problem "Invalid account or account/partition..." -- solved
We have identified a problem with the Slurm account database. If you were just added to a project, or just created a new one, you might get the message "Invalid account or account/partition..." when scheduling jobs. This primarily affects Rackham and Milou.
Problem with Slurm on Milou -- fixed
Interrupts in Slurm service on Rackham -- fixed
Bianca's storage system Castor has problems -- fixed
Resetting your password from the homepage is not working -- fixed
Resetting your password from this page is currently not working. If you need to reset your password please contact email@example.com
Update 2017-04-18: This issue should now be fixed.
Funk-accounts and new certificates
Some of the shared funk-accounts used on Irma and Milou might stop working due to the IP-address change.
Maintenance window Wednesday 2017-04-05 -- finished
Smog will be decommissioned on Wednesday 5th of April
Smog will be decommissioned on Wednesday the 5th of April. As previously mentioned, the SNIC Cloud Team is currently working on bringing up a new cloud to replace Smog and join the other two regions in the SNIC Science Cloud project.
For questions, please contact firstname.lastname@example.org (and not the UPPMAX support queues).
Rackham2, one of Rackham's login nodes, got into problems -- now fixed
Maintenance window for Bianca Wednesday 2017-03-22 -- finished