Slurm memory handling incorrect for fat nodes (closed)

It seems Slurm (our job scheduler) has changed its behaviour: allocating a fat node via the -C fat or -C mem256GB features no longer gives your job access to the extra memory available on the node it runs on.
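
For reference, an affected fat-node request looks roughly like the sketch below (the partition, time limit, and program name are placeholders based on the -p node form mentioned later in this ticket, not site recommendations):

    #!/bin/bash
    #SBATCH -p node        # whole-node partition (assumed)
    #SBATCH -C fat         # request a fat (large-memory) node
    #SBATCH -t 1:00:00     # placeholder time limit
    # Under the changed behaviour, the job lands on a fat node but
    # cannot use the extra memory beyond the default limit.
    ./my_program           # placeholder executable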

We’re investigating and will implement a fix as soon as possible.

Final ticket report

We have made some changes to our configuration, and the traditional way of requesting a fat node (-C mem256GB -p node) appears to work once more.
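
As an example, a job script using the traditional request could look like this minimal sketch (the time limit and program name are placeholders):

    #!/bin/bash
    #SBATCH -p node          # whole-node partition
    #SBATCH -C mem256GB      # request a 256 GB node
    #SBATCH -t 1:00:00       # placeholder time limit
    ./my_program             # placeholder executable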

Update 2018-06-25 16:16

Some testing suggests that jobs run properly if you explicitly request the correct amount of memory (e.g. --mem=250G).
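
For example, submitting with an explicit memory request on the command line (the job script name here is a placeholder):

    sbatch -p node -C mem256GB --mem=250G my_job.sh

Requesting slightly less than the node's nominal 256 GB leaves headroom for memory reserved for the operating system, which is presumably why a value like 250G works where the full amount might not.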