During the November downtime, the BioHPC team added 32 new compute nodes to the cluster. These nodes (Nucleus 126-157) are in a new 256GBv1 partition, as they have an improved specification versus the previous 256GB nodes.
Each node in the 256GBv1 partition has 56 logical cores (28 physical cores), using new Xeon v4 CPUs. This allows greater parallelization than the older 256GB nodes, which had 48 logical cores, so you can expect increased speed when running well-parallelized jobs.
To run a job on the new nodes, make sure to select the '256GBv1' partition from the portal web job submission page, or specify '256GBv1' as the partition in your SLURM job script.
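For script-based submission, a minimal SLURM job script targeting the new partition might look like the following. The job name, time limit, and the program being run (`my_parallel_program`) are placeholders; adjust them for your own work. The 56 in `--cpus-per-task` matches the logical core count of the 256GBv1 nodes.

```shell
#!/bin/bash
#SBATCH --job-name=my_job          # placeholder job name
#SBATCH --partition=256GBv1        # run on the new 256GBv1 nodes
#SBATCH --nodes=1                  # a single node
#SBATCH --cpus-per-task=56         # all 56 logical cores on the node
#SBATCH --time=0-12:00:00          # example time limit: 12 hours

# Replace with your actual command
./my_parallel_program
```

Submit the script with `sbatch`, e.g. `sbatch my_job.sh`, and check its status with `squeue -u $USER`.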