More free compute on Sherlock!
We’re thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade!
With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB...
ClusterShell on Sherlock
Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running...
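The pattern behind tools like ClusterShell's clush is a simple fan-out/gather: run one command per node and collect the output keyed by node name. A minimal stand-in sketch is below; the node names are invented, and the remote command is replaced by a local echo so it runs anywhere, where a real version would wrap ssh or use ClusterShell's Python API.

```python
# Stand-in sketch of the fan-out/gather pattern used by clush-like tools.
# "Running on a node" is simulated with a local echo; a real version
# would run e.g. ["ssh", node, "uptime"] instead.
import subprocess
from concurrent.futures import ThreadPoolExecutor

nodes = ["sh03-01n01", "sh03-01n02", "sh03-01n03"]  # invented node names

def run_on(node):
    # Real version: subprocess.run(["ssh", node, "uptime"], ...)
    out = subprocess.run(["echo", f"{node}: up"], capture_output=True, text=True)
    return node, out.stdout.strip()

# Fan out one worker per node, then gather results into a dict.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(run_on, nodes))

print(results["sh03-01n01"])  # → sh03-01n01: up
```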
Job #1, again!
This is not the first time; we’ve been through this already (not so long ago, actually), but today the Slurm job id counter was reset and went from job #67043327 back to job #1.
Keep up to date with software updates
To help users stay on top of software changes on Sherlock, we’ve recently introduced a new software updates RSS feed. It’s available from the Sherlock software list page, and you can directly add it to your RSS reader of choice. And if...
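Since RSS is plain XML, such a feed can also be polled from a script with only the Python standard library. The sketch below parses an invented sample feed (not the actual Sherlock feed content); a real reader would fetch the feed URL with urllib.request first.

```python
# Minimal sketch of reading item titles from an RSS 2.0 feed with the
# standard library only. The XML below is an invented sample; a real
# reader would download the Sherlock software updates feed instead.
import xml.etree.ElementTree as ET

sample_feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Software updates</title>
    <item><title>gcc 12.1.0 added</title></item>
    <item><title>python 3.10.5 updated</title></item>
  </channel>
</rss>"""

root = ET.fromstring(sample_feed)
# Each <item> in the channel is one update announcement.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # → ['gcc 12.1.0 added', 'python 3.10.5 updated']
```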
A new interactive step in Slurm
A new version of the sh_dev tool has been released that leverages a recently-added Slurm feature. Slurm 20.11 introduced a new “interactive step”, designed to be used with salloc to automatically launch a terminal on an allocated compute...
Your Sherlock prompt just got a little smarter
Have you ever felt confused when running things on Sherlock and wondered if your current shell was part of a job? And if so, which one? Well, maybe you noticed it already, but we’ve deployed a small improvement to the Sherlock shell prompt...
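The underlying check is simple: Slurm exports SLURM_JOB_ID in the environment of shells that are part of a job, so a prompt hook only needs to look it up. A small sketch of that logic, with prompt_suffix as a hypothetical helper rather than the actual Sherlock implementation:

```python
# Sketch of the logic behind a job-aware prompt: Slurm sets SLURM_JOB_ID
# inside job allocations, so its presence tells the prompt which job
# (if any) the current shell belongs to. prompt_suffix is a hypothetical
# helper, not the actual Sherlock prompt code.
import os

def prompt_suffix(env=os.environ):
    """Return a short job tag to append to the prompt, or '' outside jobs."""
    job_id = env.get("SLURM_JOB_ID")
    return f"[job {job_id}]" if job_id else ""

print(prompt_suffix({"SLURM_JOB_ID": "67043327"}))  # → [job 67043327]
```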
Tracking NFS problems down to the SFP level
When NFS problems turn out to be... not NFS problems at all.
Sherlock facts
Ever wondered how many compute nodes Sherlock is made of? Or how many users it has? Or how many InfiniBand cables link it all together? Well, wonder no more: head to the Sherlock facts page and see for yourself! > hint: there are...
New GPU options in the Sherlock catalog
Today, we're introducing the latest generation of GPU accelerators in the Sherlock catalog: the NVIDIA A100 Tensor Core GPU. Each A100 GPU features 9.7 TFlops of double-precision (FP64) performance, up to 312 TFlops for deep-learning...