Sherlock changelog

Sherlock goes full flash

by Stéphane Thiell & Kilian Cavalotti, Research Computing Team
Data
Hardware
Improvement
What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! Fir, Sherlock’s scratch file system, has just undergone a major…

Final hours announced for the June 2023 SRCF downtime

by Kilian Cavalotti, Technical Lead & Architect, HPC
Maintenance
Announce
As previously announced, the Stanford Research Computing Facility (SRCF), where Sherlock is hosted, will be powered off during the last week of June, in order to safely bring up power to the new SRCF2 datacenter. Sherlock will not be…

Instant lightweight GPU instances are now available

by Kilian Cavalotti, Technical Lead & Architect, HPC
New
Hardware
We know that getting access to GPUs on Sherlock can be difficult and feel a little frustrating at times, which is why we are excited to announce the immediate availability of our new instant lightweight GPU instances!
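
The full post covers how these instances actually work. Purely as an illustration of requesting a small interactive GPU session with standard Slurm options, a minimal sketch could look like this (the partition name, resource limits, and time limit are assumptions, not the actual lightweight-instance mechanics):

    # Request a small interactive GPU session with standard Slurm
    # options. Partition name and limits below are illustrative
    # assumptions, not Sherlock's actual instance configuration.
    srun --partition=gpu --gpus=1 --cpus-per-task=4 --mem=8GB \
         --time=00:30:00 --pty bash

    # Once the shell opens on the compute node, check GPU visibility:
    nvidia-smi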

A new tool to help optimize job resource requirements

by Kilian Cavalotti, Technical Lead & Architect, HPC
Documentation
Scheduler
Improvement
It’s not always easy to determine the right amount of resources to request for a computing job. Making sure that the application will have enough resources to run properly, but avoiding over-requests that would make the jobs spend too much…
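
The new tool itself is described in the full post. Independently of it, standard Slurm accounting commands already give a rough picture of requested versus consumed resources for a finished job; a minimal sketch, using a placeholder job ID:

    # Compare requested vs. actually consumed resources for a
    # completed job (12345678 is a placeholder job ID):
    sacct -j 12345678 \
          --format=JobID,ReqCPUS,TotalCPU,Elapsed,ReqMem,MaxRSS

    # If the seff contrib tool is installed, it summarizes CPU and
    # memory efficiency directly:
    seff 12345678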

SRCF is expanding

by Kilian Cavalotti, Technical Lead & Architect, HPC
Maintenance
In order to bring up a new building that will increase data center capacity, a full SRCF power shutdown is planned for late June 2023. It’s expected to last about a week, and Sherlock will be unavailable during that time.

ClusterShell on Sherlock

by Kilian Cavalotti, Technical Lead & Architect, HPC
Software
New
Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running…
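
As a taste of what that looks like in practice, here is a minimal sketch that fans a command out to all the nodes of a running job. The job ID is a placeholder, and Sherlock's ClusterShell setup may offer more direct node-group bindings than the squeue expansion used here:

    # Grab the job's nodelist in folded form (e.g. "sh03-01n[01-04]"),
    # which clush parses natively; 12345678 is a placeholder job ID.
    nodes=$(squeue -h -j 12345678 -o '%N')

    # Run a command on every node of the job and gather identical
    # outputs together (-b) for readability:
    clush -w "$nodes" -b uptime

    # Spot-check per-node memory usage across the whole job:
    clush -w "$nodes" -b 'free -g | grep Mem'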

Job #1, again!

by Kilian Cavalotti, Technical Lead & Architect, HPC
Event
Scheduler
This is not the first time; we’ve been through this already (not so long ago, actually). But today, the Slurm job ID counter was reset and went from job #67043327 back to job #1.

A new interactive step in Slurm

by Kilian Cavalotti, Technical Lead & Architect, HPC
Improvement
Scheduler
A new version of the sh_dev tool has been released that leverages a recently added Slurm feature: Slurm 20.11 introduced a new “interactive step”, designed to be used with salloc to automatically launch a terminal on an allocated compute…
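
For reference, the underlying mechanism relies on two slurm.conf settings added in Slurm 20.11. The values shown in the comments below mirror Slurm's documented defaults, not necessarily Sherlock's actual configuration:

    # Cluster-side prerequisite (slurm.conf), per Slurm's documented
    # defaults, not necessarily Sherlock's actual settings:
    #   LaunchParameters=use_interactive_step
    #   InteractiveStepOptions="--interactive --preserve-env --pty $SHELL"

    # User side: with the above in place, a plain salloc drops you
    # into a shell on the first allocated compute node, with no
    # explicit srun needed:
    salloc --cpus-per-task=2 --time=01:00:00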

Tracking NFS problems down to the SFP level

by Kilian Cavalotti
Blog
Data
Hardware
When NFS problems turn out to be... not NFS problems at all.
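
The full post walks through the actual diagnosis. As general background on what "SFP level" means here, the optical transceiver diagnostics of a network interface can be dumped with ethtool (the interface name is a placeholder):

    # Dump SFP/SFP+ module diagnostics (optical TX/RX power,
    # temperature, alarm thresholds); eth0 is a placeholder interface.
    ethtool -m eth0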