Sherlock changelog

Introducing Boltz-1 on Sherlock

by Kilian Cavalotti, Technical Lead & Architect, HPC
Software
We're pleased to announce the availability of Boltz-1, a new open-source AI model for molecular interactions, recently released by MIT.
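
For reference, the upstream Boltz-1 release ships a `boltz` command-line tool, so a prediction can be scripted like any other batch step. Here's a minimal, hedged sketch driving it from Python; the input file name is a hypothetical placeholder, and we assume the boltz package is already on your path (e.g. via a Sherlock module):

```python
# Minimal sketch: launch a Boltz-1 structure prediction by calling the
# upstream "boltz" CLI. Assumes boltz is already available in the
# environment; "complex.yaml" is a hypothetical input file describing
# the molecules to model.
import subprocess

subprocess.run(["boltz", "predict", "complex.yaml"], check=True)
```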

Sherlock 4.0: a new cluster generation

by Kilian Cavalotti, Technical Lead & Architect, HPC
New
Announce
Hardware
We are thrilled to announce that Sherlock 4.0, the fourth generation of Stanford's High-Performance Computing cluster, is now live! This major upgrade represents a significant leap forward in our computing capabilities, offering researchers…

Storage quota units change: TB to TiB

by Kilian Cavalotti, Technical Lead & Architect, HPC
Improvement
Data
Following in Oak's footsteps, we're excited to announce that Sherlock is adopting a new unit of measure for file system quotas. Starting today, we're transitioning from terabytes (TB) to tebibytes (TiB) for all storage allocations on…
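
For context, terabytes are decimal (1 TB = 10^12 bytes) while tebibytes are binary (1 TiB = 2^40 bytes), so the same quota expressed in TiB comes out to a slightly smaller number. A quick arithmetic check in Python (the 20 TB figure is purely illustrative):

```python
# Decimal terabytes vs. binary tebibytes, both in bytes.
TB = 10**12   # 1 TB  = 10^12 bytes (SI, decimal)
TiB = 2**40   # 1 TiB = 2^40  bytes (IEC, binary)

print(f"1 TiB = {TiB / TB:.4f} TB")        # 1 TiB = 1.0995 TB
print(f"20 TB = {20 * TB / TiB:.2f} TiB")  # 20 TB = 18.19 TiB
```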

Sherlock 4.0 is coming!

by Kilian Cavalotti, Technical Lead & Architect, HPC
New
Hardware
We are thrilled to announce that the next generation of Stanford's High-Performance Computing cluster is just around the corner. Mark your calendars for August 29, as we prepare to unveil Sherlock 4.0! Building on the success of previous…

Sherlock goes full flash

by Stéphane Thiell & Kilian Cavalotti, Research Computing Team
Data
Hardware
Improvement
What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! Fir, Sherlock's scratch file system, has just undergone a major…

Instant lightweight GPU instances are now available

by Kilian Cavalotti, Technical Lead & Architect, HPC
New
Hardware
We know that getting access to GPUs on Sherlock can be difficult and feel a little frustrating at times, which is why we are excited to announce the immediate availability of our new instant lightweight GPU instances!
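
Since Sherlock schedules everything through Slurm, grabbing one of these instances should look like any other interactive GPU allocation. A hedged sketch, from Python; the partition name "gpu-instant" is a hypothetical placeholder (check `sinfo` or the Sherlock documentation for the actual name):

```python
# Hedged sketch: open an interactive shell on a lightweight GPU instance
# via Slurm. "gpu-instant" is a hypothetical partition name, not the
# actual Sherlock partition for this feature.
import subprocess

subprocess.run(
    ["srun", "--partition=gpu-instant", "--gpus=1", "--pty", "bash"],
    check=True,
)
```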

ClusterShell on Sherlock

by Kilian Cavalotti, Technical Lead & Architect, HPC
Software
New
Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running…
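
ClusterShell ships both the `clush`/`nodeset` command-line tools and a Python library. As a minimal sketch of the Python API, here is how you might gather a command's output from all the nodes of a running job; the node set below is a hypothetical placeholder:

```python
# Minimal sketch of ClusterShell's Python API: run a command on several
# nodes in parallel and group identical outputs, clush-style.
# "sh03-11n[01-04]" is a hypothetical node set.
from ClusterShell.Task import task_self
from ClusterShell.NodeSet import NodeSet

task = task_self()
task.run("uptime", nodes="sh03-11n[01-04]")

# iter_buffers() yields (output, node-list) pairs, grouping nodes that
# produced identical output.
for output, nodes in task.iter_buffers():
    print("%s: %s" % (NodeSet.fromlist(nodes), output))
```

The equivalent one-liner with the CLI would be `clush -w sh03-11n[01-04] uptime`.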