Sherlock changelog

Sherlock 4.0: a new cluster generation

by Kilian Cavalotti, Technical Lead & Architect, HPC
New
Announce
Hardware
We are thrilled to announce that Sherlock 4.0, the fourth generation of Stanford's High-Performance Computing cluster, is now live! This major upgrade represents a significant leap forward in our computing capabilities, offering researchers...

A brand new Sherlock OnDemand experience

by Kilian Cavalotti, Technical Lead & Architect, HPC
Announce
Improvement
Stanford Research Computing is proud to unveil Sherlock OnDemand 3.0, a major upgrade to its web portal, streamlining how users interact with Sherlock's computing and data storage resources.

Final hours announced for the June 2023 SRCF downtime

by Kilian Cavalotti, Technical Lead & Architect, HPC
Maintenance
Announce
As previously announced, the Stanford Research Computing Facility (SRCF), where Sherlock is hosted, will be powered off during the last week of June, in order to safely bring up power to the new SRCF2 datacenter. Sherlock will not be...

More free compute on Sherlock!

by Kilian Cavalotti, Technical Lead & Architect, HPC
Announce
Hardware
Improvement
We’re thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade! With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB...
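
For users who'd like to try the new Milan nodes, here is a minimal sketch of a Slurm batch script targeting the normal partition. The CPU_GEN:MLN feature constraint and the resource values are illustrative assumptions; check the node features actually advertised on the cluster (e.g. with sinfo) before relying on them.

```shell
#!/bin/bash
#SBATCH --job-name=milan-test
#SBATCH --partition=normal        # the free, generally available partition
#SBATCH --constraint=CPU_GEN:MLN  # assumed feature name for Milan CPUs; verify on your cluster
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=00:10:00

# Report where the job landed and confirm the CPU model
hostname
lscpu | grep "Model name"
```

Submitted with sbatch, a script like this will only run on nodes advertising the Milan feature, which is convenient for comparing the new hardware against older nodes.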

From Rome to Milan, a Sherlock catalog update

by Kilian Cavalotti, Technical Lead & Architect, HPC
Announce
Hardware
It’s been almost a year and a half since we first introduced Sherlock 3.0 and its major new features: brand new CPU model and manufacturer, 2x faster interconnect, much larger and faster node-local storage, and more! We’ve now reached an...

SH3_G4FP32 nodes are back in the catalog!

by Kilian Cavalotti, Technical Lead & Architect, HPC
Hardware
Announce
A new GPU option is available in the Sherlock catalog... again! After a period of unavailability during the transition between GPU generations, when previous models had been retired but their replacements were not yet available, we’re pleased to...
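
As a rough illustration of how a GPU node is typically requested on Sherlock, here is a minimal Slurm sketch; the gpu partition name and single-GPU request follow common Slurm conventions, but the exact partition and GPU type applicable to SH3_G4FP32 nodes are assumptions to verify against the catalog.

```shell
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu   # assumed: the shared GPU partition
#SBATCH --gres=gpu:1      # request one GPU on the node
#SBATCH --time=00:10:00

# Show the allocated GPU (assumes NVIDIA drivers and tools on the node)
nvidia-smi
```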