Sherlock changelog

Introducing Boltz-1 on Sherlock

by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Software
We're pleased to announce the availability of Boltz-1, a new open-source molecular interactions AI model recently released by MIT.

Storage quota units change: TB to TiB

by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Improvement, Data
Following in Oak's footsteps, we're excited to announce that Sherlock is adopting a new unit of measure for file system quotas. Starting today, we're transitioning from Terabytes (TB) to Tebibytes (TiB) for all storage allocations on…
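For context, the two units differ by roughly 10%: 1 TB is 10^12 bytes, while 1 TiB is 2^40 (1,099,511,627,776) bytes, so the same allocation is a slightly smaller number when expressed in TiB. A minimal conversion sketch, with illustrative values rather than actual Sherlock quotas:

    # Convert decimal terabytes (TB) to binary tebibytes (TiB).
    TB = 10**12   # 1 TB  = 1,000,000,000,000 bytes
    TIB = 2**40   # 1 TiB = 1,099,511,627,776 bytes

    def tb_to_tib(tb: float) -> float:
        """Return the TiB equivalent of a size expressed in TB."""
        return tb * TB / TIB

    # Example: a 20 TB allocation corresponds to about 18.19 TiB.
    print(f"20 TB = {tb_to_tib(20):.2f} TiB")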

Sherlock goes full flash

by Stéphane Thiell & Kilian Cavalotti, Research Computing Team
Tags: Data, Hardware, Improvement
What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! Fir, Sherlock's scratch file system, has just undergone a major…

A brand new Sherlock OnDemand experience

by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Announce, Improvement
Stanford Research Computing is proud to unveil Sherlock OnDemand 3.0, a major upgrade to its web portal that makes interacting with Sherlock's computing and data storage resources simpler and more efficient.

A new tool to help optimize job resource requirements

by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Documentation, Scheduler, Improvement
It's not always easy to determine the right amount of resources to request for a computing job: you want to make sure the application has enough resources to run properly, while avoiding over-requests that would make jobs spend too much…
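The excerpt above is truncated before it names the tool, so the following is only a rough illustration of the general idea, not the announced tool itself: on a Slurm cluster like Sherlock, requested versus actually-used resources for a completed job can be compared using the accounting database. A minimal sketch built on the standard sacct command (the job ID is hypothetical):

    import subprocess

    def memory_usage(job_id: str) -> None:
        """Print requested vs. peak memory for a finished Slurm job,
        based on the standard sacct accounting fields."""
        out = subprocess.run(
            ["sacct", "-j", job_id, "--noheader", "-P",
             "--format=JobID,ReqMem,MaxRSS"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            jobid, reqmem, maxrss = line.split("|")
            # MaxRSS is only reported for job steps, hence the fallback.
            print(f"{jobid}: requested {reqmem or 'n/a'}, peak {maxrss or 'n/a'}")

    memory_usage("123456")  # hypothetical job ID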

More free compute on Sherlock!

by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Announce, Hardware, Improvement
We're thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade! With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB…

ClusterShell on Sherlock

by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Software, New
Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running…
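ClusterShell provides both command-line tools (clush, nodeset) and a Python API. A minimal sketch of the Python side, running one command on a few nodes in parallel and grouping identical output (the node names are hypothetical; in a job you would typically derive them from SLURM_JOB_NODELIST):

    from ClusterShell.Task import task_self
    from ClusterShell.NodeSet import NodeSet

    task = task_self()
    # Run a command on all target nodes in parallel (hypothetical node names).
    task.run("uptime", nodes="sh03-01n[01-04]")

    # iter_buffers() yields (output, nodes) pairs, with identical output
    # from different nodes automatically merged into one entry.
    for output, nodelist in task.iter_buffers():
        print("%s: %s" % (NodeSet.fromlist(nodelist), output))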