Introducing Boltz-1 on Sherlock
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Software

We're pleased to announce the availability of Boltz-1, a new open-source molecular interactions AI model recently released by MIT.

Sherlock 4.0: a new cluster generation
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: New, Announce, Hardware

We are thrilled to announce that Sherlock 4.0, the fourth generation of Stanford's High-Performance Computing cluster, is now live! This major upgrade represents a significant leap forward in our computing capabilities, offering researchers…

Sherlock 4.0 is coming!
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: New, Hardware

We are thrilled to announce that the next generation of Stanford's High-Performance Computing cluster is just around the corner. Mark your calendars for August 29, as we prepare to unveil Sherlock 4.0! Building on the success of previous…

Sherlock goes full flash
by Stéphane Thiell & Kilian Cavalotti, Research Computing Team
Tags: Data, Hardware, Improvement

What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! Fir, Sherlock’s scratch file system, has just undergone a major…

Instant lightweight GPU instances are now available
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: New, Hardware

We know that getting access to GPUs on Sherlock can be difficult and feel a little frustrating at times. Which is why we are excited to announce the immediate availability of our new instant lightweight GPU instances!

More free compute on Sherlock!
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Announce, Hardware, Improvement

We’re thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade! With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB…

ClusterShell on Sherlock
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Software, New

Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running…

Job #1, again!
by Kilian Cavalotti, Technical Lead & Architect, HPC

This is not the first time; we’ve been through this already (not so long ago, actually). But today, the Slurm job ID counter was reset and rolled over from job #67043327 back to job #1.