Introducing Boltz-1 on Sherlock
by Kilian Cavalotti, Technical Lead & Architect, HPC | December 2024 | Software
We're pleased to announce the availability of Boltz-1, a new open-source molecular interactions AI model recently released by MIT.
Sherlock 4.0: a new cluster generation
by Kilian Cavalotti, Technical Lead & Architect, HPC | August 2024 | New, Announce, Hardware
We are thrilled to announce that Sherlock 4.0, the fourth generation of Stanford's High-Performance Computing cluster, is now live! This major upgrade represents a significant leap forward in our computing capabilities, offering researchers…
A brand new Sherlock OnDemand experience
by Kilian Cavalotti, Technical Lead & Architect, HPC | November 2023
Stanford Research Computing is proud to unveil Sherlock OnDemand 3.0, a cutting-edge enhancement to its computing and data storage resources, revolutionizing user interaction and efficiency.
Final hours announced for the June 2023 SRCF downtime
by Kilian Cavalotti, Technical Lead & Architect, HPC | May 2023 | Maintenance, Announce
As previously announced, the Stanford Research Computing Facility (SRCF), where Sherlock is hosted, will be powered off during the last week of June, in order to safely bring up power to the new SRCF2 datacenter. Sherlock will not be…
SRCF is expanding
by Kilian Cavalotti, Technical Lead & Architect, HPC | February 2023 | Maintenance
In order to bring up a new building that will increase data center capacity, a full SRCF power shutdown is planned for late June 2023. It's expected to last about a week, and Sherlock will be unavailable during that time.
More free compute on Sherlock!
by Kilian Cavalotti, Technical Lead & Architect, HPC | December 2022 | Announce, Hardware, Improvement
We're thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade! With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB…
ClusterShell on Sherlock
by Kilian Cavalotti, Technical Lead & Architect, HPC | December 2022 | Software, New
Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running…
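For a taste of what this enables, here is a minimal sketch using ClusterShell's Python API (the library behind the clush command) to run a command on every node of a running job and gather the output. The job ID, node list, and the choice of uptime as the probe command are illustrative assumptions, not the specific workflow described in the post.

    #!/usr/bin/env python3
    # Sketch: gather a command's output from all nodes of a running Slurm job
    # using ClusterShell's Python API. Job ID below is a hypothetical example.
    import subprocess

    from ClusterShell.NodeSet import NodeSet
    from ClusterShell.Task import task_self

    job_id = "123456"  # hypothetical job ID

    # Ask Slurm for the job's node list (e.g. "sh03-01n[52-55]").
    nodelist = subprocess.check_output(
        ["squeue", "-h", "-j", job_id, "-o", "%N"], text=True
    ).strip()

    # Run the same command on all of the job's nodes in parallel.
    task = task_self()
    task.run("uptime", nodes=nodelist)

    # Print results, grouping nodes that returned identical output.
    for output, nodes in task.iter_buffers():
        print("%s: %s" % (NodeSet.fromlist(nodes), output.message().decode()))

The same idea works with any command (ps, nvidia-smi, free, etc.), or from the shell with clush directly; see the full post for the Sherlock-specific details.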
Job #1, again!
by Kilian Cavalotti, Technical Lead & Architect, HPC | November 2022
This is not the first time; we've been through this already (not so long ago, actually), but today the Slurm job ID counter was reset and went from job #67043327 back to job #1.