Storage quota units change: TB to TiB (Aug 2024)
by Kilian Cavalotti, Technical Lead & Architect, HPC
Following in Oak’s footsteps, we’re excited to announce that Sherlock is adopting a new unit of measure for file system quotas. Starting today, we’re transitioning from Terabytes (TB) to Tebibytes (TiB) for all storage allocations on…
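For context on the arithmetic behind this change: a terabyte is 10^12 bytes while a tebibyte is 2^40 bytes, so the very same allocation reads as a slightly smaller number when expressed in TiB. A minimal illustration in Python (the 10 TB quota below is a made-up value, not an actual Sherlock allocation):

    # Illustration of the TB -> TiB unit change: the underlying byte count is
    # unchanged, only the unit used to express it differs.
    TB = 10**12   # 1 terabyte  = 10^12 bytes (decimal unit)
    TiB = 2**40   # 1 tebibyte  = 2^40 bytes  (binary unit)

    quota_bytes = 10 * TB               # e.g. a hypothetical 10 TB allocation
    print(quota_bytes / TB)             # 10.0 TB
    print(round(quota_bytes / TiB, 2))  # ~9.09 TiB, the same amount of storage

In other words, a quota previously displayed as 10 TB corresponds to roughly 9.09 TiB of exactly the same storage.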
Sherlock goes full flash (Feb 2024)
by Stéphane Thiell & Kilian Cavalotti, Research Computing Team
Tags: Data, Hardware, Improvement
What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! Fir, Sherlock’s scratch file system, has just undergone a major…
A brand new Sherlock OnDemand experience (Nov 2023)
by Kilian Cavalotti, Technical Lead & Architect, HPC
Stanford Research Computing is proud to unveil Sherlock OnDemand 3.0, a cutting-edge enhancement to its computing and data storage resources, revolutionizing user interaction and efficiency.
A new tool to help optimize job resource requirements (Mar 2023)
by Kilian Cavalotti, Technical Lead & Architect, HPC
It’s not always easy to determine the right amount of resources to request for a computing job. Making sure that the application will have enough resources to run properly, but avoiding over-requests that would make the jobs spend too much…
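As a rough illustration of the underlying idea (comparing what a finished job requested with what it actually consumed), here is a generic sketch built on Slurm’s sacct command; it is not the tool announced in the post, and the job ID is made up:

    # Hypothetical sketch: compare requested vs. used resources for a finished
    # Slurm job, using standard sacct output fields.
    import subprocess

    jobid = "1234567"  # made-up job ID, for illustration only

    out = subprocess.run(
        ["sacct", "-j", jobid, "--noheader", "-P",
         "-o", "JobID,ReqCPUS,TotalCPU,ReqMem,MaxRSS,Elapsed"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.strip().splitlines():
        jid, reqcpus, totalcpu, reqmem, maxrss, elapsed = line.split("|")
        print(f"{jid}: requested {reqcpus} CPUs / {reqmem}, "
              f"used {totalcpu} CPU time / {maxrss} peak memory over {elapsed}")

Large gaps between requested and used values (for instance, 32 requested CPUs but only a few minutes of TotalCPU over a long Elapsed time) are the kind of signal such a tool helps surface.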
More free compute on Sherlock! (Dec 2022)
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Announce, Hardware, Improvement
We’re thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade! With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB…
ClusterShell on Sherlock (Dec 2022)
by Kilian Cavalotti, Technical Lead & Architect, HPC
Tags: Software, New
Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running…
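Besides the clush and nodeset command-line tools, ClusterShell also exposes a Python API. A minimal sketch of that API, assuming hypothetical node names (you would substitute the nodes actually allocated to your job):

    # Minimal ClusterShell Python API sketch: run a command on several nodes in
    # parallel and group identical outputs. Node names below are placeholders.
    from ClusterShell.Task import task_self
    from ClusterShell.NodeSet import NodeSet

    task = task_self()
    task.run("uptime", nodes="sh03-01n[01-04]", timeout=10)

    # iter_buffers() groups together nodes that produced identical output
    for output, nodes in task.iter_buffers():
        print("%s: %s" % (NodeSet.fromlist(nodes), output.message().decode()))

The same grouping behavior is what clush -b provides on the command line: identical outputs from many nodes are folded into a single, readable block.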
Job #1, again! (Nov 2022)
by Kilian Cavalotti, Technical Lead & Architect, HPC
This is not the first time; we’ve been through this already (not so long ago, actually), but today the Slurm job ID counter was reset and went from job #67043327 back to job #1.