Back to job #1, thrice
by Kilian Cavalotti, Technical Lead & Architect, HPC
Not once, not twice, but three times! For the third time in Sherlock’s history, the Slurm job ID counter was reset over the weekend, and went from job #67,043,327 all the way back to job #1! …
Doubling the FLOPs, another milestone for Sherlock’s performance
by Kilian Cavalotti, Technical Lead & Architect, HPC
Hardware, Event
We’re proud to announce that Sherlock has reached another significant performance milestone. Building on past successes, Sherlock continues to evolve and expand, integrating new technologies and enhancing its capabilities to meet the …
Job #1, again!
by Kilian Cavalotti, Technical Lead & Architect, HPC
This is not the first time; we’ve been through this already (not so long ago, actually), but today the Slurm job ID counter was reset and went from job #67,043,327 back to job #1. …
3.3 PFlops: Sherlock hits expansion milestone
by Kilian Cavalotti, Technical Lead & Architect, High Performance Computing
Hardware, Event
Sherlock is a traditional High-Performance Computing cluster in many respects. But unlike most similarly-sized clusters, where hardware is purchased all at once and refreshed every few years, it is in constant evolution. Almost like a …
Job #1
by Kilian Cavalotti
If you’ve been submitting jobs on Sherlock over the last couple of days, you probably noticed something different about your job IDs… They lost a couple of digits! If you submitted a job last week, its job ID was likely in the 67,000,000s. …
🎉 Job #50,000,000!
by Kilian Cavalotti
Event
We just wanted to share that Sherlock recently ran job #50,000,000! 🎈🎉 This is a significant milestone, since Sherlock, in its current form[1], started running its first job in January 2017. Fifty million jobs in less than 3 years is no …