urn:noticeable:projects:bYyIewUV308AvkMztxixSherlock changelogwww.sherlock.stanford.edu2024-08-27T01:20:47.265ZCopyright © SherlockNoticeablehttps://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/newspages/GtmOI32wuOUPBTrHaeki/01h55ta3gs1vmdhtqqtjmk7m4z-header-logo.pnghttps://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/newspages/GtmOI32wuOUPBTrHaeki/01h55ta3gs1vmdhtqqtjmk7m4z-header-logo.png#8c1515urn:noticeable:publications:JMFrsMu7VZtYdvlLzNCa2024-08-27T01:11:33.939Z2024-08-27T01:20:47.265ZStorage quota units change: TB to TiBFollowing in Oak’s footsteps, we’re excited to announce that Sherlock is adopting a new unit of measure for file system quotas.
Starting today, we're transitioning from Terabytes (TB) to Tebibytes (TiB) for all storage allocations on...<p>Following in <a href="https://uit.stanford.edu/service/oak-storage?utm_source=noticeable&utm_campaign=sherlock.storage-quota-units-change-tb-to-tib-1&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.JMFrsMu7VZtYdvlLzNCa&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Oak storage">Oak</a>’s footsteps, we’re excited to announce that Sherlock is adopting a new unit of measure for file system quotas. </p><p>Starting today, we're transitioning from Terabytes (<a href="https://www.nist.gov/pml/owm/metric-si-prefixes?utm_source=noticeable&utm_campaign=sherlock.storage-quota-units-change-tb-to-tib-1&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.JMFrsMu7VZtYdvlLzNCa&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="metric prefixes">TB</a>) to Tebibytes (<a href="https://physics.nist.gov/cuu/Units/binary.html?utm_source=noticeable&utm_campaign=sherlock.storage-quota-units-change-tb-to-tib-1&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.JMFrsMu7VZtYdvlLzNCa&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="binary prefixes">TiB</a>) for all storage allocations on Sherlock file systems. 
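For readers who like to check the arithmetic, here is a small illustrative sketch of our own (the quota figures are the ones from this post):

```python
# Compare the SI (decimal) and IEC (binary) prefixes behind the change.
TB = 10**12   # 1 terabyte  = 10^12 bytes
TiB = 2**40   # 1 tebibyte  = 2^40  bytes

# A 100 TB scratch quota becoming 100 TiB means ~9.95% more usable bytes.
gain = (100 * TiB - 100 * TB) / (100 * TB)
print(f"{gain:.2%}")  # 9.95%
```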
Users and groups will automatically benefit from expanded storage allocations, with no action required on their part.</p><ul><li><p><code>$HOME</code> will now hold 15 GiB of data (<em>vs.</em> 15 GB before)</p></li><li><p><code>$GROUP_HOME</code> 1 TiB of data (<em>vs.</em> 1 TB before)</p></li><li><p><code>$SCRATCH</code> will allow 100 TiB (<em>vs.</em> 100 TB before)</p></li><li><p><code>$GROUP_SCRATCH</code> will allow 100 TiB (<em>vs.</em> 100 TB before)</p></li></ul><p>The tech industry has adopted TiB as the standard unit of measure for data storage, and Sherlock is now aligning with those practices, which will provide all users with approximately 9.95% more usable storage capacity. It also ensures better compatibility with common Linux tools like <code>df</code> and <code>du</code>, which typically display disk usage in GiB/TiB by default. The Sherlock documentation will also continue to display units as TB, as is standard practice in the industry.</p><blockquote><p>Existing inode quotas on <code>$SCRATCH</code> and <code>$GROUP_SCRATCH</code> will not change, and each space will still allow storing 20 million inodes.</p></blockquote><p>This change is being implemented automatically, and users will start enjoying the benefits of increased storage capacity immediately.<br><br>As usual, if you have any questions or comments, please don’t hesitate to reach out to Research Computing at <a href="mailto:[email protected]" rel="noopener nofollow" target="_blank" title="[email protected]">[email protected]</a>. </p>Kilian Cavalotti[email protected]urn:noticeable:publications:VKxO5IXJlMStQurJnpwv2024-02-07T23:49:24.699Z2024-02-08T00:29:40.623ZSherlock goes full flashWhat could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! 
Fir, Sherlock’s scratch file system, has just undergone a major...<p>What could be more frustrating than anxiously waiting for your computing job to finish? Slow I/O that makes it take even longer is certainly high on the list. But not anymore! <a href="https://news.sherlock.stanford.edu/publications/a-new-scratch?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Fir"><strong>Fir</strong></a><strong>, Sherlock’s scratch file system, has just undergone a major tech face-lift: it’s now a 10 PB all-flash storage system, providing an aggregate bandwidth of 400 GB/sec</strong> (and >800 kIOPS). Bringing Sherlock’s high-performance parallel scratch file system into the era of flash storage was not just a routine maintenance task, but a significant leap into the future of HPC and AI computing.</p><h2>But first, a little bit of context </h2><p>Traditionally, High-Performance Computing clusters face a challenge when dealing with modern, data-intensive applications. Existing HPC storage systems, long designed with spinning disks to provide efficient and parallel sequential read/write operations, often become bottlenecks for modern workloads generated by AI/ML or CryoEM applications. 
These demand substantial data storage and processing capabilities, putting a strain on traditional systems.</p><p>To accommodate these new needs and the future evolution of the HPC I/O landscape, we at Stanford Research Computing, with the generous support of the <a href="https://doresearch.stanford.edu/who-we-are/office-vice-provost-and-dean-research?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Office of the Stanford VPDoR">Vice Provost and Dean of Research</a>, have been hard at work for over two years, revamping Sherlock's scratch with an all-flash system. </p><p>And it was not just a matter of taking delivery of a new turn-key system. Like most things we do, it was done entirely in-house: from the original vendor-agnostic design, upgrade plan, budget requests, procurement, and gradual in-place hardware replacement at the Stanford Research Computing Facility (SRCF), to deployment and validation, performance benchmarks, and the final production stages, every step was performed with minimal disruption for Sherlock users.</p><h2>The technical details</h2><p>The <code>/scratch</code> file system on Sherlock uses <a href="https://wiki.lustre.org/?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Lustre">Lustre</a>, an open-source parallel file system that supports many requirements of leadership-class HPC environments. 
And as you probably know by now, Stanford Research Computing loves <a href="https://github.com/stanford-rc?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="open source">open source</a>! We actively contribute to the Lustre community and are a proud member of <a href="https://opensfs.org/?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="OpenSFS">OpenSFS</a>, a non-profit industry organization that supports vendor-neutral development and promotion of Lustre.</p><p>In Lustre, file metadata and data are stored separately, with Object Storage Servers (OSS) serving file data on the network. Each OSS pair and associated storage devices forms an I/O cell, and Sherlock's scratch has just bid farewell to its old HDD-based I/O cells. In their place, new flash-based I/O cells have taken the stage, each equipped with 96 x 15.35 TB SSDs, delivering mind-blowing performance.</p><p>Sherlock’s <code>/scratch</code> has 8 I/O cells and the goal was to replace every one of them. 
Our new I/O cell has 2 OSS with InfiniBand HDR at 200 Gb/s (i.e. 25 GB/s), connected to 4 storage chassis, each with 24 x 15.35 TB SSDs (dual-attached 12 Gb/s SAS), as pictured below:</p><p><span style="color: #000000;"></span></p><figure><img src="https://lh7-us.googleusercontent.com/gI-D9jEmQeMntz4clh3TNYF60Q6Xep5cMcwQqHL3TGX_9H7L0m_6MgjDlPfSQrUtSBsh5l9bVa8Nddamm4BHzsQwk1S5Q5s9Wq_i8wdGGcXXnOD5wW_kqTJDQXjdwGEb7VYN1gSNPHccCYBc9iEzgTM" alt="" height="284" loading="lazy" title="" width="562"></figure><br><br>Of course, you can’t just replace each individual rotating hard drive with an SSD: some infrastructure changes and some reconfiguration were needed. The upgrade, executed between January 2023 and January 2024, was a seamless transition. Old HDD-based I/O cells were gracefully retired, one by one, while flash-based ones progressively replaced them, gradually boosting performance for all Sherlock users throughout the year.<br><span style="color: #000000;"><figure><img src="https://lh7-us.googleusercontent.com/B7lwfOxhKxKc-kDeQZkZ63exdm99PnDvete7-03-wD3906KQ_BaUOAGpzuNRa1nrZ_UdcCz_XcPusFZGA60zH6xWSMR60WDz-C6q-qg2BetwYGf1Ytpevnr0Hg5cN9kVPnEVRkeRRfqJBXje3AvmAXo" alt="" height="332" loading="lazy" title="" width="472"></figure></span><br>All of those replacements happened while the file system was up and running, serving data to the thousands of computing jobs that run on Sherlock every day. Driven by our commitment to minimize disruptions to users, our top priority was to ensure uninterrupted access to data throughout the upgrade. Data migration is never fun, and we wanted to avoid having to ask users to manually transfer their files to a new, separate storage system. 
This is why we developed and <a href="https://git.whamcloud.com/?p=fs%2Flustre-release.git;a=commit;h=1121816c4a4e1bb2ef097c4a9802362181c43800&utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="del_ost commit">contributed</a> a new feature to Lustre, which allowed us to seamlessly remove existing storage devices from the file system before the new flash drives could be added. More technical details about the upgrade have been <a href="http://www.eofs.eu/wp-content/uploads/2024/02/2.5-stanfordrc_s_thiell.pdf?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="presentation slides">presented</a> during the <a href="https://www.eofs.eu/index.php/events/lad-22/?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="LAD'22">LAD’22</a> conference.<p></p><p><strong>Today, we are happy to announce that the upgrade is officially complete, and Sherlock stands proud with a whopping 9,824 TB of solid-state storage in production. No more spinning disks in sight!</strong></p><h2>Key benefits</h2><p>For users, the immediately visible benefits are quicker access to their files, faster data transfers, and shorter job execution times for I/O-intensive applications. 
More specifically, every key metric has been improved:</p><ul><li><p>IOPS: over <strong>100x</strong> (results may vary, see below)</p></li><li><p>Backend bandwidth: <strong>6x</strong> (128 GB/s to 768 GB/s)</p></li><li><p>Frontend bandwidth: <strong>2x</strong> (200 GB/s to 400 GB/s)</p></li><li><p>Usable volume: <strong>1.6x</strong> (6.1 PB to 9.8 PB)<br></p></li></ul><p>In terms of measured improvement, the graph below shows the impact of moving to full-flash storage for reading data from 1, 8 and 16 compute nodes, compared to the previous <code>/scratch</code> file system: </p><p><span style="color: #000000;"></span></p><figure><img src="https://lh7-us.googleusercontent.com/a1wBmS1DW--_SfmLz5iyYRChlTp8MSuE7VKNKinX2nBgzb6iRiNeiSqa5zuXQrTvN1YztMqTLBVPdc_gqA1lrqOpQh7ZA1FzsNdS4VToP_okzXIhbWdzS2rWtUD33joDAaFV4m7eSMQp6DB8se6PY_Y" alt="" height="387" loading="lazy" title="" width="624"></figure><p></p><p>And we even tried to replicate the I/O patterns of <a href="https://github.com/google-deepmind/alphafold?utm_source=noticeable&utm_campaign=sherlock.sherlock-goes-full-flash&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.VKxO5IXJlMStQurJnpwv&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="AlphaFold">AlphaFold</a>, a well-known AI model to predict protein structure, and the benefits are quite significant, with up to 125x speedups in some cases:</p><p><span style="color: #000000;"></span></p><figure><img src="https://lh7-us.googleusercontent.com/4qvJD4MDJwjdlyKLcE4F24ZaaqanbQHjS1CkxPVWvzBKHphgLLAfa0QoepWrbOYOtwLFnYLrwLHTyS1NatKDItsDI63mlC1mxhac6RSFKSHCLyiEOykLBnHw7ziqM5uQ0VTVmmLd5BPPJpNF6bNUN70" alt="" height="335" loading="lazy" title="" width="624"></figure><br><br>This upgrade is a major improvement that will benefit all Sherlock users, and Sherlock’s enhanced I/O capabilities will allow them to approach data-intensive tasks with unprecedented efficiency. 
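As a back-of-the-envelope sanity check of the figures above, here is a quick sketch of our own (it assumes all eight I/O cells contribute fully, with no protocol overhead):

```python
# Aggregate frontend bandwidth: 8 I/O cells x 2 OSS x 25 GB/s per HDR link
cells, oss_per_cell, gbs_per_oss = 8, 2, 25
print(cells * oss_per_cell * gbs_per_oss)  # 400 GB/s, as quoted above

# Raw flash capacity: 8 cells x (4 chassis x 24 SSDs) x 15.35 TB each
raw_tb = cells * 4 * 24 * 15.35
print(round(raw_tb, 1))  # ~11788.8 TB raw; ~9,824 TB usable after RAID overhead
```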
We hope it will help support the ever-increasing computing needs of the Stanford research community, and enable even more breakthroughs and discoveries. <p></p><p>As usual, if you have any questions or comments, please don’t hesitate to reach out to Research Computing at <a href="mailto:[email protected]" rel="noopener nofollow" target="_blank" title="[email protected]">[email protected]</a>. 🚀🔧<br><br></p>Stéphane Thiell & Kilian Cavalotti[email protected]urn:noticeable:publications:tkzeo34ezqhztdmSbO5B2023-11-16T02:00:00Z2023-11-16T02:21:28.317ZA brand new Sherlock OnDemand experienceStanford Research Computing is proud to unveil Sherlock OnDemand 3.0, a cutting-edge enhancement to its computing and data storage resources, revolutionizing user interaction and efficiency.<p>Following a long tradition of <a href="https://news.sherlock.stanford.edu/publications/sherlock-on-demand?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Sherlock OnDemand">announcements</a> and <a href="https://news.sherlock.stanford.edu/publications/sherlock-goes-container-native?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Sherlock goes container native">releases</a> during the <a href="https://supercomputing.org/?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="SuperComputing conference">SuperComputing</a> conference, and while <a 
href="https://sc23.supercomputing.org/?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="SC23">SC23</a> is underway in Denver CO, <strong>Stanford Research Computing is proud to unveil Sherlock OnDemand 3.0,</strong> a cutting-edge enhancement to its computing and data storage resources, revolutionizing user interaction and efficiency. <br><br><strong>The upgraded Sherlock OnDemand is available immediately at </strong><a href="https://ondemand.sherlock.stanford.edu?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Sherlock OnDemand"><strong>https://ondemand.sherlock.stanford.edu</strong></a> </p><p></p><figure><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/tkzeo34ezqhztdmSbO5B/01hfaqwynskjp4v9s198vs7ppg-image.png" alt="" loading="lazy" title=""></figure><p></p><p><span style="color: var(--text-primary);">This new release brings a host of transformative changes. 
A lot happened under the hood, but the visible changes are significant as well.</span></p><p><strong><span style="color: var(--text-primary);">Infrastructure upgrades:</span></strong></p><ul><li><p><strong><span style="color: var(--tw-prose-bold);">A new URL:</span></strong> Sherlock OnDemand is now accessible at <a href="https://ondemand.sherlock.stanford.edu?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank"><span style="color: rgba(41,100,170,var(--tw-text-opacity));">https://ondemand.sherlock.stanford.edu</span></a>, in line<span style="color: rgb(15, 15, 15);"> with our other instances, for a more homogeneous </span>user experience across Research Computing systems. The previous URL will still work for a time, and redirections will be progressively deployed to ease the transition.</p></li><li><p><strong><span style="color: var(--tw-prose-bold);">New engine, same feel:</span></strong> a lot of internal components have undergone substantial updates, but the familiar interface remains intact, ensuring a seamless transition for existing users.</p></li><li><p><strong><span style="color: var(--tw-prose-bold);">Streamlined authentication:</span></strong> Sherlock OnDemand now uses <a href="https://openid.net/?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="OpenID">OIDC</a> via the Stanford central Identity Provider instead of SAML, resulting in a lighter, more robust configuration for enhanced security.</p></li><li><p><strong><span style="color: var(--tw-prose-bold);">Enhanced Performance:</span></strong> expect a more responsive interface and improved reliability with the 
eradication of 422 HTTP errors.</p></li></ul><h2><strong><span style="color: var(--text-primary);">User-centric features:</span></strong></h2><ul><li><p><strong><span style="color: var(--tw-prose-bold);">Expanded file access:</span></strong> all your <a href="https://uit.stanford.edu/service/oak-storage?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Oak">Oak</a> groups are now conveniently listed in the embedded file browser for easier and more comprehensive access to your data. And if you have <code>rclone</code> remotes already configured on Sherlock, you’ll find them there as well!</p></li><li><p><strong><span style="color: var(--tw-prose-bold);">Effortless support tickets:</span></strong> you can now send support tickets directly from the OnDemand interface, which will automatically include contextual information about your interactive sessions, to simplify issue resolution.</p></li><li><p><strong><span style="color: var(--tw-prose-bold);">New interactive apps:</span></strong> In addition to the existing apps, VS Code server, MATLAB, and JupyterLab join the platform, offering expanded functionality, like the ability to load and unload modules directly within JupyterLab.<br><em>Yes, you read that right: we now have <strong>VS Code</strong> and <strong>MATLAB</strong> in Sherlock OnDemand!</em><br>The RStudio app has also been rebuilt from the ground up, providing a much better, more reliable experience.</p><p style="text-align: center;"></p><figure><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/tkzeo34ezqhztdmSbO5B/01hfaxb83938p5dwqfxj532jp6-image.png" alt="" loading="lazy" title=""></figure><p></p></li><li><p><strong><span style="color: var(--tw-prose-bold);">Customizable working directories:</span></strong> users 
can now select a working directory across all interactive apps, for easier customization of their work environment.</p></li></ul><p><span style="color: var(--text-primary);">For more details and guidance on using the new features, check out the updated documentation at </span><a href="https://www.sherlock.stanford.edu/docs/user-guide/ondemand/.?utm_source=noticeable&utm_campaign=sherlock.a-brand-new-sherlock-ondemand-experience&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.tkzeo34ezqhztdmSbO5B&utm_medium=newspage" rel="noopener nofollow" target="_blank"><span style="color: var(--text-primary);">https://www.sherlock.stanford.edu/docs/user-guide/ondemand/.</span></a><span style="color: var(--text-primary);"><br></span><strong><span style="color: var(--text-primary);"><br>This update delivers a brand new computing experience, designed to empower you in your work. </span></strong><span style="color: var(--text-primary);">Sherlock OnDemand 3.0 marks a significant milestone in optimizing user access to computing resources, lowering the barrier to entry for new users, and empowering researchers with an unparalleled computing environment. We're excited to see how it will enhance your productivity and efficiency, so dive into this transformative experience today and elevate your computing endeavors to new heights with Sherlock OnDemand 3.0!<br><br>And as usual, if you have any questions, comments or suggestions, don’t hesitate to reach out at </span><a href="mailto:[email protected]" rel="noopener nofollow" target="_blank" title="support"><span style="color: var(--text-primary);">[email protected]</span></a><span style="color: var(--text-primary);">. </span></p>Kilian Cavalotti[email protected]urn:noticeable:publications:SAz2fLkjN80X6CGoMnHX2023-03-25T01:04:21.451Z2023-03-25T01:16:33.972ZA new tool to help optimize job resource requirementsIt’s not always easy to determine the right amount of resources to request for a computing job. 
Making sure that the application will have enough resources to run properly, but avoiding over-requests that would make the jobs spend too much...<p>It’s not always easy to determine the right amount of resources to request for a computing job. You want to make sure that the application will have enough resources to run properly, while avoiding over-requests that would make jobs spend too much time waiting in the queue for resources they won’t be using.<br><br>To help users inform those choices, we’ve just added a new tool to the <a href="https://www.sherlock.stanford.edu/docs/software/list/?utm_source=noticeable&utm_campaign=sherlock.a-new-tool-to-help-optimize-job-resource-requirements&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SAz2fLkjN80X6CGoMnHX&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Software list">module list</a> on Sherlock. <code><a href="https://github.com/JanneM/Ruse?utm_source=noticeable&utm_campaign=sherlock.a-new-tool-to-help-optimize-job-resource-requirements&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SAz2fLkjN80X6CGoMnHX&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Ruse project webpage">ruse</a></code> is a command-line tool developed by <a href="https://github.com/JanneM?utm_source=noticeable&utm_campaign=sherlock.a-new-tool-to-help-optimize-job-resource-requirements&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SAz2fLkjN80X6CGoMnHX&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Jan Moren GitHub profile page">Jan Moren</a> which facilitates measuring processes’ resource usage. It periodically measures the resource use of a process and its sub-processes, and can help users find out how many resources to allocate to their jobs. 
It will determine the actual memory, execution time and cores that individual programs or MPI applications need to request in their job submission options.<br><br>You’ll find more information and some examples in the Sherlock documentation at <a href="https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/?utm_source=noticeable&utm_campaign=sherlock.a-new-tool-to-help-optimize-job-resource-requirements&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SAz2fLkjN80X6CGoMnHX&utm_medium=newspage#resource-requests" rel="noopener nofollow" target="_blank">https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/#resource-requests</a> <br><br>Hopefully <code>ruse</code> will make it easier to write job resource requests, and allow users to get a better understanding of their applications’ behavior to take better advantage of Sherlock’s capabilities.</p><p>As usual, if you have any questions or comments, please don’t hesitate to reach out at <a href="mailto:[email protected]" rel="noopener" target="_blank">[email protected]</a>.</p>Kilian Cavalotti[email protected]urn:noticeable:publications:MARmnxM2JHvznq8MaK6q2022-12-14T17:27:18.657Z2022-12-14T17:27:26.687ZMore free compute on Sherlock!We’re thrilled to announce that the free and generally available normal partition on Sherlock is getting an upgrade!
With the addition of 24 brand new SH3_CBASE.1 compute nodes, each featuring one AMD EPYC 7543 Milan 32-core CPU and 256 GB...<p>We’re thrilled to announce that the free and generally available <code>normal</code> partition on Sherlock is getting an upgrade!<br><br>With the addition of 24 brand new <a href="https://www.sherlock.stanford.edu/docs/orders/?h=cbase&utm_source=noticeable&utm_campaign=sherlock.more-free-compute-on-sherlock&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.MARmnxM2JHvznq8MaK6q&utm_medium=newspage#configurations" rel="noopener nofollow" target="_blank" title="Sherlock node configurations">SH3_CBASE.1</a> compute nodes, each featuring one <a href="https://www.amd.com/en/products/cpu/amd-epyc-7543?utm_source=noticeable&utm_campaign=sherlock.more-free-compute-on-sherlock&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.MARmnxM2JHvznq8MaK6q&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="AMD EPYC 7543">AMD EPYC 7543</a> Milan 32-core CPU and 256 GB of RAM, Sherlock users now have 768 more CPU cores at their disposal. These new nodes complement the existing 154 compute nodes and 4,032 cores in that partition, for a <strong>new total of 178 nodes and 4,800 CPU cores.</strong><br><br>The <code>normal</code> partition is Sherlock’s shared pool of compute nodes, which is available <a href="https://www.sherlock.stanford.edu/?utm_source=noticeable&utm_campaign=sherlock.more-free-compute-on-sherlock&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.MARmnxM2JHvznq8MaK6q&utm_medium=newspage#how-much-does-it-cost" rel="noopener nofollow" target="_blank" title="Sherlock cost">free of charge</a> to all Stanford Faculty members and their research teams, to support their wide range of computing needs. 
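The new totals are easy to verify; here is a quick sketch of the arithmetic quoted above:

```python
# 24 new single-socket nodes, each with one 32-core AMD EPYC 7543
new_nodes, cores_per_node = 24, 32
print(new_nodes * cores_per_node)  # 768 additional CPU cores

# Added to the existing partition: 154 nodes / 4,032 cores
print(154 + new_nodes)                    # 178 nodes total
print(4032 + new_nodes * cores_per_node)  # 4800 cores total
```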
<br><br>In addition to this free set of computing resources, Faculty can supplement these shared nodes by <a href="https://www.sherlock.stanford.edu/docs/orders/?utm_source=noticeable&utm_campaign=sherlock.more-free-compute-on-sherlock&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.MARmnxM2JHvznq8MaK6q&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Purchasing Sherlock compute nodes">purchasing additional compute nodes</a> and becoming Sherlock owners. By investing in the cluster, PI groups not only receive exclusive access to the nodes they purchased, but also get access to all of the other owner compute nodes when they're not in use, thus giving them access to the <a href="https://www.sherlock.stanford.edu/docs/tech/facts/?utm_source=noticeable&utm_campaign=sherlock.more-free-compute-on-sherlock&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.MARmnxM2JHvznq8MaK6q&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Sherlock facts">whole breadth of Sherlock resources</a>, currently over 1,500 compute nodes, 46,000 CPU cores and close to 4 PFLOPS of computing power.<br><br>We hope that this new expansion of the <code>normal</code> partition, made possible thanks to additional funding provided by the University Budget Group as part of the FY23 budget cycle, will help support the ever-increasing computing needs of the Stanford research community, and enable even more breakthroughs and discoveries.<br><br>As usual, if you have any questions or comments, please don’t hesitate to reach out at <a href="mailto:[email protected]" rel="noopener" target="_blank">[email protected]</a>.</p>Kilian Cavalotti[email protected]urn:noticeable:publications:SSx9LtIFOW9O3ULcMqGE2021-06-03T20:18:40.986Z2021-06-03T21:30:45.152ZA new interactive step in SlurmA new version of the sh_dev tool has been released that leverages a recently-added Slurm feature. 
Slurm 20.11 introduced a new “interactive step”, designed to be used with salloc to automatically launch a terminal on an allocated compute...<blockquote><p>A new version of the <code>sh_dev</code> tool has been released that leverages a recently-added <a href="https://slurm.schedmd.com/?utm_source=noticeable&utm_campaign=sherlock.a-new-interactive-step-in-slurm&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SSx9LtIFOW9O3ULcMqGE&utm_medium=newspage" rel="noopener nofollow" target="_blank" title="Slurm">Slurm</a> feature.</p></blockquote><p>Slurm 20.11 introduced a new <a href="https://github.com/SchedMD/slurm/blob/slurm-20-11-0-1/RELEASE_NOTES?utm_source=noticeable&utm_campaign=sherlock.a-new-interactive-step-in-slurm&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SSx9LtIFOW9O3ULcMqGE&utm_medium=newspage#L72-L74" rel="noopener nofollow" target="_blank">“interactive step”</a>, designed to be used with <code><a href="https://slurm.schedmd.com/salloc.html?utm_source=noticeable&utm_campaign=sherlock.a-new-interactive-step-in-slurm&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.SSx9LtIFOW9O3ULcMqGE&utm_medium=newspage" rel="noopener nofollow" target="_blank">salloc</a></code> to automatically launch a terminal on an allocated compute node. This new type of job step resolves a number of problems with the previous interactive job approaches, both in terms of accounting and resource allocation.</p><h2>What is this about?</h2><p>In previous versions, launching an interactive job with <code>srun --pty bash</code> would create a step 0 that consumed resources, especially Generic Resources (GRES, <em>i.e.</em> GPUs). Among other things, it made it impossible to use <code>srun</code> within that allocation to launch subsequent steps. 
Any attempt would result in a “step creation temporarily disabled” error message.<br><br>Now, with this new feature, you can use <code>salloc</code> to directly open a shell on a compute node. The new interactive step won’t consume any of the allocated resources, so you’ll be able to start additional steps with <code>srun</code> within your allocation. <br><br><code>sh_dev</code> (<em>aka</em> <code>sdev</code>) has been updated to use interactive steps.</p><h2>What changes?</h2><h3>For <code>sh_dev</code></h3><p>On the surface, nothing changes: you can continue to use <code>sh_dev</code> exactly like before, to start an interactive session on one of the compute nodes dedicated to that task (the default), or on a node in any partition (which is particularly popular among node owners). You’ll be able to use the same options, with the same features (including X11 forwarding). <br>Under the hood, though, you’ll be leveraging the new interactive step automatically.</p><h3>For <code>salloc</code></h3><p>If you use <code>salloc</code> on a regular basis, the main change is that the resulting shell will open on the first allocated node, instead of the node you ran <code>salloc</code> on:</p><pre><code>[kilian@sh01-ln01 login ~]$ salloc
salloc: job 25753490 has been allocated resources
salloc: Granted job allocation 25753490
salloc: Nodes sh02-01n46 are ready for job
[kilian@sh02-01n46 ~] (job 25753490) $ </code></pre><p>If you want to keep that initial shell on the submission host, you can simply specify a command as an argument, and that command will continue to be executed as the calling user, on the calling host:</p><pre><code>[kilian@sh01-ln01 login ~]$ salloc bash
salloc: job 25752889 has been allocated resources
salloc: Granted job allocation 25752889
salloc: Nodes sh02-01n46 are ready for job
[kilian@sh01-ln01 login ~] (job 25752889) $</code></pre><h3>For <code>srun</code></h3><p>If you’re used to running <code>srun --pty bash</code> to get a shell on a compute node, you can continue to do so (as long as you don’t intend to run additional steps within the allocation). </p><p>But you can also just type <code>salloc</code>, get a more usable shell, and save 60% in keystrokes!</p><p></p><hr><p>Happy computing! And as usual, please feel free to <a href="mailto:[email protected]" rel="noopener" target="_blank">reach out</a> if you have comments or questions.</p>Kilian Cavalotti[email protected]urn:noticeable:publications:zPqsfCULbRM6PBu0idKl2021-05-14T18:37:00Z2021-05-14T19:59:42.377ZYour Sherlock prompt just got a little smarterHave you ever felt confused when running things on Sherlock and wondered if your current shell was part of a job? And if so, which one? Well, maybe you noticed it already, but we’ve deployed a small improvement to the Sherlock shell prompt...<p>Have you ever felt confused when running things on <a href="https://www.sherlock.stanford.edu" rel="noopener nofollow" target="_blank" title="Sherlock">Sherlock</a> and wondered if your current shell was part of a job? And if so, which one? Well, maybe you noticed it already, but we’ve deployed a small improvement to the Sherlock shell prompt (the thing that displays your user name and the host name of the node you’re on) that will hopefully make things a little easier to navigate.<br><br><strong>Now, when you’re in the context of a Slurm job, your shell prompt will automatically display that job’s id, so you always know where you’re at.<br></strong><br>For instance, when you submit an interactive job with <code>sdev</code>, your prompt will automatically be updated to not only display the host name of the compute node you’ve been allocated, but also the id of the job your new shell is running in:</p><pre><code>[kilian@sh03-ln06 login ~]$ sdev
srun: job 24333698 queued and waiting for resources
srun: job 24333698 has been allocated resources
[kilian@sh02-01n58 ~] (job 24333698) $</code></pre><h2>Use cases</h2><p>This additional information could prove particularly useful in situations where the fact that you’re running in the context of a Slurm job is not immediately visible. </p><h3>Dynamic resource allocation</h3><p>For instance, when allocating resources with <code>salloc</code>, the scheduler will start a new shell on the same node you’re on, but nothing will differentiate that shell from your login shell, so it’s pretty easy to forget that you’re in a job (and also that if you exit that shell, you’ll terminate your resource allocation).<br><br>So now, when you use <code>salloc</code>, your prompt will be updated as well, so you’ll always know you’re in a job:</p><pre><code>[kilian@sh03-ln06 login ~]$ salloc -N 4 --time 2:0:0
salloc: Pending job allocation 24333807
[...]
[kilian@sh03-ln06 login ~] (job 24333807) $ srun hostname
sh03-01n25.int
sh03-01n28.int
sh03-01n27.int
sh03-01n30.int
[kilian@sh03-ln06 login ~] (job 24333807) $ exit
salloc: Relinquishing job allocation 24333807
[kilian@sh03-ln06 login ~]$</code></pre><h3>Connecting to compute nodes</h3><p>Another case is when you need to connect via SSH to compute nodes where your jobs are running. The scheduler will automatically inject your SSH session into the context of the running job, and now, you’ll see that <em>jobid</em> automatically displayed in your prompt, like this:</p><pre><code>[kilian@sh03-ln06 login ~]$ sbatch sleep.sbatch
Submitted batch job 24334257
[kilian@sh03-ln06 login ~]$ squeue -j 24334257 -O nodelist -h
sh02-01n47
[kilian@sh03-ln06 login ~]$ ssh sh02-01n47
------------------------------------------
Sherlock compute node
>> deployed Fri Apr 30 23:36:45 PDT 2021
------------------------------------------
[kilian@sh02-01n47 ~] (job 24334257) $</code></pre><h3>Step creation temporarily disabled</h3><p>Have you ever encountered that message when submitting a job?<br><code>step creation temporarily disabled, retrying (Requested nodes are busy)</code></p><p>That usually means that you’re trying to run a job from within a job: the scheduler tries to allocate resources that are already allocated to your current shell, so it waits until those resources become available. Of course, that never happens, so it waits there forever, or until your job’s time runs out…<br><br>Now, a quick glance at your prompt will show you that you’re already in a job, which will hopefully help catch those situations:</p><pre><code>[kilian@sh03-ln06 login ~]$ srun --pty bash
srun: job 24334422 queued and waiting for resources
srun: job 24334422 has been allocated resources
[kilian@sh02-01n47 ~] (job 24334422) $ srun --pty bash
srun: Job 24334422 step creation temporarily disabled, retrying (Requested nodes are busy)
srun: Job 24334422 step creation still disabled, retrying (Requested nodes are busy)
srun: Job 24334422 step creation still disabled, retrying (Requested nodes are busy)
srun: Job 24334422 step creation still disabled, retrying (Requested nodes are busy)</code></pre><p></p><hr><p>We hope that small improvement will help make things easier and more visible when navigating jobs on Sherlock. Sometimes, it’s the little things, they say. :)<br><br>As usual, please feel free to <a href="mailto:[email protected]" rel="noopener" target="_blank">reach out</a> if you have comments or questions!</p>Kilian Cavalotti[email protected]urn:noticeable:publications:PPxLNPT4SMCJbp0ORhd72020-11-13T00:12:00.001Z2020-11-13T02:45:02.640ZSherlock factsEver wondered how many compute nodes Sherlock is made of? Or how many users are using it? Or how many Infiniband cables link it all together? Well, wonder no more: head to the Sherlock facts page and see for yourself! > hint: there are...<p>Ever wondered how many compute nodes Sherlock is made of? Or how many users are using it? Or how many Infiniband cables link it all together?</p>
<p>Well, wonder no more: head to the <a href="https://www.sherlock.stanford.edu/docs/overview/tech/facts?utm_source=noticeable&utm_campaign=sherlock.sherlock-facts&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.PPxLNPT4SMCJbp0ORhd7&utm_medium=newspage" target="_blank" rel="noopener">Sherlock facts</a> page and see for yourself!</p>
<blockquote>
<p><em>hint</em>: there are <strong>a lot</strong> of cables :)</p>
</blockquote>
<p>And if you’re tired of seeing the same old specs from two years ago, we’ve updated the <a href="https://www.sherlock.stanford.edu/docs/overview/tech/specs?utm_source=noticeable&utm_campaign=sherlock.sherlock-facts&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.PPxLNPT4SMCJbp0ORhd7&utm_medium=newspage" target="_blank" rel="noopener">Sherlock tech specs</a> page too!</p>
<p>To make sure those numbers never fall behind again and continue to offer an accurate representation of Sherlock’s resources, they will be automatically updated each time something changes on the cluster.</p>
<p>As usual, don’t hesitate to <a href="mailto:[email protected]" target="_blank" rel="noopener">reach out</a> if you have any questions or comments!</p>
Kilian Cavalotti[email protected]urn:noticeable:publications:VhydQm59DPSiHOomcZcW2019-12-13T18:29:00.001Z2019-12-13T19:55:23.129ZSecure TensorBoard sessions with Sherlock OnDemandIf you're into machine learning (and who isn't these days?), you probably know all about TensorBoard already. If you don't, TensorBoard is TensorFlow's visualization toolkit. It provides the visualization and tooling needed for machine...<p>If you’re into machine learning (and who isn’t these days?), you probably know all about <a href="//www.tensorflow.org/tensorboard" target="_blank" rel="noopener">TensorBoard</a> already.</p>
<p>If you don’t, TensorBoard is <a href="//www.tensorflow.org" target="_blank" rel="noopener">TensorFlow</a>'s visualization toolkit. It provides the visualization and tooling needed for machine learning workflows: it enables tracking experiment metrics like loss and accuracy, visualizing model graphs, viewing histograms of weights, biases, or other tensors as they change over time, or profiling TensorFlow programs. All kinds of cool and useful stuff.</p>
<p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/VhydQm59DPSiHOomcZcW/01h55ta3gsyjxd2gfr2pg26psr-image.png" alt="tb.png"></p>
<h2>TensorBoard (lack of) security model</h2>
<p>But one thing that TensorBoard doesn’t do is user authentication and authorization: there is no notion of user sessions, credentials, or access control in TensorBoard, and <a href="//github.com/tensorflow/tensorboard/issues/267" target="_blank" rel="noopener">no plan to implement any</a>.</p>
<p>What this means in practice is that TensorBoard is a great solution if you’re developing and testing things on your own laptop, where you’re the only user. But on shared environments like HPC clusters in general and <a href="//www.sherlock.stanford.edu" target="_blank" rel="noopener">Sherlock</a> in particular, running a TensorBoard instance on a compute node means that any user on the cluster can connect to it and interact with it as if they were you: TensorBoard runs under your account, and through its unprotected web interface, it exposes your files and processes to any user who can connect to it. There is no authentication mechanism.</p>
<p>Which is, for lack of a better term, “not great”.</p>
<h2>Your own private and secure TensorBoard, on demand</h2>
<p>Because it would be a shame to not be able to use such a valuable tool on our clusters, <a href="//srcc.stanford.edu" target="_blank" rel="noopener">we</a> came up with a solution to let users on Sherlock run TensorBoard in a secure and private way, without adding any additional configuration or access burden.</p>
<p>The <a href="//login.sherlock.stanford.edu/pun/sys/dashboard/batch_connect/sys/sh_tensorboard/" target="_blank" rel="noopener">TensorBoard OnDemand app</a>, which is accessible through the <a href="//www.sherlock.stanford.edu/docs/user-guide/ondemand/" target="_blank" rel="noopener">Sherlock OnDemand</a> portal, implements an authenticating reverse proxy that ensures that only the user who started the session can access it.</p>
<p>In a nutshell, by setting a browser cookie in the OnDemand interactive app page, we can make sure that the authenticating reverse proxy we developed, which controls access to the TensorBoard web interface, only authorizes requests that come from the user authenticated through the OnDemand web page.</p>
<p>Without that cookie, access to the TensorBoard web interface is denied. And if the cookie is ever lost, users can simply re-create it by visiting the “My Interactive Sessions” page and clicking the “Connect” button again.</p>
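<p>To illustrate the idea, here’s a simplified sketch of that check (the <code>session</code> cookie name and token value are made up for illustration; this is not the actual Sherlock implementation): the proxy extracts a session token from the request’s <code>Cookie</code> header, and only allows the request if it matches the secret generated for that session:</p><pre><code># simplified sketch of a cookie-based authorization check
# (hypothetical cookie name "session", not the actual Sherlock code)
authorized() {
  local cookie_header="$1" expected="$2" token
  # split the Cookie header on ";" and extract the "session" cookie value
  token=$(printf '%s' "$cookie_header" | tr ';' '\n' | sed -n 's/^ *session=//p')
  [ -n "$token" ] && [ "$token" = "$expected" ]
}

authorized "theme=dark; session=s3cr3t" "s3cr3t" && echo allow || echo deny
authorized "theme=dark" "s3cr3t" && echo allow || echo deny</code></pre><p>The real proxy performs a comparison of this kind on every request, so anything arriving without a valid cookie is rejected before it ever reaches the TensorBoard process.</p>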
<p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/VhydQm59DPSiHOomcZcW/01h55ta3gsys647ma04scdqt0j-image.png" alt="tbood.png"></p>
<p>And if you’re curious about the details, or if you’re not using Sherlock and would like to implement a similar solution at your site, our TensorBoard OnDemand app is available on <a href="//github.com/stanford-rc/sh_ood-apps/tree/master/sh_tensorboard/" target="_blank" rel="noopener">GitHub</a>.</p>
<h2>TL;DR</h2>
<p>TensorBoard sessions on Sherlock are secure and private, in a completely transparent way.</p>
<p>It’s a little thing, but we hope it can make working on Sherlock more secure, without putting any additional configuration burden on the users.</p>
<p>So happy experimenting in TensorBoard, and as usual, please don’t hesitate to <a href="mailto:[email protected]" target="_blank" rel="noopener">reach out</a> if you have any comments or questions.</p>
Kilian Cavalotti[email protected]urn:noticeable:publications:LWLl3sbP5hYZFMHSJqvS2019-11-05T20:00:00.001Z2019-11-05T20:23:02.817ZMore (and easier!) GPU scheduling optionsGPU scheduling is now easier and more powerful on Sherlock, with the addition of new job submission options especially targeted at GPU workloads. The most visible change is that you can now use the --gpus option when submitting jobs...<p>GPU scheduling is now easier and more powerful on Sherlock, with the addition of new job submission options especially targeted at GPU workloads.</p>
<p>The most visible change is that you can now use the <code>--gpus</code> option when submitting jobs, like this:</p>
<pre><code class="hljs language-shell"><span class="hljs-meta">$</span><span class="bash"> srun -p gpu --gpus=2 ...</span>
</code></pre>
<p>A number of additional submission options can now be used, such as:</p>
<ul>
<li><code>--cpus-per-gpu</code>, to request a number of CPUs per allocated GPU,</li>
<li><code>--gpus-per-node</code>, to request a given number of GPUs per node,</li>
<li><code>--gpus-per-task</code>, to request a number of GPUs per spawned task,</li>
<li><code>--mem-per-gpu</code>, to allocate a given amount of host memory per GPU.</li>
</ul>
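<p>For example, a batch script combining several of these options could look like this (the partition name, counts and sizes are purely illustrative, and <code>my_gpu_app</code> is a placeholder for your own program):</p><pre><code>#!/bin/bash
# illustrative job script: 1 node, 4 GPUs,
# 4 CPUs and 32 GB of host memory per GPU
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-gpu=4
#SBATCH --mem-per-gpu=32G
#SBATCH --time=2:00:00

# one task per GPU, each task bound to its own GPU
srun --ntasks=4 --gpus-per-task=1 ./my_gpu_app</code></pre>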
<p>You can now also allocate a different number of GPUs per node on multi-node jobs, change the frequency of the GPUs allocated to your job and explicitly set task-to-GPU binding maps.</p>
<p>All of those options are detailed in the updated documentation at <a href="https://www.sherlock.stanford.edu/docs/user-guide/gpu/?utm_source=noticeable&utm_campaign=sherlock.more-and-easier-gpu-scheduling-options&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.LWLl3sbP5hYZFMHSJqvS&utm_medium=newspage" target="_blank" rel="noopener">https://www.sherlock.stanford.edu/docs/user-guide/gpu/</a>, and a more complete description is available in the <a href="https://slurm.schedmd.com/srun.html?utm_source=noticeable&utm_campaign=sherlock.more-and-easier-gpu-scheduling-options&utm_content=publication+link&utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.LWLl3sbP5hYZFMHSJqvS&utm_medium=newspage#OPT_gpus" target="_blank" rel="noopener">Slurm manual</a>.</p>
<p>Under the hood, the scheduler is now fully aware of the specifics of each GPU node: it knows how GPUs on the same node are inter-connected and how they map to CPU sockets, and it can select preferred GPUs for co-scheduling. It has all the information it needs to make optimal decisions about the placement of tasks within a job.</p>
<p>The end result? Better performance with less hassle for multi-GPU jobs.</p>
<p>So please take the new options for a spin, and <a href="mailto:[email protected]" target="_blank" rel="noopener">let us know</a> how they work for your jobs!</p>
Kilian Cavalotti[email protected]