<p><strong>Sherlock changelog</strong> — <a href="https://www.sherlock.stanford.edu">www.sherlock.stanford.edu</a> — Copyright © Sherlock</p>
<h1>Instant lightweight GPU instances are now available</h1>
<p><em>2023-04-27</em></p>
<p>We know that getting access to GPUs on Sherlock can be difficult and feel a little frustrating at times. Demand has been steadily growing, leading to long pending times, and waiting in line rarely feels great, especially when you have important work to do.</p><p>Which is why we are excited to announce the immediate availability of our latest addition to the Sherlock cluster: <strong>instant lightweight GPU instances</strong>! Every user can now get immediate access to a GPU instance, for a quick debugging session or to explore new ideas in a Notebook.<br><br>GPUs are the backbone of high-performance computing. They’ve become an integral component of the toolbox for many users, and are essential for deep learning, scientific simulations, and many other applications. But you don’t always need a full-fledged, top-of-the-line GPU for all your tasks. Sometimes all you want is to run a quick test to prototype an idea, debug a script, or explore new data in an interactive Notebook. For this, the new lightweight GPU instances on Sherlock will give you instant access to a GPU, without having to wait in line and compete with other jobs for resources you don’t need.<br><br>Sherlock’s instant lightweight GPU instances leverage NVIDIA’s <a href="https://www.nvidia.com/en-us/technologies/multi-instance-gpu/" rel="noopener nofollow" target="_blank" title="NVIDIA Multi-Instance GPU">Multi-Instance GPU</a> (MIG) to provide multiple fully isolated GPU instances on the same physical GPU, each with their own high-bandwidth memory, cache, and compute cores.
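<p>If you are curious what one of these instances looks like from inside a job, once you have a session on one (see the <code>sh_dev</code> example below) you can ask the NVIDIA driver to list the device it was given. This is just a quick illustration: the exact GPU model and MIG profile reported will depend on how the physical GPUs are partitioned.</p><pre><code>$ nvidia-smi -L</code></pre>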
Those lightweight instances are ideal for small to medium-sized jobs, and lower the barrier to entry for all users.<br><br>Similar to the interactive sessions available through the <code>dev</code> partition, Sherlock users can now request a lightweight GPU instance and get immediate access to it with the <code>sh_dev</code> command:</p><pre><code>$ sh_dev -g 1</code></pre><p>For interactive apps in the <a href="https://www.sherlock.stanford.edu/docs/user-guide/ondemand/" rel="noopener nofollow" target="_blank" title="Sherlock OnDemand docs">Sherlock OnDemand</a> interface, requesting a GPU in the <code>dev</code> partition will initiate an interactive session with access to a lightweight GPU instance.<br></p><figure><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/fVC8v76vTKAPzyy0I0Lh/01h55ta3gsgn6y7qksqsnbat6e-image.png" alt="" height="266" loading="lazy" width="444"></figure><p><br>So now, everyone gets a GPU, no questions asked! 😁</p><figure><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/fVC8v76vTKAPzyy0I0Lh/01h55ta3gsjw21hcsf0z70n9he-image.png" alt="" loading="lazy"></figure><p><br>We hope these new instances will improve access to GPUs on Sherlock, enable a wider range of use cases with all the flexibility and performance you need to get your work done, and lead to even more groundbreaking discoveries!</p><p>As always, thanks to all of our users for your continuous support and patience as we work to improve Sherlock, and if you have any questions or comments, please don’t hesitate to reach out at <a href="mailto:[email protected]" rel="noopener" target="_blank">[email protected]</a>.<br></p>
<p>— Kilian Cavalotti</p>
<h1>ClusterShell on Sherlock</h1>
<p><em>2022-12-03</em></p>
<p>Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics and other data points from all the nodes your multi-node jobs are running on, all at once?<br><br>Enter <a href="https://cea-hpc.github.io/clustershell/" rel="noopener nofollow" target="_blank" title="ClusterShell">ClusterShell</a>, the best parallel shell application (and library!) of its kind.<br><br>With ClusterShell on Sherlock, you can quickly run a command on all the nodes your job is running on, to gather information about your applications and processes, in real time, and gather live output without having to wait for your job to end to see how it did.
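<p>For instance, to get a quick look at memory usage and load on every node of a running job, something along these lines should do the trick (a small sketch, where <code>$JOBID</code> stands for the id of one of your running jobs):</p><pre><code>$ clush -w @job:$JOBID free -h
$ clush -w @job:$JOBID uptime</code></pre><p>Adding the <code>-b</code> option gathers identical output together, which keeps things readable when a job spans many nodes.</p>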
And with its tight integration with the job scheduler, no need to fiddle with manual node lists anymore, all it needs is a job id!<br><br>You allocated a few nodes in an interactive session and want to distribute some files on each node’s local storage devices? Check: ClusterShell has a <a href="https://clustershell.readthedocs.io/en/latest/tools/clush.html?utm_source=noticeable&amp;utm_campaign=sherlock.clustershell-on-sherlock&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.lWJ0NjSCycX68eP1aVpU&amp;utm_medium=newspage#file-copying-mode" rel="noopener nofollow" target="_blank" title="File copy mode">copy mode</a> just for this.<br><br>Want to double-check that your processes are correctly laid out? Check: you can run a quick command to check the process tree across the nodes allocated to your job with:</p><pre><code>$ clush -w @job:$JOBID pstree -au $USER</code></pre><p>and verify that all your processes are running correctly.<br><br>You’ll find more details and examples in our Sherlock documentation, at <a href="https://www.sherlock.stanford.edu/docs/software/using/clustershell/?utm_source=noticeable&amp;utm_campaign=sherlock.clustershell-on-sherlock&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.lWJ0NjSCycX68eP1aVpU&amp;utm_medium=newspage#local-storage" rel="noopener nofollow" target="_blank">https://www.sherlock.stanford.edu/docs/software/using/clustershell</a><br><br>Questions, ideas, or suggestions? Don’t hesitate to reach out to <a href="mailto:[email protected]" rel="noopener nofollow" target="_blank">[email protected]</a> to let us know!</p>Kilian Cavalotti[email protected]urn:noticeable:publications:pQO6ll118TRDHxHxfmj12020-09-18T18:00:00.001Z2020-09-18T22:53:40.922ZNew GPU options in the Sherlock catalogToday, we're introducing the latest generation of GPU accelerators in the Sherlock catalog: the NVIDIA A100 Tensor Core GPU. 
<p>Today, we’re introducing the latest generation of GPU accelerators in the Sherlock catalog: the <a href="https://www.nvidia.com/en-us/data-center/a100/" target="_blank" rel="noopener">NVIDIA A100 Tensor Core GPU</a>.</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/pQO6ll118TRDHxHxfmj1/01h55ta3gsm3jne0v3cpprnkn5-image.jpg" alt="ampere-a100.jpg"></p> <p>Each A100 GPU features <strong>9.7 TFlops</strong> of double-precision (FP64) performance, up to <strong>312 TFlops</strong> for deep-learning applications, <strong>40GB</strong> of HBM2 memory, and <strong>600GB/s</strong> of interconnect bandwidth with 3rd-generation <strong>NVLink</strong> connections<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup>.</p> <h2>New Sherlock Catalog options</h2> <p>Targeting the most demanding HPC and DL/AI workloads, the three new GPU node options we’re introducing today should cover the most extreme computing needs:</p> <ul> <li>a refreshed version of the <code>SH3_G4FP64.1</code> configuration features 32 CPU cores, 256GB of memory and 4x A100 PCIe GPUs,</li> <li>the new <code>SH3_G4TF64</code> model features 64 CPU cores, 512GB of RAM, and 4x A100 SXM4 GPUs (NVLink),</li> <li>and the most powerful configuration, <code>SH3_G8TF64</code>, comes with 128 CPU cores, 1TB of RAM, 8x A100 SXM4 GPUs (NVLink) and <em>two</em> InfiniBand HDR HCAs, for a whopping 400Gb/s of interconnect bandwidth to keep those GPUs busy.</li> </ul> <p>You’ll find all the details in the <a href="http://www.sherlock.stanford.edu/docs/overview/orders/catalog" target="_blank" rel="noopener"><strong>Sherlock catalog</strong></a> <em>(SUNet ID required)</em>.</p> <p>All those configurations are available today, and can be ordered online through the Sherlock <a href="http://www.sherlock.stanford.edu/docs/overview/orders/form" target="_blank" rel="noopener">order form</a> <em>(SUNet ID required)</em>.</p> <h2>Other models’ availability</h2> <p>We’re working on bringing a replacement for the entry-level <code>SH3_G4FP32</code> model back into the catalog as soon as possible. We’re unfortunately dependent on GPU availability, as well as on the adaptations required for server vendors to accommodate the latest generation of consumer-grade GPUs.
We’re expecting a replacement configuration in the same price range to be available by the end of the calendar year.</p> <p>As usual, please don’t hesitate to <a href="mailto:[email protected]" target="_blank" rel="noopener">reach out</a> if you have any questions!</p> <hr class="footnotes-sep"> <section class="footnotes"> <ol class="footnotes-list"> <li id="fn1" class="footnote-item"><p>In-depth technical details are available in the <a href="https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth?utm_source=noticeable&amp;utm_campaign=sherlock.new-gpu-options-in-the-sherlock-catalog&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.pQO6ll118TRDHxHxfmj1&amp;utm_medium=newspage" target="_blank" rel="noopener">NVIDIA Developer blog</a> <a href="#fnref1" class="footnote-backref">↩</a></p> </li> </ol> </section> Kilian Cavalotti[email protected]urn:noticeable:publications:BSt3Hu3ll00rfrTbrWEo2020-05-18T17:23:00.001Z2020-05-18T23:27:20.381ZNew Sherlock on-boarding sessionsOne of the most requested improvements around Sherlock services, that came out of our recent user survey, was for more documentation and more training. This is why, to help new users get familiar with Sherlock's computing environment...<p>One of the most requested improvements around Sherlock services, that came out of our recent user survey, was for more documentation and more training.</p> <p>This is why, to help new users get familiar with Sherlock’s computing environment, we’ll now be offering regular Sherlock on-boarding sessions, starting this Thursday, via Zoom:</p> <blockquote> <p><strong>Sherlock on-boarding session</strong><br> Thursday, May 21st, 11am<br> <a href="https://stanford.zoom.us/j/93659504122?utm_source=noticeable&amp;utm_campaign=sherlock.new-sherlock-on-boarding-sessions&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.BSt3Hu3ll00rfrTbrWEo&amp;utm_medium=newspage" target="_blank" rel="noopener">https://stanford.zoom.us/j/93659504122</a><br> <em>SUNetID required</em></p> </blockquote> <p>This one-hour introduction to Sherlock will present the cluster’s layout, the job scheduler and its limits, the different data storage possibilities, as well as some job submission and software installation best practices. 
So if you’re new to Sherlock or HPC in general, you’re welcome to join us to learn more!</p> <p>We will be offering these sessions on a regular basis, so look out for more announcements soon.</p> <p>On-boarding sessions will focus on new Sherlock users, but if you’re already a Sherlock user and have specific questions, always feel free to reach out to us at <a href="mailto:[email protected]" target="_blank" rel="noopener">[email protected]</a>, or to stop by during virtual office hours:</p> <ul> <li>Tuesdays, 10-11am: <a href="https://stanford.zoom.us/j/901884213" target="_blank" rel="noopener">https://stanford.zoom.us/j/901884213</a></li> <li>Thursdays, 3-4pm: <a href="https://stanford.zoom.us/j/681964418" target="_blank" rel="noopener">https://stanford.zoom.us/j/681964418</a></li> </ul>
<p>— Kilian Cavalotti</p>
<h1>A newer, faster and better /scratch</h1>
<p><em>2019-12-03</em></p>
<p>As <a href="https://news.sherlock.stanford.edu/posts/more-scratch-space-for-everyone" target="_blank" rel="noopener">we just announced</a>, Sherlock now features a brand new storage system for <code>/scratch</code>. But what was the old system, what does the new one look like, and how did the move happen? Read on to find out!</p> <h2>The old</h2> <p>Since its early days, Sherlock ran its <code>/scratch</code> filesystem on a storage system that was donated by <a href="//www.intel.com" target="_blank" rel="noopener">Intel</a> and <a href="//www.dell.com" target="_blank" rel="noopener">Dell</a>.</p> <p>Dubbed <em>Regal</em>, it was one of the key components of the Sherlock cluster when we started it in early 2014, with an initial footprint of about 100 compute nodes. Its very existence allowed us to scale the cluster to more than 1,500 nodes today, almost entirely through Faculty and PI contributions to its condominium model. That’s a 15x growth in 5 years, and adoption has been spectacular.</p> <p>Regal was initially just over 1PB when it was deployed in May 2013, which was quite substantial at the time.
And similarly to the compute part of the cluster, its modular design allowed us to expand it to over 3PB with contributions from individual research groups.</p> <p>We had a number of adventures with that system, including a <a href="https://news.sherlock.stanford.edu/posts/adventures-in-storage?utm_source=noticeable&amp;utm_campaign=sherlock.a-new-scratch&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.OYMZ9enkjo02jJ1V2vtK&amp;utm_medium=newspage" target="_blank" rel="noopener">major scale disk replacement operation</a>, where we replaced about a petabyte of hard drives in production, while continuing to serve files to users ; or a literal drawer explosion in one of the disk arrays!</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/OYMZ9enkjo02jJ1V2vtK/01h55ta3gsktzvgnjvd176d3yj-image.jpg" alt="kaboom"></p> <p>It’s been fun, and again, invaluable to our users.</p> <p>But time has come to retire it, and replace it with a newer, faster and better solution, to accommodate the ever-growing storage needs of Sherlock’s ever-growing community.</p> <h2>The new</h2> <p>This year, we stood up a completely new and separate <code>/scratch</code> filesystem for Sherlock.</p> <p>Nicknamed <em>Fir</em> (we like trees), this new storage system features:</p> <ul> <li>multiple metadata servers and faster metadata storage for better responsiveness with interactive operations,</li> <li>faster object storage servers,</li> <li>a faster backend interconnect, for lower latency operations across storage servers,</li> <li>more and faster storage routers to provide more bandwidth from Sherlock to <code>/scratch</code>,</li> <li>more space to share amongst all Sherlock users,</li> <li>a newer version of <a href="http://lustre.org?utm_source=noticeable&amp;utm_campaign=sherlock.a-new-scratch&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.OYMZ9enkjo02jJ1V2vtK&amp;utm_medium=newspage" target="_blank" rel="noopener">Lustre</a> which provides: <ul> <li>improved client performance,</li> <li>dynamic file striping to automatically adapt file layout and I/O performance to match a file’s size</li> <li>and <a href="http://wiki.lustre.org/Lustre_2.12.0_Changelog?utm_source=noticeable&amp;utm_campaign=sherlock.a-new-scratch&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.OYMZ9enkjo02jJ1V2vtK&amp;utm_medium=newspage" target="_blank" rel="noopener">much more</a>!</li> </ul></li> </ul> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/OYMZ9enkjo02jJ1V2vtK/01h55ta3gs7981c7jwnsydp3eq-image.jpg" alt="new"></p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/OYMZ9enkjo02jJ1V2vtK/01h55ta3gswz2f897qg6s6ha96-image.png" alt="logo-io500.png"> And not to brag, but Fir has been ranked #15 in the <a href="//www.vi4io.org/io500/list/19-11/10node" target="_blank" rel="noopener">IO-500 list of the fastest storage systems in the world</a>, in the 10-node challenge category, that was released at <a href="//sc19.supercomputing.org" target="_blank" rel="noopener">SC’19</a>. 
So yes, it’s decently fast.</p> <h2>The migration</h2> <p>Now, usually, when a new filesystem is made available on a computing system, there are two approaches:</p> <p>One is making the new system available under a new mount point (like <code>/scratch2</code>) and telling users: “here’s the new filesystem, the old one will go away soon, you have until next Monday to get your files there and update all your scripts.”<br> This usually results in a lot of I/O traffic going on at once from all the users rushing to copy their data to the new space, potential mistakes, confusion, and in the end, a lot of frustration, additional work and unnecessary stress on everyone. Not good.</p> <p>The other one is for sysadmins to copy all of the existing data from the old system to the new one in the background, in several passes, and then schedule a (usually long) downtime to run a last synchronization pass and swap in the new filesystem for the old one under the same mount point (<code>/scratch</code>).<br> This also brings significant load on the filesystem while the synchronization passes are running, taking I/O resources away from legitimate user jobs; it’s usually a very long process, and in the end it brings over old and abandoned files to the new storage system, wasting precious space. Not optimal either.</p> <p>So we decided to take another route, and devised a new scheme. We spent some time (and fun!) designing and developing a new kind of overlay layer, to bridge the gap between Regal and Fir, and to transparently migrate user data from one to the other.</p> <p>We (aptly) named this layer <code>migratefs</code> and open-sourced it at:<br> <a href="https://github.com/stanford-rc/fuse-migratefs" target="_blank" rel="noopener">https://github.com/stanford-rc/fuse-migratefs</a>.</p> <p><img src="https://docs.google.com/drawings/d/e/2PACX-1vT0p9txFKOVS9GazuZFIfolJp0ksmlXNlb0MsjyR_F3rPNtdXEe3ho25lpW55sNKk_NHmc0WyErQnCA/pub?w=484&amp;h=195" alt="migratefs"></p> <p>The main idea of <code>migratefs</code> is to take advantage of user activity to:</p> <ol> <li>distribute the data transfer tasks across all of the cluster nodes, to reduce the overall migration time,</li> <li>only migrate data that is actively in use, and leave older files that are never accessed nor modified on the old storage system, resulting in a new storage system that only stores relevant data,</li> <li>migrate all the user data transparently, without any downtime.</li> </ol> <p>So over the last few months, all of the active user data on Regal has been seamlessly migrated to Fir, without users having to modify any of their job scripts, and all without a downtime.</p> <p>Which is why if you’re using <code>$SCRATCH</code> or <code>$GROUP_SCRATCH</code> today, you are actively using the new storage system, and all your active data is there already, ready to be used in your compute jobs.</p> <h2>Next steps</h2> <p>Now, Regal has been emptied of all of its data and has been retired. It’s currently being un-racked to make room for future Sherlock developments. And stay tuned, because… <em>epic</em> changes are coming!</p>
<p>— Kilian Cavalotti</p>
<h1>More (and easier!) GPU scheduling options</h1>
<p><em>2019-11-05</em></p>
<p>GPU scheduling is now easier and more powerful on Sherlock, with the addition of new job submission options especially targeted at GPU workloads.</p> <p>The most visible change is that you can now use the <code>--gpus</code> option when submitting jobs, like this:</p> <pre><code class="hljs language-shell"><span class="hljs-meta">$</span><span class="bash"> srun -p gpu --gpus=2 ...</span> </code></pre> <p>A number of additional submission options can now be used, such as:</p> <ul> <li><code>--cpus-per-gpu</code>, to request a number of CPUs per allocated GPU,</li> <li><code>--gpus-per-node</code>, to request a given number of GPUs per node,</li> <li><code>--gpus-per-task</code>, to request a number of GPUs per spawned task,</li> <li><code>--mem-per-gpu</code>, to allocate a given amount of host memory per GPU.</li> </ul> <p>You can now also allocate a different number of GPUs per node on multi-node jobs, change the frequency of the GPUs allocated to your job and explicitly set task-to-GPU binding maps.</p>
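<p>Put together, a batch submission combining a few of these options could look something like the sketch below. This is only an illustration: the resource counts, time limit and program name are placeholders to adapt to your own workload, not recommended values.</p><pre><code>#!/bin/bash
# 2 nodes, 4 GPUs per node, 8 CPUs and 32GB of host memory per GPU
#SBATCH -p gpu
#SBATCH -N 2
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-gpu=8
#SBATCH --mem-per-gpu=32G
#SBATCH -t 2:00:00

srun ./my_gpu_application    # placeholder for your actual program</code></pre>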
<p>All of those options are detailed in the updated documentation at <a href="https://www.sherlock.stanford.edu/docs/user-guide/gpu/" target="_blank" rel="noopener">https://www.sherlock.stanford.edu/docs/user-guide/gpu/</a>, and a more complete description is available in the <a href="https://slurm.schedmd.com/srun.html#OPT_gpus" target="_blank" rel="noopener">Slurm manual</a>.</p> <p>Under the hood, the scheduler is now fully aware of the specifics of each GPU node: it knows how GPUs on the same node are inter-connected and how they map to CPU sockets, and it can select preferred GPUs for co-scheduling. It has all the information it needs to make optimal decisions about the placement of tasks within a job.</p> <p>The end result? Better performance with less hassle for multi-GPU jobs.</p> <p>So please take the new options for a spin, and <a href="mailto:[email protected]" target="_blank" rel="noopener">let us know</a> how they work for your jobs!</p>
<p>— Kilian Cavalotti</p>
<h1>A better view at Sherlock’s resources</h1>
<p><em>2019-05-03</em></p>
<p><em>How many jobs are running?</em><br> <em>What partitions do I have access to?</em><br> <em>How many CPUs can I use?</em><br> <em>Where should I submit my jobs?</em></p> <p>Any of those sound familiar?</p> <p>We know it’s not always easy to navigate the native scheduler tools, their syntax, and the gazillion options they provide.</p> <h2>Enter <code>sh_part</code></h2> <p>So today, we’re introducing <code>sh_part</code><sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup>, a new command on Sherlock that will simplify navigating Sherlock’s partitions, and provide a user-focused, centralized view of its computing resources.</p> <p>To run it, simply type <code>sh_part</code> at the prompt on any login or compute node, and you’ll be greeted by something like this:</p> <pre><code class="hljs language-shell"><span class="hljs-meta">$</span><span class="bash"> sh_part</span>
     QUEUE  FREE TOTAL  FREE TOTAL RESORC  OTHER MAXJOBTIME   CORES     NODE    GRES
 PARTITION CORES CORES NODES NODES PENDNG PENDNG  DAY-HR:MN PERNODE   MEM-GB (COUNT)
   normal*    30  1600     0    76   2801   2278    7-00:00   20-24  128-191 -
    bigmem     0    88     0     2     90      1    1-00:00   32-56 512-3072 -
       dev    50    56     2     3     32      0    0-02:00   16-20      128 -
       gpu    62   140     0     7    121      0    7-00:00   16-24  191-256 gpu:8(1),gpu:4(6)
</code></pre> <p>You’ll find a brief list of partitions you have access to, complete with information about the number of available nodes/cores and pending jobs.</p> <ul> <li>in the <code>QUEUE PARTITION</code> column, the <code>*</code> character indicates the default partition,</li> <li>the <code>RESOURCE PENDING</code> column shows the core count of pending jobs that are waiting on resources,</li> <li>the <code>OTHER PENDING</code> column lists core counts for jobs that are pending for other reasons, such as licenses, user, group or any other limit,</li> <li>the <code>GRES</code> column shows the number and type of GRES available in that partition, and the number of nodes that feature that specific GRES combination in parentheses. So for instance, in the output above, the <code>gpu</code> partition features 1 node with 8 GPUs, and 6 nodes with 4 GPUs each.</li> </ul> <p>Hopefully <code>sh_part</code> will make it easier to figure out cluster activity, and allow users to get a better understanding of what’s running and what’s available in the various Sherlock partitions.</p> <p>As usual, if you have any question or comment, please don’t hesitate to reach out at <a href="mailto:[email protected]" target="_blank" rel="noopener">[email protected]</a>.</p> <hr class="footnotes-sep"> <section class="footnotes"> <ol class="footnotes-list"> <li id="fn1" class="footnote-item"><p><code>sh_part</code> is based on the <a href="https://github.com/mercanca/spart" target="_blank" rel="noopener"><code>spart</code></a> tool, written by Ahmet Mercan. <a href="#fnref1" class="footnote-backref">↩</a></p> </li> </ol> </section>
<p>— Kilian Cavalotti</p>
<h1>New GPU node available on Sherlock</h1>
<p><em>2019-02-16</em></p>
<p>There’s a new GPU node in the <code>gpu</code> partition!</p> <p>It’s notable for a list of reasons. This is the first node on Sherlock to feature both:</p> <ol> <li>the latest generation of Intel CPUs (Skylake),</li> <li>the latest generation of computing-optimized NVIDIA GPUs,</li> </ol> <p>and it’s also the first node on Sherlock with 32GB GPUs, which is particularly interesting for a lot of deep-learning-oriented workloads.</p> <h3>Specs</h3> <p>This compute node features:</p> <ul> <li>24x Intel Skylake CPU cores (2x <a href="https://ark.intel.com/products/120473/Intel-Xeon-Gold-5118-Processor-16-5M-Cache-2-30-GHz-" target="_blank" rel="noopener">Xeon 5118</a>, 2.30GHz),</li> <li>192GB of memory (RAM),</li> <li>4x <a href="https://www.nvidia.com/en-us/data-center/tesla-v100/" target="_blank" rel="noopener">NVIDIA Tesla V100 GPUs</a> with 32GB of GPU memory each.</li> </ul> <h3>Details</h3> <p>Nodes in the <code>gpu</code> partition are now available to everyone on Sherlock, and the new node can be requested by adding the following flag to your job submission options: <code>-C "GPU_SKU:V100_SXM2&amp;GPU_MEM:32GB"</code></p> <p>To request an interactive session on a Tesla V100 GPU with 32GB of memory, you can run:</p> <pre><code class="hljs language-shell"><span class="hljs-meta">$</span><span class="bash"> srun -p gpu --gres gpu:1 -C <span class="hljs-string">"GPU_SKU:V100_SXM2&amp;GPU_MEM:32GB"</span> --pty bash</span> </code></pre>
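<p>The same constraint can go into a batch script as well. Here is a minimal sketch; the time limit and the <code>train.py</code> script are only placeholders for your own settings and code:</p><pre><code>#!/bin/bash
# one V100 GPU with 32GB of GPU memory, for one hour
#SBATCH -p gpu
#SBATCH --gres gpu:1
#SBATCH -C "GPU_SKU:V100_SXM2&amp;GPU_MEM:32GB"
#SBATCH -t 1:00:00

nvidia-smi          # record which GPU the job was given
python train.py     # placeholder for your own workload</code></pre>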
<p>To see the list of all the available GPU features and characteristics that can be requested in the <code>gpu</code> partition:</p> <pre><code class="hljs language-shell"><span class="hljs-meta">$</span><span class="bash"> sh_node_feat -p gpu | grep GPU</span>
GPU_BRD:GEFORCE
GPU_BRD:TESLA
GPU_CC:3.5
GPU_CC:3.7
GPU_CC:5.2
GPU_CC:6.0
GPU_CC:6.1
GPU_CC:7.0
GPU_GEN:KPL
GPU_GEN:MXW
GPU_GEN:PSC
GPU_GEN:VLT
GPU_MEM:12GB
GPU_MEM:16GB
GPU_MEM:24GB
GPU_MEM:32GB
GPU_MEM:6GB
GPU_SKU:K20X
GPU_SKU:K80
GPU_SKU:P100_PCIE
GPU_SKU:P100_SXM2
GPU_SKU:P40
GPU_SKU:TITAN_BLACK
GPU_SKU:TITAN_X
GPU_SKU:TITAN_Xp
GPU_SKU:V100_SXM2
</code></pre> <p>For more details about GPUs on Sherlock, see the <a href="https://www.sherlock.stanford.edu/docs/user-guide/gpu/" target="_blank" rel="noopener">GPU user guide</a>.</p> <p>If you have any question, feel free to send us a note at <a href="mailto:[email protected]" target="_blank" rel="noopener">[email protected]</a>.</p>
<p>— Kilian Cavalotti</p>
<h1>A better way to check quotas on Sherlock</h1>
<p><em>2019-02-14</em></p>
<p>We’re very pleased to introduce a new way to check data usage on Sherlock, all from a single command, and using a hopefully simpler way to display information than before.</p> <h3>Introducing <code>sh_quota</code></h3> <p><code>sh_quota</code>, the new quota checking tool for Sherlock, displays quota usage on the different Sherlock filesystems, using a familiar and consistent format:</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/Iq3H6XsG1JOBf8jv6l4W/01h55ta3gsrd8p5jewb01b7my7-image.png" alt="sh_quota.png"></p> <p>It can be used to display usage on a single filesystem, or in the context of a different group, for users who are affiliated with multiple groups on Sherlock.</p> <p>You probably also noticed that quota information is now automatically displayed when you log in to Sherlock. This provides a quick and easy way to see if any of the limits is reached, and can save some time diagnosing errors later on if, for instance, your <code>$HOME</code> quota is exceeded.</p> <p>If you don’t want your quota information to be displayed at login, you can easily disable it by creating a <code>~/.sh_noquota</code> file in your <code>$HOME</code> directory:</p> <pre><code class="hljs language-shell"><span class="hljs-meta">$</span><span class="bash"> touch ~/.sh_noquota</span> </code></pre> <p>and the status information will be gone the next time you connect.</p> <p>For complete details and usage examples, please refer to the <a href="https://www.sherlock.stanford.edu/docs/storage/overview/#checking-quotas" target="_blank" rel="noopener">Checking Quotas</a> section of the <a href="https://www.sherlock.stanford.edu/docs/storage" target="_blank" rel="noopener">Sherlock storage documentation</a>.</p>
<p>— Kilian Cavalotti</p>
<h1>Sherlock OnDemand</h1>
<p><em>2018-11-22</em></p>
Today, we’re announcing Sherlock OnDemand, a brand new way to use the Sherlock cluster.
Hot on the heels of the SC18 Supercomputing Conference, and right in time for the long Thanksgiving week-end, we thought that a good way to thank...<p><strong>Today, we’re announcing <a href="https://login.sherlock.stanford.edu?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Sherlock OnDemand</a>, a brand new way to use the <a href="https://www.sherlock.stanford.edu?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Sherlock cluster</a>.</strong></p> <p>Hot on the heels of the <a href="https://sc18.supercomputing.org?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">SC18 Supercomputing Conference</a>, and right in time for the long Thanksgiving week-end, <a href="https://srcc.stanford.edu?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">we</a> thought that a good way to thank our users, who, from grad students to Faculty members, have been showing their appreciation and unfaltering support over the years, would be to demonstrate our commitment to provide them with innovative and easier ways to use computing resources to support their research.</p> <p>This is why, after a long wait, we’re extremely pleased to announce the immediate availability of the new <a href="https://login.sherlock.stanford.edu?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Sherlock OnDemand</a> service.</p> <h2>A revolutionary way to work on Sherlock</h2> <p><strong>Sherlock OnDemand</strong> is a completely new way to interact with the computing and data storage resources provided on Sherlock.</p> <p>From the comfort of their web browser, users can now connect to Sherlock, compose, submit and monitor jobs, manage their files, and run interactive applications, such as <a href="https://www.jupyter.org?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Jupyter Notebooks</a>, <a href="https://www.rstudio.com/products/rstudio-server?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">RStudio</a>, or <a href="https://www.tensorflow.org/guide/summaries_and_tensorboard?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Tensorboard</a> sessions.</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gs8g0y2ddfc0k35dgg-image.png" alt="ood_dash.png"></p> <h3>A Sherlock 
shell from your browser</h3> <p>Yes, that means that you can now connect to Sherlock from your web browser. No SSH client required. No need to mess around in <code>/.ssh/config</code> anymore, no more <a href="https://www.sherlock.stanford.edu/docs/advanced-topics/connection/?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage#gssapi" target="_blank">Kerberos</a> headaches, no more repeated two-step authentication confirmation either. Windows users rejoice!</p> <p>Just point your browser to the <a href="https://login.sherlock.stanford.edu/pun/sys/shell/ssh/login?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Sherlock OnDemand login URL</a> and shell away!</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gsw4agvv6mjxeaxmkp-image.png" alt="ood_ssh.png"></p> <h3>So long WinSCP!</h3> <p>Ever dreamed about being able to browse your files on Sherlock in a graphical way, without having to install <a href="https://www.sherlock.stanford.edu/docs/storage/data-transfer/?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage#sftp-secure-file-transfer-protocol" target="_blank">additional programs</a>, or <a href="https://www.sherlock.stanford.edu/docs/storage/data-transfer/?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage#sshfs" target="_blank">mounting Sherlock’s filesystems</a> on your local machine (sometimes awkwardly, if possible at all)?</p> <p>Ever been frustrated that the <a href="https://www.sherlock.stanford.edu/docs/storage/data-transfer/?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage#globus" target="_blank">Globus</a> web interface didn’t offer a "<em>right click &gt; download</em>" option?</p> <p>Well, not only does <a href="https://login.sherlock.stanford.edu?utm_source=noticeable&amp;utm_campaign=sherlock.sherlock-on-demand&amp;utm_content=publication+link&amp;utm_id=bYyIewUV308AvkMztxix.GtmOI32wuOUPBTrHaeki.TuNWR5Pb9wdESt911haR&amp;utm_medium=newspage" target="_blank">Sherlock OnDemand</a> allow all of this without the blink of an eye, but you can now view, edit, manipulate and transfer your files to (and from) Sherlock, from the comfort of your regular web browser.</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gs70mxc6y0qr1986ks-image.png" alt="ood_fs.png"></p> <h3>All your jobs belong to Sherlock OnDemand.</h3> <p>Check out the queue, submit new jobs, cancel the ones you don’t like. All in one place, not a single shell command involved. 
Check this out!</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gsake3bg691kapjtqh-image.png" alt="ood_job.png"></p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gs3hwqk6143xe2en1k-image.png" alt="ood_jobs.png"></p> <h3>Interactive apps</h3> <p>And 🍒 on top, interactive apps!</p> <table> <tbody> <tr><td style="text-align:center"><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gsf12p8q93pxstvjsc-image.png" alt="jupyter.png"></td><td style="text-align:center"><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gsbjckwgzf50jfg0xb-image.png" alt="tb.png"></td><td style="text-align:center"><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gswg9497aeb0k6m9ac-image.png" alt="rstudio.png"></td></tr> </tbody> </table> <p>You can now start Jupyter Notebooks or RStudio directly from your web browser. No more SSH tunnel to configure or convoluted setup process. Choose the application you need from the list of available apps, fill out the form to tune your session to your computing needs, and submit. Your interactive app will be scheduled on a compute node, and you’ll be able to connect to it at the click of a button.</p> <p><img src="https://storage.noticeable.io/projects/bYyIewUV308AvkMztxix/publications/TuNWR5Pb9wdESt911haR/01h55ta3gsv84pkr8g08q0mwx1-image.png" alt="ood_apps.png"></p> <p><a href="https://www.jupyter.org" target="_blank">Jupyter</a>, <a href="https://www.rstudio.com/products/rstudio-server" target="_blank">RStudio</a>, and <a href="https://www.tensorflow.org/guide/summaries_and_tensorboard" target="_blank">Tensorboard</a> are just the beginning! Stay tuned for more apps, coming soon to a Sherlock OnDemand browser tab near you.</p> <h2>Documentation</h2> <p>For complete details about Sherlock OnDemand, please see the <a href="https://www.sherlock.stanford.edu/docs/user-guide/ondemand/" target="_blank">documentation</a> we’ve prepared.</p> <p>And as usual, if you have any question, comment or suggestion, don’t hesitate to reach out at <a href="mailto:[email protected]" target="_blank">[email protected]</a>.</p> <p>Happy computing!</p>
<p>— Kilian Cavalotti</p>