<h1>Sherlock changelog</h1>
<p><a href="https://www.sherlock.stanford.edu">www.sherlock.stanford.edu</a></p>
<hr>
<h1>A new tool to help optimize job resource requirements</h1>
<p><em>2023-03-25, by Kilian Cavalotti</em></p>
<p>It’s not always easy to determine the right amount of resources to request for a computing job: you want to make sure that the application will have enough resources to run properly, while avoiding over-requests that would make the job spend too much time waiting in queue for resources it won’t be using.</p>
<p>To help users inform those choices, we’ve just added a new tool to the <a href="https://www.sherlock.stanford.edu/docs/software/list/">module list</a> on Sherlock. <code><a href="https://github.com/JanneM/Ruse">ruse</a></code> is a command-line tool developed by <a href="https://github.com/JanneM">Jan Moren</a> that facilitates measuring processes’ resource usage. It periodically samples the resource use of a process and its sub-processes, and can help users find out how many resources to allocate to their jobs. It will determine the actual memory, execution time and cores that individual programs or MPI applications need, so those values can be used in job submission options.</p>
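<p>As a quick illustration, a session could look something like this (a minimal sketch: the program name is a placeholder, and the exact module name and report location are assumptions, so check the documentation linked below for actual usage):</p>
<pre><code># load the ruse module (module name assumed) and run the program under it
$ ml ruse
$ ruse ./my_simulation input.dat

# ruse periodically samples the process tree while the program runs, and writes
# a small report (typically next to the program, e.g. my_simulation.ruse) with
# the peak memory, elapsed time and number of cores actually used, which can
# then be translated into --mem, --time and --cpus-per-task values
</code></pre>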
<p>You’ll find more information and some examples in the Sherlock documentation at <a href="https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/#resource-requests">https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/#resource-requests</a>.</p>
<p>Hopefully <code>ruse</code> will make it easier to write job resource requests, and allow users to get a better understanding of their applications’ behavior to take better advantage of Sherlock’s capabilities.</p>
<p>As usual, if you have any questions or comments, please don’t hesitate to reach out at <a href="mailto:[email protected]">[email protected]</a>.</p>
<hr>
<h1>Job #1, again!</h1>
<p><em>2022-11-06, by Kilian Cavalotti</em></p>
<p>This is not the first time: we’ve been through this already (not so long ago, actually), but today, the Slurm job id counter was reset and went from job #67043327 back to job #1.</p>
<pre><code>       JobID  Partition               Start
------------ ---------- -------------------
    67043327     normal 2022-11-05T10:18:32
           1     normal 2022-11-05T10:18:32
</code></pre>
<p>The largest job id that the scheduler can assign on Sherlock is 67,043,327. So when that number is reached, the next submitted job is assigned job id #1.</p>
<p>This is the second time this job id reset has happened in Sherlock’s history, since it debuted in 2014. The <a href="https://news.sherlock.stanford.edu/publications/job-1">first occurrence</a> happened on May 11th, 2020, just a little under 2.5 years ago.</p>
<p>It took about 6 years to submit the first 67 million jobs on Sherlock, but it’s incredible to realize that it took less than half that time to reach that staggering number of submitted jobs once again, and that it all happened since the beginning of the pandemic.</p>
<p>This is a humbling illustration of Sherlock’s central role and its importance to the Stanford research community, especially over the last few months. It gives us once again the opportunity to thank each and every one of you, Sherlock users, for your continuous support, your extraordinary motivation and all of your patience and understanding when things break. We’ve never been so proud of supporting your amazing work, especially during those particularly trying times. Stay safe and happy computing!</p>
<hr>
<h1>A new interactive step in Slurm</h1>
<p><em>2021-06-03, by Kilian Cavalotti</em></p>
<blockquote><p>A new version of the <code>sh_dev</code> tool has been released, which leverages a recently-added <a href="https://slurm.schedmd.com/">Slurm</a> feature.</p></blockquote>
<p>Slurm 20.11 introduced a new <a href="https://github.com/SchedMD/slurm/blob/slurm-20-11-0-1/RELEASE_NOTES#L72-L74">“interactive step”</a>, designed to be used with <code><a href="https://slurm.schedmd.com/salloc.html">salloc</a></code> to automatically launch a terminal on an allocated compute node. This new type of job step resolves a number of problems with the previous interactive job approaches, both in terms of accounting and resource allocation.</p>
<h2>What is this about?</h2>
<p>In previous versions, launching an interactive job with <code>srun --pty bash</code> would create a step 0 that consumed resources, especially Generic Resources (GRES, <em>i.e.</em> GPUs). Among other things, it made it impossible to use <code>srun</code> within that allocation to launch subsequent steps: any attempt would result in a “step creation temporarily disabled” error message.</p>
<p>Now, with this new feature, you can use <code>salloc</code> to directly open a shell on a compute node. The new interactive step won’t consume any of the allocated resources, so you’ll be able to start additional steps with <code>srun</code> within your allocation.</p>
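<p>For instance, something like this now works from within the allocation (a hypothetical session: the node name, job id and output are illustrative):</p>
<pre><code># from inside the shell opened by salloc, srun launches regular job steps
# on the allocated node, with no "step creation temporarily disabled" error
[kilian@sh02-01n46 ~] (job 25753490) $ srun hostname
sh02-01n46
</code></pre>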
<p><code>sh_dev</code> (<em>aka</em> <code>sdev</code>) has been updated to use interactive steps.</p>
<h2>What changes?</h2>
<h3>For <code>sh_dev</code></h3>
<p>On the surface, nothing changes: you can continue to use <code>sh_dev</code> exactly like before, to start an interactive session on one of the compute nodes dedicated to that task (the default), or on a node in any partition (which is particularly popular among node owners). You’ll be able to use the same options, with the same features (including X11 forwarding).<br>Under the hood, though, you’ll be leveraging the new interactive step automatically.</p>
<h3>For <code>salloc</code></h3>
<p>If you use <code>salloc</code> on a regular basis, the main change is that the resulting shell will open on the first allocated node, instead of the node you ran <code>salloc</code> on:</p>
<pre><code>[kilian@sh01-ln01 login ~]$ salloc
salloc: job 25753490 has been allocated resources
salloc: Granted job allocation 25753490
salloc: Nodes sh02-01n46 are ready for job
[kilian@sh02-01n46 ~] (job 25753490) $
</code></pre>
<p>If you want to keep that initial shell on the submission host, you can simply specify a command as an argument, and that command will be executed as the calling user on the calling host:</p>
<pre><code>[kilian@sh01-ln01 login ~]$ salloc bash
salloc: job 25752889 has been allocated resources
salloc: Granted job allocation 25752889
salloc: Nodes sh02-01n46 are ready for job
[kilian@sh01-ln01 login ~] (job 25752889) $
</code></pre>
<h3>For <code>srun</code></h3>
<p>If you’re used to running <code>srun --pty bash</code> to get a shell on a compute node, you can continue to do so (as long as you don’t intend to run additional steps within the allocation).</p>
<p>But you can also just type <code>salloc</code>, get a more usable shell, and save 60% in keystrokes!</p>
<hr>
<p>Happy computing! And as usual, please feel free to <a href="mailto:[email protected]">reach out</a> if you have comments or questions.</p>
<hr>
<h1>Job #1</h1>
<p><em>2020-05-11, by Kilian Cavalotti</em></p>
<p>If you’ve been submitting jobs on Sherlock over the last couple of days, you probably noticed something different about your job ids… They lost a couple of digits!</p>
<p>If you submitted a job last week, its job id was likely in the 67,000,000s. Today, it’s back in the 100,000s. What happened? Did we reset anything? Did we start simplifying job ids because there were too many numbers to keep track of?</p>
<p>Not really.</p>
<p>It’s just that so many jobs are submitted to Sherlock these days (and even more so since the beginning of the stay-at-home directives) that we reached the maximum job id that the scheduler can use.</p>
<p>Those job ids are roughly 26 bits in length, with a little headroom for special cases, and the largest job id that the scheduler can assign on Sherlock is 67,043,327. It means that when that number is reached, the next submitted job will be assigned job id #1.</p>
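<p>For the curious, a quick back-of-the-envelope check of that figure (our own arithmetic, just to illustrate the “roughly 26 bits” claim):</p>
<pre><code># 2^26 is just above the maximum job id; the difference is the "headroom"
$ echo $((2**26)) $((2**26 - 67043327))
67108864 65537
</code></pre>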
<p>Both job #67043327 and job #1 were submitted on Friday night, and started running Saturday morning:</p>
<pre><code>       JobID  Partition              Submit               Start
------------ ---------- ------------------- -------------------
           1     normal 2020-05-08T22:21:28 2020-05-09T06:04:50
    67043327     normal 2020-05-08T22:21:28 2020-05-09T06:05:16
</code></pre>
<p>A few months ago, we <a href="https://news.sherlock.stanford.edu/posts/job-50-000-000">celebrated job #50,000,000</a>. Today, we’re celebrating job #1, the beginning of a new cycle. :)</p>
<p>Thanks to each and every one of you, Sherlock users, for your continuous support, your extraordinary motivation and all of your patience and understanding when things break. We’ve never been so proud of supporting your amazing work, especially during those particularly trying times. Stay safe and happy computing!</p>
<hr>
<h1>More (and easier!) GPU scheduling options</h1>
<p><em>2019-11-05, by Kilian Cavalotti</em></p>
<p>GPU scheduling is now easier and more powerful on Sherlock, with the addition of new job submission options especially targeted at GPU workloads.</p>
<p>The most visible change is that you can now use the <code>--gpus</code> option when submitting jobs, like this:</p>
<pre><code>$ srun -p gpu --gpus=2 ...
</code></pre>
<p>A number of additional submission options can now be used, such as:</p>
<ul>
<li><code>--cpus-per-gpu</code>, to request a number of CPUs per allocated GPU,</li>
<li><code>--gpus-per-node</code>, to request a given number of GPUs per node,</li>
<li><code>--gpus-per-task</code>, to request a number of GPUs per spawned task,</li>
<li><code>--mem-per-gpu</code>, to allocate a given amount of host memory per GPU.</li>
</ul>
<p>You can now also allocate a different number of GPUs per node on multi-node jobs, change the frequency of the GPUs allocated to your job, and explicitly set task-to-GPU binding maps.</p>
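<p>As an illustration of how these options can combine, a batch script for a hypothetical two-node GPU job might start like this (the values and application name are placeholders, not recommendations):</p>
<pre><code>#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --nodes=2
#SBATCH --gpus-per-node=4       # 4 GPUs on each of the 2 nodes
#SBATCH --cpus-per-gpu=4        # 4 CPUs for each allocated GPU
#SBATCH --mem-per-gpu=32G       # 32 GB of host memory per GPU

srun ./my_gpu_app               # my_gpu_app is a placeholder application
</code></pre>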
<p>All of those options are detailed in the updated documentation at <a href="https://www.sherlock.stanford.edu/docs/user-guide/gpu/">https://www.sherlock.stanford.edu/docs/user-guide/gpu/</a>, and a more complete description is available in the <a href="https://slurm.schedmd.com/srun.html#OPT_gpus">Slurm manual</a>.</p>
<p>Under the hood, the scheduler is now fully aware of the specifics of each GPU node: it knows how GPUs on the same node are inter-connected and how they map to CPU sockets, and it can select preferred GPUs for co-scheduling. It has all the information it needs to make optimal decisions about the placement of tasks within a job.</p>
<p>The end result? Better performance with less hassle for multi-GPU jobs.</p>
<p>So please take the new options for a spin, and <a href="mailto:[email protected]">let us know</a> how they work for your jobs!</p>
<hr>
<h1>A better view at Sherlock’s resources</h1>
<p><em>2019-05-03, by Kilian Cavalotti</em></p>
<p><em>How many jobs are running?</em><br>
<em>What partitions do I have access to?</em><br>
<em>How many CPUs can I use?</em><br>
<em>Where should I submit my jobs?</em></p>
<p>Any of those sound familiar?</p>
<p>We know it’s not always easy to navigate the native scheduler tools, their syntax, and the gazillion options they provide.</p>
<h2>Enter <code>sh_part</code></h2>
<p>So today, we’re introducing <code>sh_part</code><sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup>, a new command on Sherlock that simplifies navigating Sherlock’s partitions and provides a user-focused, centralized view of its computing resources.</p>
<p>To run it, simply type <code>sh_part</code> at the prompt on any login or compute node, and you’ll be greeted by something like this:</p>
<pre><code>$ sh_part
     QUEUE   FREE  TOTAL   FREE  TOTAL RESORC  OTHER MAXJOBTIME    CORES     NODE   GRES
 PARTITION  CORES  CORES  NODES  NODES PENDNG PENDNG  DAY-HR:MN  PERNODE   MEM-GB (COUNT)
   normal*     30   1600      0     76   2801   2278    7-00:00    20-24  128-191 -
    bigmem      0     88      0      2     90      1    1-00:00    32-56 512-3072 -
       dev     50     56      2      3     32      0    0-02:00    16-20      128 -
       gpu     62    140      0      7    121      0    7-00:00    16-24  191-256 gpu:8(1),gpu:4(6)
</code></pre>
<p>You’ll find a brief list of the partitions you have access to, complete with information about the number of available nodes/cores and pending jobs:</p>
<ul>
<li>in the <code>QUEUE PARTITION</code> column, the <code>*</code> character indicates the default partition,</li>
<li>the <code>RESOURCE PENDING</code> column shows the core count of pending jobs that are waiting on resources,</li>
<li>the <code>OTHER PENDING</code> column lists core counts for jobs that are pending for other reasons, such as licenses, user, group or any other limit,</li>
<li>the <code>GRES</code> column shows the number and type of GRES available in that partition, with the number of nodes that feature that specific GRES combination in parentheses. For instance, in the output above, the <code>gpu</code> partition features 1 node with 8 GPUs, and 6 nodes with 4 GPUs each.</li>
</ul>
<p>Hopefully <code>sh_part</code> will make it easier to figure out cluster activity, and allow users to get a better understanding of what’s running and what’s available in the various Sherlock partitions.</p>
<p>As usual, if you have any questions or comments, please don’t hesitate to reach out at <a href="mailto:[email protected]">[email protected]</a>.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p><code>sh_part</code> is based on the <a href="https://github.com/mercanca/spart"><code>spart</code></a> tool, written by Ahmet Mercan. <a href="#fnref1" class="footnote-backref">↩</a></p></li>
</ol>
</section>
<hr>
<h1>Persistent processes on Sherlock</h1>
<p><em>2018-11-05, by Kilian Cavalotti</em></p>
<p>There are many cases where having a persistent process (or service) running alongside computing jobs is of great benefit. For instance, when data is stored in a database format to allow for highly efficient queries, jobs that want to compute against this data will need to retrieve it from a live database instance.</p>
<p>We’re providing instructions and examples in the <a href="https://www.sherlock.stanford.edu/docs/">Sherlock documentation</a> on how to run database server instances in the context of a job (with <a href="https://www.sherlock.stanford.edu/docs/software/using/mariadb">MariaDB</a> and <a href="https://www.sherlock.stanford.edu/docs/software/using/postgresql">PostgreSQL</a>), that is, subject to the regular execution time limits of a scheduled job. But what if you need those instances to run all the time?</p>
<p>Now, with <a href="https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/#persistent-jobs">persistent jobs</a>, users can submit jobs on Sherlock that will resubmit themselves when they reach their time limit, and can also conserve their <code>$JOBID</code> across re-submissions. This makes specifying job dependencies much easier, since the persistent job’s id will never change.</p>
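<p>To give an idea of the mechanics, here is one possible shape for such a job, built from standard Slurm features (a hypothetical sketch, not necessarily the exact recipe from the documentation; the signal timing, service command and paths are placeholders):</p>
<pre><code>#!/bin/bash
#SBATCH --job-name=persistent-db
#SBATCH --time=08:00:00
#SBATCH --requeue                # allow this job to be requeued
#SBATCH --signal=B:USR1@90       # send USR1 to the batch shell 90s before the time limit

# when the warning signal arrives, requeue this very job: a requeued job keeps
# its job id, so dependencies expressed against $SLURM_JOB_ID stay valid
trap 'scontrol requeue $SLURM_JOB_ID' USR1

# start the long-running service in the background (placeholder command),
# then wait so the trap can fire when the signal is delivered
mariadbd --datadir=$SCRATCH/db &
wait
</code></pre>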
<p>For more details and examples, please take a look at the <a href="https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/#persistent-jobs">persistent jobs documentation</a>, and <a href="mailto:[email protected]">let us know</a> if you have any questions.</p>
<hr>
<h1>Better error messages when submitting jobs</h1>
<p><em>2018-09-18, by Kilian Cavalotti</em></p>
<p>Sherlock now offers a better and more complete explanation when a job submission is rejected by the scheduler.</p>
<h2>What does it look like?</h2>
<p>In the most common cases, jobs that don’t meet the requirements for the partition they’re submitted to will display a more detailed message.</p>
<p>For instance, submitting a job to the <code>gpu</code> partition without <a href="https://www.sherlock.stanford.edu/docs/user-guide/gpu">requesting a GPU</a> will look like this:</p>
<pre><code>$ srun -p gpu --pty bash
srun: error: =============================================================================
ERROR: missing GPU request, job not submitted
=============================================================================
Jobs submitted to the gpu partition must explicitly request GPUs, by using
the --gres option.
-----------------------------------------------------------------------------
srun: error: Unable to allocate resources: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
</code></pre>
<p>We hope this will make things easier when a job is rejected at submission time, and help clarify some of the errors sent back by the scheduler.</p>
<p>Don’t hesitate to <a href="mailto:[email protected]">contact us</a> if you have any feedback or suggestions.</p>
<hr>
<h1>High priority QOS for owners</h1>
<p><em>2018-08-01, by Kilian Cavalotti</em></p>
<p>Today, we’re introducing a new high-priority <a href="https://www.sherlock.stanford.edu/docs/overview/glossary/#qos">QOS</a> for owner partitions.</p>
<p>In groups which have <a href="https://www.sherlock.stanford.edu/docs/overview/concepts/#the-condominium-model">purchased their own compute nodes</a> on Sherlock, users can now submit jobs to their group partition using the <code>--qos=high_p</code> option: it will give those jobs a priority boost, and will allow for distinguishing between two classes of jobs within the same partition.</p>
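<p>For example (the partition name and job script are placeholders for your own group partition and batch script):</p>
<pre><code># regular-priority job in the group's own partition
$ sbatch -p mygroup batch_job.sh

# same job, boosted ahead of the group's other pending jobs
$ sbatch -p mygroup --qos=high_p batch_job.sh
</code></pre>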
<p>This new QOS could be useful to prioritize specific jobs that need to get executed before others, to meet deadlines, or to differentiate between background and higher-priority jobs.</p>
<p>If your group is not an owner on Sherlock yet, but you’re interested in becoming one, please take a look at the <a href="https://www.sherlock.stanford.edu/#own">Sherlock website</a> and <a href="mailto:[email protected]">let us know</a> if you have any questions.</p>