ClusterShell on Sherlock

Ever wondered how your jobs were doing while they were running? Keeping an eye on a log file is nice, but what if you could quickly gather process lists, usage metrics, and other data points from all the nodes your multi-node jobs are running on, all at once?

Enter ClusterShell, the best parallel shell application (and library!) of its kind.

With ClusterShell on Sherlock, you can quickly run a command on all the nodes your job is running on and collect its live output in real time, without having to wait for the job to end to see how it did. And thanks to its tight integration with the job scheduler, there's no need to fiddle with manual node lists anymore: all it needs is a job id!
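For instance, here's a quick sketch of gathering load and memory usage from every node of a running job, assuming your job id is stored in $JOBID (clush's -b option consolidates identical output across nodes):

$ clush -w @job:$JOBID -b free -h    # -b gathers and de-duplicates per-node output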

Allocated a few nodes in an interactive session and want to distribute files to each node's local storage? Check: ClusterShell has a copy mode just for this.
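As a minimal sketch, assuming your job id is in $JOBID, and with input.dat and /tmp/ as placeholders for your own file and destination directory, clush's copy mode would look like this:

$ clush -w @job:$JOBID --copy input.dat --dest /tmp/    # input.dat and /tmp/ are placeholders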

Want to double-check that your processes are correctly laid out? Check: you can display the process tree across all the nodes allocated to your job with:

$ clush -w @job:$JOBID pstree -au $USER

and verify that all your processes are running correctly.
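And if you just need the list of nodes themselves, ClusterShell's nodeset utility can expand that same job group into individual hostnames (a sketch, assuming the same $JOBID variable):

$ nodeset -e @job:$JOBID    # -e expands the group into a space-separated node list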

You’ll find more details and examples in our Sherlock documentation, at https://www.sherlock.stanford.edu/docs/software/using/clustershell

Questions, ideas, or suggestions? Don’t hesitate to reach out to [email protected] to let us know!
