New GPU options in the Sherlock catalog
September 18, 2020
Today, we’re introducing the latest generation of GPU accelerators in the Sherlock catalog: the NVIDIA A100 Tensor Core GPU.
Each A100 GPU features 9.7 TFlops of double-precision (FP64) performance, up to 312 TFlops for deep-learning applications, 40GB of HBM2 memory, and 600GB/s of interconnect bandwidth with 3rd generation NVLink connections[1].
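If you want to confirm what an allocated node exposes, a minimal CUDA sketch along the following lines (an illustration only, assuming a CUDA toolkit is available in your job environment) prints each visible device's name, memory and compute capability; an A100 reports compute capability 8.0 and close to 40GB of global memory.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Basic properties reported by the CUDA runtime for each visible GPU.
        printf("GPU %d: %s\n", dev, prop.name);
        printf("  global memory : %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("  compute cap.  : %d.%d\n", prop.major, prop.minor);
    }
    return 0;
}
```

Compile it with nvcc and run it from within a GPU job to see exactly which devices your allocation received.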
New Sherlock Catalog options
Targeting the most demanding HPC and DL/AI workloads, the three new GPU node options we’re introducing today should cover the most extreme computing needs:
- a refreshed version of the SH3_G4FP64.1 configuration features 32x CPU cores, 256GB of memory and 4x A100 PCIe GPUs,
- the new SH3_G4TF64 model features 64 CPU cores, 512GB of RAM, and 4x A100 SXM4 GPUs (NVLink),
- and the most powerful configuration, SH3_G8TF64, comes with 128 CPU cores, 1TB of RAM, 8x A100 SXM4 GPUs (NVLink) and two InfiniBand HDR HCAs, for a whopping 400Gb/s of interconnect bandwidth to keep those GPUs busy (a quick GPU-to-GPU connectivity check is sketched after this list).
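On the SXM4 configurations, the NVLink fabric lets GPUs access each other's memory directly, which is what multi-GPU communication libraries rely on to reach the advertised bandwidth. As a quick sanity check from inside a job, a short CUDA sketch like this one (again just an illustration, assuming a CUDA toolkit in the job environment) reports which device pairs can reach each other in peer-to-peer mode:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    // Query peer-to-peer accessibility for every ordered pair of visible GPUs;
    // on the NVLink-connected nodes these pairs should report direct access.
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int accessible = 0;
            cudaDeviceCanAccessPeer(&accessible, i, j);
            printf("GPU %d -> GPU %d: peer access %s\n", i, j, accessible ? "yes" : "no");
        }
    }
    return 0;
}
```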
You’ll find all the details in the Sherlock catalog (SUNet ID required).
All these configurations are available today and can be ordered online through the Sherlock order form (SUNet ID required).
Other models’ availability
We’re working on bringing a replacement for the entry-level SH3_G4FP32 model back into the catalog as soon as possible. We’re unfortunately dependent on GPU availability, as well as on the adaptations server vendors need to make to accommodate the latest generation of consumer-grade GPUs. We expect a replacement configuration in the same price range to be available by the end of the calendar year.
As usual, please don’t hesitate to reach out if you have any questions!
[1] In-depth technical details are available on the NVIDIA Developer Blog.