By iwano@_84 | Posted on December 2, 2021

The National Renewable Energy Laboratory (NREL) on Wednesday announced its new supercomputer, called Kestrel. The new system will be built by Hewlett Packard Enterprise (HPE) and will be powered by Intel's Xeon Scalable Sapphire Rapids processors as well as a mysterious "Nvidia A100Next Tensor Core compute GPU." The upcoming Kestrel supercomputer will feature HPE's Cray EX architecture (like most heterogeneous HPC machines today), will deliver performance of about 44 FP64 PetaFLOPS, which is in line with what the No. 7 most powerful supercomputer in the world offers today, and will provide 75 PB of storage. Starting in early 2023, NREL's Kestrel will accelerate energy efficiency and renewable energy research, the organization said.

The fact that NREL needs higher performance to do better research is hardly surprising, but the choice of hardware is somewhat unconventional. While most supercomputers tend to pick AMD's EPYC processors for their superior core count compared to their Intel rivals, Kestrel uses Intel's Sapphire Rapids. The system will also use Nvidia's "A100Next Tensor Core GPUs to accelerate AI," and this branding raises more questions than it answers.

Normally, one would expect Nvidia to introduce a brand-new compute GPU architecture by 2023, and the latest rumors indicate that this architecture is called Hopper and that it will use one of TSMC's N5 process technologies. Meanwhile, the name A100Next may signify a next-generation compute GPU based on whatever architecture comes next, or it may indicate that we are dealing with another incarnation of Nvidia's Ampere architecture. NextPlatform speculates that the A100Next could be "a die shrink to 5nm" with "more compute units, potentially doubling via a chiplet design." In fact, this sounds like a rumored multi-die Hopper-based design.

Intel's Sapphire Rapids CPU and Nvidia's A100 GPU will work perfectly well together over a PCIe bus, but they will not be able to efficiently share memory pools, since Intel's processor does not support NVLink while Nvidia's A100 compute modules do not support the CXL protocol. Perhaps there will be some changes with the next-generation Nvidia compute GPU to address this shortcoming, but we will have to wait and see. At this point we do not know anything concrete about Nvidia's H100 or A100Next parts, though CXL support is quite likely.

Nvidia will also be happy to win a supercomputer contract even before formally announcing its next-generation compute GPU architecture. We could speculate about Nvidia's plans for its next-generation compute GPUs, but the company knows how to keep secrets and how to surprise. Still, NREL's use of the "A100Next" name is a rather odd way to refer to Nvidia's upcoming compute GPU. Perhaps Nvidia just didn't want anyone explicitly saying it is Hopper H100, saving the details for a later date.

HARDWARE | Tags: Compute, Decided, GPU, NextGen, Nvidias, Supercomputer