Nvidia and Microsoft introduced a brand-new system form factor today at the Open Compute Project's Open Compute Summit. Unlike the venerable ATX standard, this one is designed for data centers and aimed at maximizing GPU performance as part of Microsoft's Project Olympus initiative.
According to Nvidia, the HGX-1 is designed "to meet the exploding demand for AI computing in the cloud — in fields such as autonomous driving, personalized healthcare, superhuman voice recognition, data and video analytics, and molecular simulations."
Microsoft's Project Olympus has been pulling in headlines, with hardware launches from Intel, AMD, Qualcomm, and now Nvidia as well. Each of these platforms is intended to accelerate a specific type of workload or scenario. According to Microsoft, Intel's work with Project Olympus added support for Skylake processors, with future versions expected to add support for FPGA accelerators or Intel Nervana solutions. AMD's contributions are Naples-centric, as you might expect, while Qualcomm is focused on its own upcoming 48-core CPU.
Nvidia's Project Olympus platform will pack eight Pascal GPUs (GP100s) into a single chassis, all connected through NVLink, Microsoft said. Up to 32 GPUs can be supported by linking four HGX-1 systems together (it isn't clear which standard is used to link the systems themselves).
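To make that topology a little more concrete, here is a minimal CUDA sketch of how software running on such a multi-GPU box would discover its devices and enable direct GPU-to-GPU (peer) access, which is the capability NVLink provides between the GP100s. This is generic CUDA code, not anything published for HGX-1 specifically; the device count and interconnect layout it reports depend entirely on the machine it runs on.

```c
// Illustrative sketch only: enumerate visible GPUs and enable peer-to-peer
// access between every pair that supports it (over NVLink or PCIe).
// Nothing here is HGX-1-specific; it assumes only a multi-GPU system.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("GPUs visible in this chassis: %d\n", deviceCount);

    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            if (canAccess) {
                cudaSetDevice(src);
                // Allows direct loads/stores and cudaMemcpyPeer between the pair.
                cudaDeviceEnablePeerAccess(dst, 0);
                printf("GPU %d -> GPU %d: peer access enabled\n", src, dst);
            }
        }
    }
    return 0;
}
```

On an NVLink-connected system, the pairs that report peer access are the ones whose traffic bypasses PCIe entirely, which is where the bandwidth advantage of a design like this comes from.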
As Patrick Moorhead of Moor Insights & Strategy points out, the ATX comparison shows how ambitious Nvidia is being with this push. Intel's rollout of ATX in 1995 gave the PC industry a single, unified form factor to design against. It helped set the stage for an era in which system components could be assumed to be compatible with system chassis simply by conforming to the ATX standard. If the HGX-1 standard takes off in similar fashion, future HPC GPUs or CPUs would be able to take advantage of the same kind of guarantees. The HGX-1 standard is designed to allow CPUs and GPUs to connect in whatever ratio suits the workload, all through the NVLink interconnect.
Nvidia doesn't mention which CPUs this effort is compatible with, and it will be interesting to see whether any AMD-Nvidia team-ups emerge in this area in the future. AMD has its own server infrastructure and graphics division, but the RTG (Radeon Technologies Group) division within AMD operates much more autonomously now than it did in the past, and Nvidia has a vastly larger share of the HPC market than AMD does.
It might make sense for AMD's server CPU team and Nvidia's HPC division to work together to expand the cloud computing market, even though the companies are competitors in other business segments. AMD announced its own AI products, dubbed Radeon Instinct, late last year, but has yet to announce any major hardware partners or system designs.