Intel introduces GPU Max accelerators for data centers – up to 2.4x faster than NVIDIA A100

Intel has announced accelerators based on Data Center GPU Max GPUs. Formerly known under the code name Ponte Vecchio, they are now an official part of the Intel Max series products, which also include the already mentioned Xeon Sapphire Rapids server processors with integrated HBM2e memory, under the name Xeon Max.

Image source: Intel

During a presentation at SC22, an event dedicated to server technologies and AI, Intel shared performance data for its new products. The Intel Data Center GPU Max accelerator has 128 Xe cores and 128 ray-tracing units, making it the only server accelerator with native hardware acceleration for ray tracing. The company also cites up to 64 MB of L1 cache and up to 408 MB of L2 cache.

Intel Data Center GPU Max GPUs combine over 100 billion transistors on a single substrate in 47 chiplets built on different manufacturing processes (Intel 7, TSMC N5, and TSMC N7) and interconnected via EMIB interfaces and Foveros packaging technology.

The Intel Data Center GPU Max server accelerators will come in several form factors aimed at different tasks. The PCI Express add-in card version, the Max 1100, offers a 300 W TDP, 56 Xe cores, and 48 GB of HBM2e memory. Using Intel Xe Link bridges, up to four such cards can be combined into a cluster.

The Max 1350 accelerators will be available as OAM modules with a 450 W TDP, 112 Xe cores, and 96 GB of HBM2e memory. The flagship Max 1550 OAM module features a 600 W TDP, 128 Xe cores, and 128 GB of HBM2e memory.

The company notes that the Xe-HPC architecture of the new compute accelerators allows up to eight OAM modules to be combined. Intel has provided data for the following configurations:

  • One OAM module: 128 GB HBM2e, 128 Xe cores, 600 W TDP, 52 TFLOPS performance, 3.2 TB/s memory bandwidth;
  • Two OAMs: 256 GB HBM2e, 256 Xe cores, 1200 W TDP, 104 TFLOPS performance, 6.4 TB/s memory bandwidth;
  • Four OAMs: 512 GB HBM2e, 512 Xe cores, 2400 W TDP, 208 TFLOPS performance, 12.8 TB/s memory bandwidth.
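The figures above scale linearly with the number of modules, so multi-OAM cluster specs can be derived directly from a single Max 1550 module. A minimal sketch (the per-module baseline numbers come from Intel's list; the ideal linear-scaling assumption is mine, though it matches the configurations quoted above):

```python
# Per-module baseline figures for the Max 1550 OAM, as listed by Intel.
BASE = {
    "hbm2e_gb": 128,   # HBM2e capacity
    "xe_cores": 128,   # Xe cores
    "tdp_w": 600,      # thermal design power
    "tflops": 52,      # compute throughput
    "bw_tb_s": 3.2,    # memory bandwidth
}

def cluster_specs(n_modules: int) -> dict:
    """Aggregate specs for an n-module OAM cluster, assuming
    ideal linear scaling across modules."""
    return {key: value * n_modules for key, value in BASE.items()}

# Reproduce Intel's one-, two-, and four-module configurations.
for n in (1, 2, 4):
    print(n, cluster_specs(n))
```

Running this reproduces the three rows of the list, e.g. four modules give 512 GB of HBM2e, 208 TFLOPS, and a 2400 W TDP.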

The manufacturer claims that each OAM module is up to 2x faster than a single NVIDIA A100 compute accelerator in the ExaSMR OpenMC and miniBUDE workloads, and 1.5x faster than the competitor in ExaSMR NekRS. In the Riskfuel workload, Intel Data Center GPU Max accelerators deliver 2.4x the performance of the competition.

Intel also reminded that Ponte Vecchio's successor, the Rialto Bridge compute accelerator, will feature up to 160 Xe cores and the new OAM 2.0 form factor, which allows power consumption of up to 800 W.

About the author

Dylan Harris

Dylan Harris is fascinated by tests and reviews of computer hardware.
