AMD currently has two GPU architectures on the market. RDNA 2 is aimed at gaming and is a GPU in the truest sense of the word, while CDNA 2, like its predecessor, is not designed for rendering graphics: not only does it lack a video output, but all of the specialized graphics units have been removed.

The CDNA architectures are compute-optimized versions of the older GCN architecture, with improvements such as support for double-precision (64-bit floating point) compute, typical of HPC GPUs, and the addition of Matrix Core Units, AMD’s equivalent of NVIDIA’s Tensor Cores, which we assume will be carried over to RDNA in a future iteration.
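To make the distinction concrete, here is a minimal sketch of the kind of double-precision workload CDNA is built for, written in HIP (AMD’s CUDA-like compute API). The kernel, sizes and names are illustrative only and are not taken from any AMD sample.

```cpp
// Minimal FP64 (double precision) kernel in HIP, the kind of compute an HPC GPU
// is optimized for. Names and sizes are illustrative.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void daxpy(int n, double a, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // one 64-bit multiply-add per element
}

int main() {
    const int n = 1 << 20;
    std::vector<double> hx(n, 1.0), hy(n, 2.0);
    double *dx = nullptr, *dy = nullptr;
    hipMalloc(&dx, n * sizeof(double));
    hipMalloc(&dy, n * sizeof(double));
    hipMemcpy(dx, hx.data(), n * sizeof(double), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(double), hipMemcpyHostToDevice);
    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    hipMemcpy(hy.data(), dy, n * sizeof(double), hipMemcpyDeviceToHost);
    printf("y[0] = %.1f\n", hy[0]);      // expect 5.0
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```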

AMD’s first MCM GPUs are shipping now

AMD MCM GPU Shipping

The source is AMD itself, which confirmed in the slide deck of its second-quarter earnings presentation that the AMD Instinct MI200 is already being shipped and is therefore fully complete.

What makes these “graphics” cards special? We have known for some time that both Intel, with its Xe-HP and Xe-HPC, and NVIDIA, with its Hopper architecture, plan to launch chiplet-based HPC GPUs, but AMD has beaten them both to the punch with the launch of its Instinct MI200.

In any case, one thing should be clarified: HPC cards do not suffer from the problem that plagues gaming graphics cards, where it is very difficult to have multiple GPUs rendering the same scene, because compute workloads can simply be partitioned across the dies. So this is not an MCM GPU from AMD based on the same technology we will see in RDNA 3, although, like the rest of contemporary HPC GPUs, it relies on HBM2E memory and is therefore a 2.5D IC configuration, with GPU dies and memory stacks sitting on a common interposer.
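As a hedged illustration of why the rendering problem does not apply here: if the two dies are exposed to software as separate devices (an assumption on our part; how the driver presents the MI200 is up to AMD’s ROCm stack), a host program can simply hand each die its own slice of the data.

```cpp
// Sketch: splitting an embarrassingly parallel job across however many GPU
// devices the runtime reports. Assumes the element count divides evenly.
#include <hip/hip_runtime.h>
#include <vector>

__global__ void scale(int n, double* data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0;
}

int main() {
    const int n = 1 << 22;
    std::vector<double> host(n, 1.0);
    int count = 1;
    hipGetDeviceCount(&count);              // e.g. 2 if each die is a device
    const int chunk = n / count;
    std::vector<double*> dev(count, nullptr);
    for (int d = 0; d < count; ++d) {
        hipSetDevice(d);                    // target one die/device
        hipMalloc(&dev[d], chunk * sizeof(double));
        hipMemcpy(dev[d], host.data() + d * chunk,
                  chunk * sizeof(double), hipMemcpyHostToDevice);
        scale<<<(chunk + 255) / 256, 256>>>(chunk, dev[d]);
    }
    for (int d = 0; d < count; ++d) {
        hipSetDevice(d);                    // copy each slice back and clean up
        hipMemcpy(host.data() + d * chunk, dev[d],
                  chunk * sizeof(double), hipMemcpyDeviceToHost);
        hipFree(dev[d]);
    }
    return 0;
}
```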


With chiplets, yes, but different from the future RDNA 3

CoWoS Roadmap

In truth, we don’t know whether this AMD Instinct MI200 uses the silicon bridge described in a certain AMD patent or a classic interposer, although everything points to the two GPUs communicating through an xGMI interface, a faster version of Infinity Fabric and AMD’s equivalent of NVIDIA’s NVLink. Each of the two 128-CU GPUs that make up the MCM has four 16 GB HBM2E memory stacks, for a 128 GB memory configuration, the largest ever for a GPU of this type.
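From the software side, a fast die-to-die link like xGMI usually surfaces as peer-to-peer access between two devices. Whether ROCm exposes the MI200’s internal link exactly this way is an assumption on our part; the sketch below only shows the generic HIP peer-access calls.

```cpp
// Sketch of generic HIP peer-to-peer queries; on a dual-die package the
// traffic between the two devices would travel over the inter-die link.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    hipDeviceCanAccessPeer(&canAccess, 0, 1);   // can device 0 reach device 1?
    printf("peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");
    if (canAccess) {
        hipSetDevice(0);
        hipDeviceEnablePeerAccess(1, 0);        // map device 1's memory for device 0
        // Kernels running on device 0 can now dereference pointers that were
        // allocated on device 1, with the transfers going over the link.
    }
    return 0;
}
```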

What’s more, AMD could be using third-generation CoWoS-S packaging, which TSMC’s roadmap dated for this year and which supports 8 HBM stacks, matching the AMD Instinct MI200 that AMD has just released. You won’t be able to buy it, though, as it is reserved for select customers.

What are the specifications of the AMD Instinct MI200?

AMD Instinct MI200

The AMD Instinct MI200 is a dual HPC GPU, composed of two symmetric GPUs with 128 Compute Units each, built on the CDNA 2 architecture. Each Compute Unit contains 64 ALUs plus a matrix unit for matrix operations, a key unit type for algorithms based on convolutional neural networks. It is therefore a GPU with which AMD intends to compete against NVIDIA’s A100, based on the Ampere architecture.
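Taken together, that works out to 2 × 128 × 64 = 16,384 ALUs across the package, before even counting the matrix units.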

The fact that it is a dual GPU puts it ahead of NVIDIA’s current solution in terms of raw power, and it arrives ahead of Intel’s and NVIDIA’s MCM GPU proposals for HPC, which still have no firm dates; NVIDIA’s Hopper is not expected until at least next year. Will AMD be able to take market share from NVIDIA in HPC GPUs as it has done to Intel in server CPUs? Only time will tell, but NVIDIA is a tough nut to crack.
