Meta✴, OpenAI, Microsoft and Oracle have announced plans to integrate AMD’s latest Instinct MI300X artificial-intelligence accelerators into their systems. Industry leaders have made it clear that they are looking for alternatives to the expensive and scarce NVIDIA AI accelerators needed to build and deploy AI platforms such as ChatGPT.
AMD’s powerful Instinct MI300X accelerators will ship early next year. If they prove suitable for technology companies and cloud service providers, they could reduce the cost of developing AI models and put competitive pressure on NVIDIA, which holds a dominant share of this market. As AMD noted yesterday, the MI300X is based on the new CDNA 3 architecture and delivers very high performance. One of its standout features is 192 GB of state-of-the-art high-speed HBM3 memory, well suited to large AI models.
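As a rough illustration of why 192 GB matters (back-of-the-envelope arithmetic, not a figure from AMD's announcement): a model's weights alone occupy roughly the parameter count times the bytes per parameter, so a hypothetical 70-billion-parameter model stored in 16-bit precision needs about 140 GB and could in principle fit in a single MI300X's memory, whereas it would exceed the 80 GB of an H100.

```python
def weights_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed to hold model weights.

    fp16/bf16 precision uses 2 bytes per parameter; this ignores
    activations, KV caches and optimizer state, which add more.
    """
    return params_billions * bytes_per_param  # billions of params * bytes = GB


# Hypothetical 70B-parameter model in fp16:
needed = weights_memory_gb(70)
print(needed)                # 140.0 GB of weights
print(needed <= 192)         # True  -> fits in MI300X's 192 GB HBM3
print(needed <= 80)          # False -> exceeds a single H100's 80 GB
```

In practice inference also needs memory for activations and the KV cache, so real deployments leave headroom, but the weight count dominates and drives the appeal of large on-package memory.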
AMD CEO Dr. Lisa Su compared the Instinct MI300X with one of the best accelerators on the market – the NVIDIA H100. “This performance directly improves [neural networks’] interaction with the user. When you ask a model a question, you want it to respond faster, especially as the answers become more complex,” she said. The main question is whether customers using NVIDIA hardware are willing to spend the time and money to adopt another vendor’s products. AMD told investors and partners that it is expanding its ROCm software suite to compete directly with NVIDIA’s CUDA, which AI developers are already accustomed to. Another important factor is price: NVIDIA accelerators sell for about $40,000. AMD has not yet announced pricing for the Instinct MI300X, but according to Lisa Su, its product will be cheaper to purchase and operate than its NVIDIA counterpart.
AMD said it has already signed contracts with several customers. Meta✴ plans to use the new accelerators for sticker generation, an AI image editor and an AI assistant. Microsoft CTO Kevin Scott said the AMD Instinct MI300X will be accessible through the Azure web service. The new chips will also be used in Oracle’s cloud infrastructure. OpenAI said it will support AMD chips in its Triton project – not a large language model like GPT, but a research platform that provides access to low-level hardware features.
AMD projects $2 billion in revenue from the data-center accelerator segment in 2024, but says the global AI chip market will grow to $400 billion over the next four years. And to succeed in this market, AMD doesn’t even need to beat NVIDIA, Dr. Su noted.