AMD's interests at the Morgan Stanley technology conference were represented by the company's CTO, Mark Papermaster, and at some point the conversation with the event host inevitably turned to the artificial intelligence mainstream. According to the company representative, AMD intends to promote both its graphics architectures and its CPUs for these workloads.
Image source: AMD
“We at AMD strive to find the right solution for every inference challenge,” is how Mark Papermaster briefly summarized the principle. For dense, large language models that solution is the GPU, and such work is usually done at the hyperscaler level. AMD also draws on the AI engines it gained through the Xilinx deal: a powerful AI engine with a well-optimized software stack. For now it is available in Xilinx’s adaptive product line, but over time it will permeate all AMD products.
At CES 2023, AMD CEO Lisa Su already introduced Ryzen 7000 mobile processors with a built-in AI engine, Papermaster recalled. Today, he explains, “the CPU is the workhorse for inference workloads,” even when it sits in a PC rather than a server system.
Indeed, the Genoa generation of EPYC server processors gained support for the AVX-512 extensions, including vector instructions suited to working with neural networks.
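To illustrate what such "neural network" vector instructions do, here is a minimal, hypothetical sketch (not AMD's code) of an int8 dot product, the core operation of quantized inference, written with the AVX-512 VNNI instruction VPDPBUSD via compiler intrinsics; it assumes a CPU and compiler with AVX-512F/VNNI support (e.g. gcc -mavx512f -mavx512vnni):

```c
// Minimal sketch: int8 dot product using AVX-512 VNNI (VPDPBUSD),
// the kind of vector instruction Genoa-class EPYC CPUs expose for
// neural-network inference. Illustrative assumption, not AMD code.
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

// Dot product of n unsigned-int8 activations with signed-int8 weights,
// accumulated in 32-bit lanes; n is assumed to be a multiple of 64.
static int32_t dot_u8s8(const uint8_t *act, const int8_t *wgt, size_t n) {
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i a = _mm512_loadu_si512((const void *)(act + i));
        __m512i w = _mm512_loadu_si512((const void *)(wgt + i));
        // VPDPBUSD: multiply four adjacent u8*s8 pairs and add the
        // partial sums into the 32-bit accumulator lanes.
        acc = _mm512_dpbusd_epi32(acc, a, w);
    }
    return _mm512_reduce_add_epi32(acc);  // sum the 16 accumulator lanes
}

int main(void) {
    uint8_t act[64];
    int8_t  wgt[64];
    for (int i = 0; i < 64; ++i) { act[i] = 2; wgt[i] = 3; }
    printf("%d\n", dot_u8s8(act, wgt, 64));  // expected: 64 * 2 * 3 = 384
    return 0;
}
```

The point of the instruction is that one VPDPBUSD replaces a multiply, widen and horizontal-add sequence, which is why CPUs with these extensions handle quantized inference far more efficiently than plain scalar code.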
Mark Papermaster did not neglect the specialized Instinct-series accelerators either. Microsoft is already successfully scaling its Azure cloud systems on MI250-series models. In the second half of this year AMD promises to launch the MI300 accelerator, which combines twenty-four Zen 4 processor cores with the CDNA 3 architecture. Such accelerators are also well optimized for artificial-intelligence systems, according to AMD representatives. Formally they will ship in the second half of this year, but they will not appear in significant quantities until next year.