The new generation of Intel Meteor Lake chips will include not only CPU and graphics cores but also an NPU (Neural Processing Unit), a coprocessor optimized for artificial-intelligence algorithms that makes its debut in this series.
The NPU in Intel Meteor Lake chips lets laptops run AI software such as Stable Diffusion or language-model-based chatbots locally, without sacrificing performance or drawing excessive power. According to Intel, the subsystem improves power efficiency in AI workloads by 8x. Intel already demonstrated the coprocessor at Computex 2023, although at the time it was still called the VPU (Versatile Processing Unit).
The NPU is part of the SoC chiplet of Intel Meteor Lake processors and includes two Neural Compute Engines, each of which can handle its own workload or join forces on a large task. Third-party software developers can use APIs that conform to industry standards, which, Intel says, makes the platform easier to adopt. The NPU is a fully independent subsystem that appears in Windows Task Manager as a separate compute device alongside the CPU and GPU.
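Because the NPU is exposed as a named compute device alongside the CPU and GPU, applications can target it the same way they pick any other accelerator. The sketch below is hypothetical — `pick_device` and the device strings are illustrative, not Intel's actual API — but it mirrors how inference frameworks let software prefer the NPU and fall back to the GPU or CPU when it is absent:

```python
def pick_device(available, preferred=("NPU", "GPU", "CPU")):
    """Return the first preferred accelerator present on this system.

    `available` stands in for whatever device list the runtime reports;
    the names here are illustrative, not a specific vendor API.
    """
    for device in preferred:
        if device in available:
            return device
    raise RuntimeError("no supported compute device found")

# On a Meteor Lake laptop the NPU would appear alongside CPU and GPU:
print(pick_device(["CPU", "GPU", "NPU"]))  # NPU
# On older hardware the same code silently falls back:
print(pick_device(["CPU", "GPU"]))         # GPU
```

The fallback ordering is the point: software written this way benefits from the NPU where it exists without breaking on machines that lack one.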
Intel demonstrated Meteor Lake’s AI capabilities using the Stable Diffusion image generator as an example. The integrated GPU completed the benchmark task in 14.5 seconds while consuming 37 W. The NPU took a little longer – 20.7 s – but required only 10 W of power. The two can also be combined: together the GPU and NPU finished the task in 11.3 s at 30 W, lower than the GPU alone.
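Multiplying power by time turns Intel's figures into energy per image, which is what matters for battery life. A quick check using the numbers above (the figures are from Intel's demo; the calculation itself is just energy = power × time):

```python
# (seconds, watts) for each configuration in Intel's Stable Diffusion demo
runs = {
    "GPU alone": (14.5, 37),
    "NPU alone": (20.7, 10),
    "GPU + NPU": (11.3, 30),
}

for name, (seconds, watts) in runs.items():
    joules = seconds * watts  # energy consumed for the task
    print(f"{name}: {joules:.1f} J")
```

The NPU spends about 207 J against the GPU's 536.5 J — roughly 2.6× less energy for the same image, despite being slower, which is why Intel frames the NPU as the efficiency option and the GPU+NPU combination as the speed option.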
Intel is not the first in this segment: the Apple M1 platform carries a Neural Engine subsystem for AI algorithms used in image processing and augmented-reality software, and mobile chips in the AMD Ryzen 7000 “Phoenix” series have a similar cluster, which the manufacturer calls Ryzen AI.