Intel Xeon Remains Only Server CPU on MLPerf

May 27, 2025

Intel Xeon 6 with Performance-cores achieved an average 1.9x performance improvement over 5th Gen Xeon processors.

MLCommons released its latest MLPerf Inference v5.0 benchmarks, showcasing Intel® Xeon® 6 with Performance-cores (P-cores) across six key benchmarks. The results reveal a remarkable 1.9x boost in AI performance over the previous generation of processors, affirming Xeon 6 as a top solution for modern AI systems.

“The latest MLPerf results demonstrate Intel Xeon 6 as the ideal CPU for AI workloads, offering a perfect balance of performance and energy efficiency. Intel Xeon remains the leading CPU for AI systems, with consistent gen-over-gen performance improvements across a variety of AI benchmarks,” said Karin Eibschitz Segal, Intel corporate vice president and interim general manager of the Data Center and AI Group.

Leadership in AI Workloads

As AI adoption accelerates, CPUs are essential for AI systems, serving as the host node to manage critical functions like data preprocessing, transmission and system orchestration. Intel continues to stand out as the only vendor to submit server CPU results to MLPerf.

In MLPerf Inference v5.0, Intel Xeon 6 with P-cores achieved an average 1.9x performance improvement over 5th Gen Intel® Xeon® processors in key benchmarks, including ResNet50, RetinaNet, 3D-UNet and the new GNN-RGAT. This reinforces Intel Xeon 6 as a preferred CPU for AI and highlights Xeon as a compelling alternative for smaller language models.
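Officially published MLPerf Inference results are produced with the MLCommons LoadGen harness, calibrated datasets and vendor-tuned software stacks. The snippet below is only a minimal, illustrative sketch of the kind of CPU inference throughput measurement these benchmarks formalize, using a stock torchvision ResNet-50 with synthetic data; the batch size and iteration counts are arbitrary assumptions, not MLPerf settings.

```python
# Illustrative sketch: rough CPU inference throughput for a ResNet-50 model.
# Not the MLPerf harness; real submissions use MLCommons LoadGen and real datasets.
import time

import torch
from torchvision.models import resnet50

model = resnet50(weights=None)  # untrained weights are fine for a timing sketch
model.eval()

batch = torch.randn(8, 3, 224, 224)  # synthetic input batch (no real dataset)

with torch.inference_mode():
    for _ in range(5):  # warm-up iterations
        model(batch)

    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"Throughput: {iters * batch.shape[0] / elapsed:.1f} images/sec")
```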

Intel has made significant strides in AI performance over the past four years. Since its first Xeon submission to MLPerf in 2021 with 3rd Gen Intel® Xeon® processors, Intel has delivered a 15x performance improvement on ResNet50. Software optimizations have further contributed to a 22% gain on GPT-J and an 11% gain on the 3D-UNet benchmark.

Trusted by Top OEMs

These new MLPerf results demonstrate Intel Xeon’s exceptional performance across solutions from original equipment manufacturers (OEMs) and ecosystem partners. As AI workloads become more integrated with enterprise systems, OEMs prioritize Xeon-based systems to ensure their customers achieve the best AI performance. Intel worked alongside four key OEM partners – Cisco, Dell Technologies, Quanta and Supermicro – that submitted results with Intel Xeon 6 with P-cores, showcasing diverse AI workloads and deployment capabilities.

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. Visit MLCommons for more details. No product or component can be absolutely secure.

Disclaimer: The information contained in each press release posted on this site was factually accurate on the date it was issued. While these press releases and other materials remain on the Company's website, the Company assumes no duty to update the information to reflect subsequent developments. Consequently, readers of the press releases and other materials should not rely upon the information as current or accurate after their issuance dates.