Nvidia is staking a bigger claim in the future of large-scale AI, and it made that clear at the 2025 OCP Global Summit. In a keynote titled “Shaping the Future of Open Infrastructure for AI,” Ian Buck, the company’s vice president of hyperscale and high-performance computing, outlined how Nvidia is moving beyond accelerators to deliver a full, open, and scalable foundation for AI data centers.
A central theme of the talk was momentum around Ethernet-based AI networking. Meta and Oracle are adopting Nvidia’s Spectrum-X Ethernet to scale open AI infrastructure, signaling a significant shift in how hyperscalers plan to build and grow their AI factories. By leaning into Ethernet, organizations can tap into existing data center expertise and tooling, while benefiting from networking that’s tuned for the demands of modern AI workloads.
Why this matters now is simple: AI models keep getting bigger, and so do the clusters needed to train and deploy them. Networking is increasingly the bottleneck. Spectrum-X is designed to address that challenge with an Ethernet fabric optimized for AI traffic, aiming to reduce congestion, deliver predictable performance, and maintain high throughput as clusters expand. For companies building or expanding AI clouds, it offers a path to scale without abandoning the familiarity and flexibility of Ethernet.
Buck emphasized the importance of open, community-driven infrastructure, aligning closely with the OCP mission. The message: AI at scale requires not just faster chips, but interoperable systems, shared best practices, and reference designs that anyone can adopt and improve. Nvidia’s growing portfolio now spans compute, networking, software, and orchestration—an integrated stack intended to make it easier for cloud providers and enterprises to stand up efficient, high-performance AI environments.
For organizations planning their next phase of AI growth, the approach promises several practical benefits:
– Standards-based Ethernet that protects existing investments and streamlines operations.
– Networking tailored for AI workloads to minimize hot spots and tail latency.
– An end-to-end platform that integrates compute, networking, and software for faster deployment.
– Open infrastructure principles that support choice, portability, and collaboration.
The OCP Global Summit has become a barometer for where data center design is headed, and this year’s focus on open AI infrastructure underscored an industry-wide push to balance performance with openness. Nvidia’s strategy fits squarely into that trajectory: deliver the building blocks for AI at massive scale while encouraging an ecosystem where those blocks can be assembled in flexible, cost-effective ways.
As AI adoption accelerates across industries—from cloud providers to enterprises—demand for predictable, scalable networking will only grow. By pairing advanced AI computing with Ethernet optimized for AI, Nvidia aims to remove barriers to scaling, simplify operations, and enable faster time to value for training and inference workloads alike.
The takeaway from Buck’s keynote is clear: the next era of AI won’t be defined by any single component. It will be built on open, interoperable infrastructure that brings together compute, networking, and software in a way that is easier to deploy and scale. With Meta and Oracle moving forward on Spectrum-X Ethernet, the signal from the hyperscale community is unmistakable—the future of AI infrastructure is open, Ethernet-driven, and ready to scale.