Nvidia doubles down on CoreWeave with early access to Vera Rubin and multibillion-dollar AI factory expansion plans

- Nvidia is committing capital and hardware to accelerate the expansion of CoreWeave’s AI factories
- CoreWeave is gaining early access to Vera Rubin platforms across multiple data centers
- The financial support links Nvidia’s balance sheet directly to the growth of AI infrastructure
Nvidia and CoreWeave have expanded their long-standing partnership with an agreement that includes infrastructure deployments, major investments, and early access to future computing platforms.
The deal places CoreWeave among the first cloud providers expected to use Nvidia’s Vera Rubin generation, solidifying its role as a preferred partner for large-scale AI infrastructure.
Nvidia has also invested $2 billion in CoreWeave through a direct equity purchase, underscoring the financial depth of the partnership.
Scaling the AI industry with the right infrastructure
The deal focuses on accelerating the construction of AI factories, with CoreWeave planning more than five gigawatts of power by 2030.
Nvidia’s involvement goes beyond supplying accelerators, as it also supports the purchase of land, power, and physical infrastructure.
This approach ties financing directly to the hardware deployment timeline, showing how the growth of AI increasingly depends on capital and compute capacity being delivered in lockstep.
“AI is entering its next frontier and is driving the largest infrastructure development in human history,” said Jensen Huang, co-founder and CEO of Nvidia.
“CoreWeave’s deep AI factory technology, platform software, and unmatched execution speed are recognized across the industry. Together, we are racing to meet the incredible industrial demand for Nvidia AI — the foundation of the AI industrial revolution.”
Nvidia and CoreWeave are also deepening alignment across infrastructure and software layers.
CoreWeave’s cloud stack and performance tools will be tested and certified against Nvidia’s architectures.
“From the beginning, our collaboration has been guided by a simple belief: AI succeeds when software, infrastructure, and operations are built together,” said Michael Intrator, co-founder, chairman, and CEO of CoreWeave.
CoreWeave is expected to deploy several generations of the Nvidia platform across its data centers, including early adoption of the Rubin platform, Vera CPUs, and BlueField storage systems.
This multi-generational strategy suggests that Nvidia is using CoreWeave as a proving ground for full-stack deployments rather than individual components.
Vera CPUs are expected to be offered as a standalone option, indicating Nvidia’s intention to address CPU bottlenecks that are becoming more apparent as AI workloads grow.
These CPUs use a custom Arm architecture with high core counts, large memory capacity, and high-bandwidth connectivity.
“For the first time, we will be offering Vera CPUs. Vera is an amazing CPU. We will offer Vera CPUs as an independent part of the infrastructure. And so you can not only use your computing stack on Nvidia GPUs, you can now also run your computing stack, wherever there is CPU load, on Vera as well,” Huang said.
In practical terms, the collaboration reflects two issues shaping the current AI market.
Server CPUs are emerging as another pressure point in the supply chain, particularly for agent-driven applications.
At the same time, offering high-end CPUs separately gives customers an alternative to full rack-scale systems, which can lower the cost of entry for some applications.