BravenNow
Nvidia’s new AI system Vera Rubin is 10 times more efficient than its predecessor — here’s a first look
| USA | general | ✓ Verified - cnbc.com

#Nvidia Vera Rubin #AI system efficiency #Performance per watt #Modular design #Liquid cooling #AI competition #Data centers #Supply chain

📌 Key Takeaways

  • Vera Rubin delivers 10 times more performance per watt than its predecessor
  • The system features innovative modular design with 1.3 million components
  • Nvidia faces increasing competition from AMD and custom silicon from major tech companies
  • Major customers including Meta, OpenAI, Amazon, Google and Microsoft have committed to adopting Vera Rubin

📖 Full Retelling

Nvidia CEO Jensen Huang unveiled the company's next-generation AI system, Vera Rubin, at its Santa Clara, California headquarters on February 13, 2026. The system delivers 10 times more performance per watt than its predecessor, Grace Blackwell, according to the company. CNBC received an exclusive first look at the system, which consists of 1.3 million components, including 72 Rubin GPUs and 36 Vera CPUs, primarily manufactured by Taiwan Semiconductor Manufacturing Co. The unveiling comes as Nvidia faces intensifying competition from Advanced Micro Devices and from custom silicon made by companies such as Broadcom and Google, while also addressing critical energy-consumption challenges in the rapidly expanding AI sector.

Vera Rubin represents a significant leap in AI infrastructure: it weighs nearly 2 tons and contains approximately 1,300 microchips, compared with 864 in the Grace Blackwell model. What distinguishes the new system is its modular design, which lets superchips slide out of any of the rack's 18 compute trays in seconds for easier maintenance and upgrades, a stark contrast to the soldered components in the Blackwell system.

Nvidia sources components from more than 80 suppliers across at least 20 countries, including China, Vietnam, Thailand, Mexico, Israel, and the U.S., creating a complex global supply chain. The system also introduces 100% liquid cooling, which Nvidia claims helps data centers consume "much less water" than traditional evaporative cooling methods.

🏷️ Themes

AI Infrastructure, Technological Innovation, Market Competition, Energy Efficiency

📚 Related People & Topics

Performance per watt

Computer energy efficiency

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. This rate is typically measured by performance on ...

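The "performance per watt" metric above can be made concrete with a short sketch. The throughput and power figures below are hypothetical round numbers chosen for illustration, not Nvidia's published specifications:

```python
def perf_per_watt(flops: float, watts: float) -> float:
    """Rate of computation delivered per watt of power consumed."""
    return flops / watts

# Suppose an older rack delivers 1e15 FLOP/s while drawing 100 kW,
# and a newer one delivers 5e15 FLOP/s while drawing only 50 kW:
old = perf_per_watt(1e15, 100_000)  # 1e10 FLOP/s per watt
new = perf_per_watt(5e15, 50_000)   # 1e11 FLOP/s per watt

print(new / old)  # prints 10.0 -> "10 times more performance per watt"
```

Note that a "10x" gain can come from higher throughput, lower power draw, or (as in this sketch) a combination of both.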
Modular design

Design approach

Modular design, or modularity in design, is a design principle that subdivides a system into smaller parts called modules (such as modular process skids), which can be independently created, modified, replaced, or exchanged with other modules or between different systems.


Liquid cooling

Cooling using a circulating liquid as a heat-exchange medium

Liquid cooling refers to cooling by means of the convection or circulation of a liquid.

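As a rough illustration of why liquid cooling suits dense compute racks, the sketch below compares the volumetric heat capacity of water and air. The property values are round textbook numbers, not figures from the article:

```python
# Back-of-the-envelope comparison: heat a cubic metre of coolant can
# absorb per kelvin of temperature rise (density * specific heat).
water_density = 1000.0  # kg/m^3
water_cp      = 4186.0  # J/(kg*K), specific heat of liquid water
air_density   = 1.2     # kg/m^3, near sea level at room temperature
air_cp        = 1005.0  # J/(kg*K)

water_vol_cp = water_density * water_cp  # ~4.19e6 J/(m^3*K)
air_vol_cp   = air_density * air_cp      # ~1.2e3 J/(m^3*K)

# Water carries roughly 3,500x more heat per unit volume than air,
# which is why circulating liquid can cool far denser hardware.
print(round(water_vol_cp / air_vol_cp))
```

This also hints at the article's water-use claim: a closed liquid loop recirculates its coolant, whereas evaporative cooling continuously loses water to the air.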

Original Source
Nvidia's earnings on Wednesday are expected to show booming sales of the company's current rack-scale system. But all eyes are on its next AI system, Vera Rubin, which is scheduled to roll out later this year.

Vera Rubin, which is made up of 1.3 million components, will deliver 10 times more performance per watt than its predecessor, Grace Blackwell, the company claims. That's a significant development when energy consumption is one of the most critical issues facing the artificial intelligence build-out.

CNBC got an exclusive first look at Vera Rubin at Nvidia's headquarters in Santa Clara, California. Nvidia says the new AI system is a complex web of parts sourced from around the world. Its core chips include 72 Rubin graphics processing units, or GPUs, and 36 Vera central processing units, or CPUs, primarily made by Taiwan Semiconductor Manufacturing Co. The other parts, from liquid cooling elements to power systems and compute trays, come from more than 80 suppliers in at least 20 countries, including China, Vietnam, Thailand, Mexico, Israel and the U.S.

One big challenge the company faces is the soaring cost of memory due to a global shortage driven by AI demand. Dion Harris, Nvidia's AI infrastructure head, said in an interview that the company has been giving suppliers "very detailed forecasts." "We're aligning to make sure that everything we're shipping will be met by our supply chain," he said. "We're in good shape."

It's a critical moment for Nvidia, which dominates the market for AI processors but faces intensifying competition from Advanced Micro Devices as well as custom silicon from Broadcom and Google's homegrown tensor processing units. Nvidia has plans to manufacture up to $500 billion of AI infrastructure in the U.S. through 2029, including making Blackwell...
Read full article at source

Source

cnbc.com
