BravenNow
Nvidia's GTC will mark an AI chip pivot. Here's why the CPU is taking center stage
| USA | general | ✓ Verified - cnbc.com


#Nvidia #GTC #AIChip #CPU #Pivot #Hardware #Conference

📌 Key Takeaways

  • Nvidia's GTC conference signals a strategic shift in AI chip focus.
  • The CPU (Central Processing Unit) is becoming a central component in AI development.
  • This pivot reflects evolving AI workloads and hardware demands.
  • The move may influence future chip design and industry competition.

📖 Full Retelling

Nvidia and AMD are seeing huge demand for CPUs, and Jensen Huang is poised to unveil details of processors specialized for agentic AI at Nvidia's GTC conference.

🏷️ Themes

AI Hardware, Industry Shift

📚 Related People & Topics

GTC

Nvidia's annual GPU Technology Conference
Nvidia

American multinational technology company

Nvidia Corporation is an American technology company headquartered in Santa Clara, California. Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, it develops graphics processing units (GPUs), systems on chips (SoCs), and application programming interfaces (APIs) for...
Central processing unit

Central computer component that executes instructions

A central processing unit (CPU), also known as a central processor, main processor, or simply processor, is the primary processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This...

Entity Intersection Graph

Connections for GTC:

🏢 Nvidia 3 shared
🌐 Artificial intelligence 2 shared
👤 Jim Cramer 1 shared
🌐 Graphics processing unit 1 shared
🏢 Intel 1 shared


Deep Analysis

Why It Matters

This news matters because Nvidia's strategic shift toward CPU-centric AI chips signals a fundamental change in computing architecture that could reshape the entire semiconductor industry. It affects AI developers, cloud service providers, and enterprise customers who rely on Nvidia's hardware for training and deploying AI models. The move could challenge competitors like Intel and AMD while potentially creating new performance benchmarks for AI workloads. This pivot also has implications for data center efficiency and the economics of large-scale AI deployment.

Context & Background

  • Nvidia has dominated the AI chip market with its GPU architecture, particularly through products like the H100 and A100 that power most large language model training
  • Traditional computing has relied on CPUs for general processing with GPUs as accelerators for parallel workloads like graphics and AI
  • The AI boom has created unprecedented demand for specialized hardware, making Nvidia the most valuable publicly traded company in the world, with a market capitalization of $4.4 trillion
  • Competitors like AMD, Intel, and custom silicon from cloud providers (Google TPU, AWS Trainium) have been challenging Nvidia's dominance
  • CPU technology has evolved with new architectures like ARM-based designs gaining traction in data centers
  • Previous GTC conferences have typically focused on GPU advancements and software ecosystems like CUDA

What Happens Next

At the upcoming GTC conference, Nvidia will unveil new CPU-focused AI chip architectures and potentially announce partnerships with major cloud providers. Following the announcement, competitive responses from AMD and Intel can be expected within 6-12 months, along with detailed performance benchmarks from independent testing organizations. The industry will watch adoption rates among major AI companies and cloud providers over the following one to two years.

Frequently Asked Questions

Why would Nvidia pivot from GPUs to CPUs for AI?

Nvidia is likely responding to evolving AI workloads that require more balanced computing between general processing and specialized acceleration. As AI models become more complex and diverse, pure GPU architectures may face limitations in handling certain types of operations efficiently.

How will this affect AI developers and companies?

Developers may need to optimize their AI workloads for new hybrid architectures, potentially requiring code adjustments. Companies investing in AI infrastructure will need to evaluate whether to adopt the new CPU-focused chips or stick with existing GPU solutions based on their specific use cases.

What does this mean for Nvidia's competitors?

This move puts pressure on Intel and AMD to accelerate their own AI chip roadmaps while potentially creating opportunities for ARM-based chip designers. Cloud providers with custom silicon may need to reassess their competitive positioning against Nvidia's new offerings.

Will this make AI computing cheaper?

Initially, new architectures typically come at premium prices, but increased competition and architectural efficiency could drive down costs over time. The long-term effect depends on whether the new designs significantly improve performance-per-dollar for common AI workloads.

How does this relate to Nvidia's software ecosystem?

Nvidia will likely extend its CUDA platform and software tools to support the new CPU architectures, maintaining its integrated hardware-software advantage. Developers will watch for backward compatibility and migration tools for existing AI applications.

Original Source
Nvidia showed CNBC its latest Vera CPU at its Santa Clara, California, headquarters on Feb. 13, 2026. Marc Ganley | CNBC

Nvidia's graphics processing units have been the hottest-selling chips for years, but the sudden advent of agentic artificial intelligence has brought on a renaissance for its more modest host chip, the central processing unit. Now, Nvidia is poised to unveil new details about its agentic-optimized CPUs at its annual GTC conference that kicks off on Monday, with a CPU-only rack likely to appear on the showroom floor.

"CPUs are becoming the bottleneck in terms of growing out this AI and agentic workflow," Dion Harris, Nvidia's head of AI infrastructure, told CNBC this week, calling it an "exciting opportunity."

The chip giant announced its first data center CPU, Grace, in 2021, and the next generation, Vera, is now in production. The CPUs are typically deployed alongside Nvidia's famous Hopper, Blackwell or Rubin GPUs in full rack-scale systems. Exploding demand for GPUs has turned Nvidia into a household name and the most valuable publicly traded company in the world, with a $4.4 trillion market cap.

Its broader chip strategy took a major turn in February, when Nvidia struck a multiyear deal with Meta that included the first large-scale deployment of Grace CPUs on their own, with plans to deploy Vera in 2027. Thousands of standalone Nvidia CPUs are also helping power supercomputers at the Texas Advanced Computing Center and Los Alamos National Lab, Nvidia told CNBC.

Bank of America predicts the CPU market could more than double, from $27 billion in 2025 to $60 billion by 2030. In the latest quarter alone, Nvidia generated data center revenue of over $62 billion, up 75% from a year earlier. The CPU resurgence is driven by a fundamental change in compute needs, as mass AI adoption shifts from call-and-answer chatbots to task-oriented agentic apps.
While GPUs are ideal for trai...
Read full article at source

Source

cnbc.com
