Nvidia Earnings Review


In case you missed it:

a. Nvidia 101

Nvidia designs semiconductors for data center, gaming and other use cases. It’s considered the technology leader in chips meant for accelerated compute and generative AI (GenAI) use cases. While that’s where it specializes, it does a lot more. Its toolkit includes chips, servers, switches, networking, AI models and cutting-edge software to optimize the hardware it provides. Owning more pieces of GenAI infrastructure means opportunity for more software-based product optimization.

The following items are important acronyms and definitions to know for this company:

Chips:

  • GPU: Graphics Processing Unit. This is an electronic circuit originally designed to process visual information; its highly parallel design also makes it the workhorse chip for accelerated compute and AI.
  • CPU: Central Processing Unit. This is a different type of electronic circuit that carries out tasks/assignments and data processing from applications. Teachers will often call this the “computer’s brain.”
  • Blackwell: Nvidia’s modern GPU architecture designed for accelerated compute and GenAI. It replaces Hopper. Rubin is the next platform after Blackwell. Then Feynman.
  • Grace: Nvidia’s Arm-based CPU architecture designed for accelerated compute and GenAI.
  • GB300: Its Grace Blackwell Superchip with Nvidia's latest “Blackwell Ultra” GPUs and ARM Holdings tech.

Connectivity:

  • NVLink Switches: Designed to aggregate and connect (or “scale-up”) Nvidia GPUs within one or a couple of server racks. This creates a sort of “mega-GPU.” GPU connections power greater efficiency, performance and computing scale (so cost advantages). 
    • The newest system allows for 576 total GPUs to be connected.
  • InfiniBand: Standardized interconnectivity tech providing an ultra-low latency computing network. This can connect larger batches of server racks for more scalability (or “scale-out”).
  • Nvidia Spectrum-X: Similar to InfiniBand functionality and performance but Ethernet-based.
    • Ethernet is vital for connecting larger compute clusters.
  • All three of these products are driving strong growth in this budding networking segment.

NVLink Fusion allows companies to build “semi-custom” AI infrastructure with Nvidia and its integration ecosystem. GPUs are general-purpose in nature; they’re not granularly designed for every single niche use case the way an Application-Specific Integrated Circuit (ASIC) is. NVLink Fusion helps Nvidia capture more of that demand by working with Marvell and a few other partners to more easily emulate purpose-built hardware.

The Nvidia GB300 NVL72 is its rack-scale computing system. Rack scale means the entire server rack powers computation rather than a single server. Because this includes Blackwell chips and NVLink switches, it’s partially in the compute bucket and partially in networking. This aggregated product is the core revenue driver right now.

Software, Models & More:

  • NeMo: A guided, step-by-step framework for building granular GenAI models tailored to client-specific needs. It’s a standardized environment for model creation.
  • CUDA: Nvidia’s parallel computing platform and programming model, purpose-built to optimize its GPUs. CUDA helps power things like Nvidia Inference Microservices (NIM), which guide the deployment of GenAI models (after NeMo helps build them).
    • NIMs help “run CUDA everywhere” — in both on-premise and hosted cloud environments.
  • GenAI Model Training: One of two key layers to model development. This seasons a model by feeding it specific data.
  • GenAI Model Inference: The second key layer to model development. This pushes trained models to create new insights and uncover new, related patterns. It connects data dots that we didn’t realize were related. Training comes first. Inference comes second… third… fourth etc.
  • Omniverse is its digital twin-building platform. This enables deep testing and learning in a zero-stakes, simulated environment, turbocharging experimentation and progress.
  • Cosmos is its suite of world foundation models and apps for physical AI. It’s grounded in laws of physics and everything needed to effectively understand the physical world.
  • Thor is the name of its platform for robotics and physical AI.

DGX: Nvidia’s full-stack platform combining its chipsets and software services.

b. Key Points

  • Strong results for its data center segment.
  • Rubin is on track to ramp later this year.
  • Guidance now includes stock comp as a non-GAAP expense.
  • Expects to exceed previously communicated $500B Blackwell + Rubin revenue target.

c. Demand

  • Beat revenue estimates by 2.9% and beat guidance by 4.7%. There is no revenue from China in these results.
  • Beat data center revenue estimates by 3.7%.
    • Beat Networking revenue estimates by 22%. Networking revenue rose by 263% Y/Y vs. 162% last quarter.
    • Slightly missed Compute revenue estimates. Compute revenue rose by 58% Y/Y vs. 56% last quarter.
    • Annual networking revenue has now grown by 10x in 5 years since NVDA bought Mellanox.
  • Beat gaming revenue estimates by 6%. 
  • Beat professional visualization revenue estimates by 71%. This was the first quarter in which the segment crossed $1B in revenue. 
  • Missed auto revenue estimates by 6%. 
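
The 10x-in-5-years networking figure above implies a steep compound growth rate. A quick back-of-envelope sketch (the 10x multiple and five-year span come from the bullet above; the rest is just arithmetic):

```python
# Implied compound annual growth rate (CAGR) of networking revenue,
# using the "10x in 5 years since the Mellanox deal" figure above.
multiple = 10.0  # revenue grew 10x
years = 5        # over five years

cagr = multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints "Implied CAGR: 58.5%"
```

In other words, networking revenue has compounded at roughly 58% per year since the Mellanox acquisition.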

As you would expect, strength across Blackwell, NVLink, Spectrum-X, CUDA and its entire data center-focused product suite powered the outperformance for the quarter. It has now delivered Blackwell capacity equal to nine gigawatts of compute for customers and has enjoyed a very strong ramp of Blackwell Ultra, the next iteration of this GPU platform. On the networking side, scale-up (NVLink) and scale-out (mostly Spectrum-X) demand were both exceedingly healthy.

d. Profits & Margins

  • Slightly beat GPM estimates & guidance.
  • Beat FCF estimates by 3%.
  • Beat EBIT estimates by 3.1% and beat guidance by 5.3%.
  • Beat $1.54 EPS estimates by $0.08. This was helped by a 15.4% tax rate vs. 17% expected. Without this help, EPS would have been roughly in line with expectations. 
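
A rough sketch of that tax-rate adjustment, assuming reported EPS of $1.62 (the $1.54 estimate plus the $0.08 beat). This normalization is my own back-of-envelope math, not NVDA's disclosure:

```python
# Back out what EPS would look like at the 17% tax rate analysts modeled,
# instead of the 15.4% effective rate NVDA actually reported.
actual_eps = 1.54 + 0.08  # $1.62 reported (estimate + $0.08 beat)
actual_tax = 0.154        # 15.4% effective tax rate
expected_tax = 0.17       # 17% rate analysts expected

pretax_eps = actual_eps / (1 - actual_tax)        # gross up to pre-tax EPS
normalized_eps = pretax_eps * (1 - expected_tax)  # re-tax at the 17% rate
print(f"Tax-normalized EPS: ${normalized_eps:.2f}")  # prints "$1.59"
```

At the expected 17% rate, EPS lands around $1.59, much closer to the $1.54 estimate.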

GPM strength was helped by lower provisions this quarter compared to the Y/Y period. Furthermore, Blackwell continues to ramp and enjoy better economies of scale, which is diminishing the margin dilution associated with that growth. Going forward, amid all of the memory-cost inflation, NVDA sees GPM sustainability and pricing power as resting mainly on its ability to keep delivering GPUs that perform better than anyone else's. Product superiority drives demand, and pricing power follows. This is the only way NVDA will be able to maintain an incredible 68% EBIT margin, which, for context, is much better than AMD's gross margin.

GAAP and non-GAAP operating expenses rose by 45% Y/Y and 51% Y/Y, respectively. This was powered by compensation growth tied to higher headcount, as well as compute infrastructure costs rising alongside the company's explosive growth.

e. Balance Sheet

  • $62.6B in cash & equivalents; $22.2B in equity securities.
  • Inventory rose by 112% Y/Y; $8.5B in debt.
  • Diluted share count fell by 1.2% Y/Y.
  • Days of sales outstanding (DSO) fell from 53 to 51 Q/Q. This was based on collection timing. 

The inventory growth is certainly notable. It is a response to what Nvidia views as strong demand signals and heightened demand visibility, which allow it to stockpile more inventory confidently. That confidence stems from customer discussions and rising purchase commitments, as well as encouraging utilization rates for older models. Those rates point to long depreciation schedules, which should support demand for older platforms and reduce the risk of inventory waste. In short, Nvidia is purchasing inventory to service demand further out than it normally would. More on this later.

f. Guidance & Valuation