Nvidia Earnings Review & Palo Alto Earnings Snapshot

Earnings reviews from this season:

1. Palo Alto (PANW) – Brief Earnings Snapshot

I will publish the full review on Saturday. For now, a very brief snapshot before the detailed Nvidia review.

a. Results

  • Slightly beat guidance & estimates for revenue, remaining performance obligations (RPO) and next-gen security annual recurring revenue (ARR).
  • Slightly beat gross profit margin (GPM) estimates.
  • Beat non-GAAP EBIT estimates by 4%; missed GAAP EBIT estimates by 8.3%.
  • Beat $0.90 EPS estimate by $0.03 & beat guidance by $0.04.

b. Balance Sheet

  • $4.2B in cash & equivalents.
  • $6B in LT investments.
  • No debt.
  • Stock compensation rose 23% Y/Y.
  • Share count was flat Y/Y.

c. Guidance & Valuation

  • Reiterated annual next-gen security ARR and RPO guides, which both slightly missed estimates.
  • Slightly raised annual revenue guide, meeting estimates.
  • Raised annual 29.4% EBIT margin guidance to 29.7%, beating 29.5% margin estimates.
  • Raised $3.80 EPS guidance to $3.85, beating estimates by $0.05.
  • Reiterated 38.5% FCF margin guide, missing 38.7% margin estimates.

PANW trades for 52x forward EPS. EPS is expected to grow by 13% this year and by 14% next year. It also trades for 33x forward FCF. FCF is expected to grow by 17% this year and by 14% next year.
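For readers newer to these multiples, here’s a minimal Python sketch of the arithmetic. The $100 share price is a hypothetical placeholder; only the 52x multiple and the 14% growth rate come from the figures above. A forward multiple is just price divided by the next-twelve-months estimate, and growth organically compresses that multiple over time:

```python
# Minimal sketch of forward-multiple arithmetic.
# NOTE: the $100 share price is a hypothetical placeholder; the 52x
# forward P/E and 14% next-year EPS growth are the figures cited above.

price = 100.0                # hypothetical share price
forward_pe = 52              # forward P/E multiple cited above
eps_growth_next_year = 0.14  # expected EPS growth next year

forward_eps = price / forward_pe                   # implied forward EPS
eps_next_year = forward_eps * (1 + eps_growth_next_year)
pe_on_next_year = price / eps_next_year            # multiple a year out

print(f"Implied forward EPS:         ${forward_eps:.2f}")
print(f"P/E on next year's estimate: {pe_on_next_year:.1f}x")  # ~45.6x
```

In other words, if the stock price stayed flat, 14% EPS growth would compress that 52x forward multiple to roughly 45.6x on next year’s numbers.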

2. Nvidia (NVDA) – Detailed Earnings Review

a. Nvidia 101

Nvidia designs semiconductors for data center, gaming and other use cases. It’s considered the technology leader in chips meant for accelerated compute and generative AI (GenAI) use cases. While that’s where it specializes, it does a lot more. Its toolkit includes chips, servers, switches and other networking gear, AI models and cutting-edge software to optimize the hardware it provides. Owning more pieces of GenAI infrastructure means opportunity for more software-based product optimization.

The following items are important acronyms and definitions to know for this company:

Chips:

  • GPU: Graphics Processing Unit. This is an electronic circuit originally built to process visual information; its highly parallel design also makes it well suited to accelerated compute and GenAI workloads.
  • CPU: Central Processing Unit. This is a different type of electronic circuit that executes general-purpose instructions and processes data for applications. Teachers will often call this the “computer’s brain.”
  • Blackwell: Nvidia’s modern GPU architecture designed for accelerated compute and GenAI. It replaces Hopper. Rubin is the next platform after Blackwell, followed by Feynman.
  • Grace: Nvidia’s CPU architecture designed for accelerated compute and GenAI.
  • GB300: Its Grace Blackwell Superchip, combining Nvidia’s latest “Blackwell Ultra” GPUs with Arm Holdings tech.

Connectivity:

  • NVLink Switches: Designed to aggregate and connect (or “scale-up”) Nvidia GPUs within one or a couple of server racks. This creates a sort of “mega-GPU.” GPU connections power greater efficiency, performance and computing scale (so cost advantages). 
    • The newest system allows for 576 total GPUs to be connected.
  • InfiniBand: Standardized interconnectivity tech providing an ultra-low latency computing network. This can connect larger batches of server racks for more scalability (or “scale-out”).
  • Nvidia Spectrum-X: Similar to InfiniBand in functionality and performance, but Ethernet-based.
    • Ethernet is vital for connecting larger compute clusters.
  • All 3 of these products are driving strong growth in this budding segment.

NVLink Fusion allows companies to build “semi-custom” AI infrastructure with Nvidia and its integration ecosystem. GPUs are general-purpose in nature; they’re not granularly designed for every single niche use case the way an Application-Specific Integrated Circuit (ASIC) is. NVLink Fusion can help Nvidia capture more of that custom-silicon demand by pairing with Marvell and a few other partners to more easily emulate purpose-built hardware.

The Nvidia GB300 NVL72 is its rack-scale computing system. Rack scale means the entire server rack powers computation rather than a single server. Because this includes Blackwell chips and NVLink switches, it’s partially in the compute bucket and partially in networking. This aggregated product is the core revenue driver right now.
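To make the scale-up vs. scale-out distinction concrete, here’s a back-of-the-envelope Python sketch. The 72 GPUs per rack comes from the NVL72 name and the 576-GPU ceiling from the NVLink bullet above; the 100,000-GPU cluster size is a hypothetical input, and real deployments vary:

```python
# Back-of-the-envelope sketch: scale-up (NVLink) vs. scale-out
# (InfiniBand / Spectrum-X) for a hypothetical GPU cluster.
# Assumptions: 72 GPUs per GB300 NVL72 rack and a 576-GPU max
# NVLink scale-up domain, per the figures cited above.

import math

GPUS_PER_RACK = 72        # GB300 NVL72: 72 GPUs per rack
NVLINK_DOMAIN_GPUS = 576  # max GPUs one NVLink domain can connect

def cluster_shape(total_gpus: int) -> dict:
    """Rough rack and NVLink-domain counts for a given GPU budget."""
    racks = math.ceil(total_gpus / GPUS_PER_RACK)
    nvlink_domains = math.ceil(total_gpus / NVLINK_DOMAIN_GPUS)
    return {
        "racks": racks,
        "nvlink_domains": nvlink_domains,       # scale-up islands
        "needs_scale_out": nvlink_domains > 1,  # InfiniBand / Spectrum-X
    }

print(cluster_shape(100_000))
# {'racks': 1389, 'nvlink_domains': 174, 'needs_scale_out': True}
```

Once a cluster outgrows a single NVLink domain, everything beyond it is stitched together by the scale-out networking products above, which is why Ethernet and InfiniBand matter so much at large cluster sizes.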

Software, Models & More:

  • NeMo: A framework of guided, step-by-step workflows for building granular GenAI models tailored to client-specific needs. It’s a standardized environment for model creation.
  • CUDA: Nvidia’s computing and programming platform, purpose-built to optimize software for Nvidia GPUs. CUDA helps power things like Nvidia Inference Microservices (NIM), which guide the deployment of GenAI models (after NeMo helps build them).
    • NIMs help “run CUDA everywhere” — in both on-premise and hosted cloud environments.
  • GenAI Model Training: One of two key layers to model development. This teaches a model by feeding it specific data.
  • GenAI Model Inference: The second key layer to model development. This pushes trained models to create new insights and uncover new, related patterns. It connects data dots that we didn’t realize were related. Training comes first. Inference comes second… third… fourth and so on (see the minimal sketch after this list).
  • Omniverse: Its digital twin-building platform. This allows companies to deeply test decisions and reactions in a zero-stakes, simulated environment, turbocharging experimentation and progress.
  • Cosmos: Its suite of world foundation models and apps for physical AI. It’s grounded in the laws of physics and everything needed to effectively understand the physical world.
  • Thor: Its platform for robotics and physical AI.
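To make the training-then-inference sequencing concrete, here’s a toy Python sketch (a hypothetical one-parameter model, not anything Nvidia-specific). Training repeatedly adjusts the model against known data; inference then reuses the frozen model on new inputs, over and over:

```python
# Toy sketch of training vs. inference: fit y ≈ w * x on a tiny dataset.
# Purely illustrative; not Nvidia's (or anyone's) production stack.

# Known data the model learns from (roughly y = 2x with noise).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Training: repeated passes that nudge the parameter to reduce error.
w = 0.0
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad

# Inference: the frozen model answers new questions, again and again.
print(f"learned w = {w:.2f}")                # ~2.0
print(f"prediction for x=10: {w * 10:.1f}")  # inference run #1
print(f"prediction for x=25: {w * 25:.1f}")  # inference run #2 ... and so on
```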

DGX: Nvidia’s full-stack platform combining its chipsets and software services.

b. Key Points