Other Recent Earnings Reviews:
- Rubrik & Oracle
- Snowflake
- MongoDB & The Trade Desk
- Sea Limited & On Holding
- Nvidia
- CrowdStrike & Broadcom
- Axon & Mercado Libre
- Coupang & Zscaler
- Nu & Cava
- Hims
- 20 more earnings reviews from this season (search by ticker).
Other recent content:
Max Readers -- Access the Discord room to ask me questions, enjoy more sell-side research and collaborate with a large community of seasoned investors here. It's part of your subscription! Please ask if you need help getting in.
Table of Contents
- 1. Meta (META) – Model Delay & CFO Interview
- 2. Zscaler (ZS) – Morgan Stanley Investor Conference Notes
- 3. SoFi (SOFI) – More on Stablecoins
- 4. Mercado Libre (MELI) – Noisy Week
- 5. Axon (AXON) – CFO/COO Brittany Bagley Interviews with Morgan Stanley
- 6. Alphabet (GOOGL) – CFO Anat Ashkenazi Interviews with Morgan Stanley & More
- 7. Uber (UBER) – CFO Balaji Krishnamurthy’s Morgan Stanley Interview & Amazon
- 8. ServiceNow (NOW) – CEO Bill McDermott’s Interviews with Citizens JMP and Morgan Stanley
- 9. Headlines & Macro
1. Meta (META) – Model Delay & CFO Interview
a. Model Delay
There are reports that Meta is delaying their frontier model release (called "Avocado") from March to May because its performance is not yet matching the other world-class models delivered by Google and OpenAI. Here's how I see things. I'd love Meta to be the creator of best-in-class models, as that allows them to control their own destiny and completely avoid future issues that stem from reliance on other partners. They would fully own all of the engagement and monetization optimizations we hear about every quarter, and wouldn't have to compensate any partner for enabling them. They also wouldn't have to rely on any partner playing nice as the landscape evolves. We all remember the cross-app data sharing restrictions Apple implemented in 2021, and how long it took this company to plug the signal gap: about two years. Plan A is them building a frontier model... but plan B is certainly far from the end of the world. It's a great fallback.
If they fail? There are many leading models to choose from. Apple dominates the consumer app ecosystem and is a gatekeeper to most of the data that flows throughout it. When they stopped sharing information, there wasn't an alternative to fill the void. There are a handful of frontier model vendors that have world-class products and are constantly leapfrogging each other. META also has a mountain of its own proprietary data to customize these models and ensure they're still delivering unique engagement and monetization edges over the competition (what advertisers care about). If one leader decides to restrict access for whatever reason... there are close substitutes in this case and META will be far better off than it was 4 years ago when the Apple partner reliance risk played out.
The worst-case scenario here is them waving the white flag and using partner models. And? That would mean lower CapEx and a boost in free cash flow generation (and EBIT over time as depreciation growth slows). Again, that is plan B. I would prefer plan A. But plan B is comforting. And a model launch delay has zero impact on expectations for the probable compounding of the next couple of years.
b. CFO Interview
Data Center Buildouts & CapEx:
Meta is getting creative to accelerate the installation of more compute. It's using aluminum-frame tents to add capacity in weeks, rather than waiting up to 24 months for large permanent data centers to be constructed. These tents do create cooling problems, forcing Meta to throttle power during heat waves, but they allow the company to start using purchased equipment for training and inference much more quickly.
The company thinks it has CapEx forecasting for its core business needs pretty much down to a science. That's not exactly a daunting task when the company has more things to do with the core business than it can currently budget for, and can easily reallocate CapEx from other use cases to this one. Still good to hear. It also thinks it has a really good feel for training needs and the associated CapEx dollars, but inference is more of an "art" at this point. One of the things CFO Susan Li is most concerned about is underestimating inference needs. Meta continues to view the risk of underspending as bigger than the risk of overspending, and will execute its currently aggressive plans. Considering their ability to reallocate the compute in many different productive ways, I agree.
AI Driving Core Business Improvements:
CFO Susan Li reviewed the concrete monetization and engagement improvements AI investments have consistently delivered over the last several quarters. They still feel like they're in the very early innings of AI improving the core business, and continue to believe there is far more compute their family of apps' monetization engine could utilize to improve time spent and revenue per user across its main products. The company is still transitioning from traditional machine learning systems, which mainly consider past customer behavior, to an agentic, LLM-powered system that understands context and dynamic intent. They're moving very slowly with this process, and so far have only implemented LLM-based ad matching on Threads, their smallest app.
They also have not yet built a state-of-the-art, world-class model to integrate across these ranking systems. They are behind Google, Anthropic, and OpenAI, and just delayed the release of their next model by at least two months because of that. To me, this either means Meta figures it out and adds a new layer of progress to enjoy across its core business, or, as I described above, they wave the white flag, use Google, and enjoy those same benefits while paying that partner a small piece of the added value.
Furthermore, Meta is still in the process of unifying its monetization and engagement optimization systems into one central brain, something I've been talking about over the last couple of quarters. As its advertisements get more targeted (and they're getting scary good), they stop feeling like advertisements. Instead, they offer interesting and inspiring things to explore and buy. As this process unfolds, understanding a reaction to a friend's post and a reaction to a merchant's t-shirt become increasingly similar problems, so the work can be unified to create a more holistic understanding of behavior. All I can say is that it's working well on me. I purchase more things from those apps than I ever have before, and I mind the advertisements less than ever before as well.
On The Perception That Meta is Behind AI Competitors:
Li acknowledged that Meta does not have a world-class model in the market right now. At the same time, she also talked about having world-class distribution, with Meta AI now at 1 billion+ users. They've been able to deliver great product-market fit without a world-class model, and think they'll be able to keep doing so even if they never get there. This isn't to say that they're not optimistic about their future ability to deliver fantastic LLMs. It's more to say that having the most powerful model might not be the most important thing for Meta's future profitable growth. Even with great, but not best-in-class, models, they think their ability to personalize agentic experiences for customers based on their massive base of user history will be a differentiator in the AI race.
More Notes on AI:
- They are committed to offering open-source models, but will continue to keep some of their models closed going forward.
- They are confident that Manus-based business agents represent a promising future opportunity.
- They continue to use their own chips for some traditional machine learning, ranking, and recommendation workloads, and still plan to use those chips for model training in the future.