OpenAI Diversifies with Google TPU, Boosting Google's AI Chip Credibility

3 days ago

Morgan Stanley has highlighted that OpenAI, backed by Microsoft (MSFT), may start utilizing Google's (GOOG, GOOGL) Tensor Processing Units (TPUs) for its artificial intelligence (AI) inference tasks. This move would signify a strong endorsement of Google's hardware technology. Historically, OpenAI has relied on NVIDIA (NVDA) chips for training and inference computations. By incorporating Google's TPUs, OpenAI is diversifying its supplier base.

Analysts led by Brian Nowak noted reports suggesting an impending agreement between OpenAI and Google, under which OpenAI would use Google Cloud's computing resources and rent Google's TPUs for inference workloads. OpenAI aims to meet its growing inference demands while managing costs effectively. However, OpenAI reportedly won't gain access to Google's most advanced TPUs, which are reserved for training Google's own Gemini models.

Morgan Stanley believes the agreement could accelerate Google Cloud's growth and bolster confidence in Google's AI chips. OpenAI would stand out as a significant TPU client, joining others such as Apple (AAPL). Even without access to the latest TPUs, OpenAI's choice to partner with Google underscores Google's leadership in the ASIC ecosystem.

The decision to use Google's TPUs may also stem from tight NVIDIA GPU supply amid high demand. The move could weigh on Amazon's (AMZN) AWS and its Trainium chips, as OpenAI runs AI workloads across major cloud providers including Google Cloud, Microsoft Azure, Oracle (ORCL), and CoreWeave (CRWV), but not Amazon.

Disclosures

I/We may personally own shares in some of the companies mentioned above. However, those positions are not material to either the company or to my/our portfolios.