OpenAI looking beyond Nvidia’s expensive GPUs as it starts to use Google’s TPU AI silicon – but will it be as easy as swapping chips?


  • OpenAI adds Google TPUs to reduce dependence on Nvidia GPUs
  • TPU adoption highlights OpenAI’s push to diversify compute options
  • Google Cloud wins OpenAI as customer despite competitive dynamics

OpenAI has reportedly begun using Google’s tensor processing units (TPUs) to power ChatGPT and other products.

A report from Reuters, citing a source familiar with the move, notes this is OpenAI's first significant shift away from Nvidia hardware, which has so far formed the backbone of its compute stack.

Google is renting out TPUs through its cloud platform, adding OpenAI to a growing list of external customers that includes Apple, Anthropic, and Safe Superintelligence.

Not abandoning Nvidia

While the chips being rented are not Google’s most advanced TPU models, the agreement reflects OpenAI’s efforts to lower inference costs and diversify beyond both Nvidia and Microsoft Azure.

The decision comes as inference workloads grow alongside ChatGPT usage; the service now reportedly serves over 100 million active users daily.

That demand represents a substantial share of OpenAI’s estimated $40 billion annual compute budget.

Google’s v6e “Trillium” TPUs are built for steady-state inference and offer high throughput with lower operational costs compared to top-end GPUs.

Although Google declined to comment and OpenAI did not immediately respond to Reuters, the arrangement suggests OpenAI is broadening its infrastructure options.

OpenAI continues to rely on Microsoft's Azure for most of its deployments (Microsoft is the company's biggest investor by some way), but GPU supply issues and pricing pressures have exposed the risks of depending on a single vendor.

Bringing Google into the mix not only improves OpenAI's ability to scale compute, but also aligns with a broader industry trend toward mixing hardware sources for flexibility and pricing leverage.

There’s no suggestion that OpenAI is considering abandoning Nvidia altogether, but incorporating Google’s TPUs gives it more control over cost and availability.

The extent to which OpenAI can integrate this hardware into its stack remains to be seen, especially given the software ecosystem’s long-standing reliance on CUDA and Nvidia tooling.
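Part of what makes such a shift plausible is that modern ML frameworks increasingly target accelerators through a compiler layer rather than through CUDA directly. As a purely illustrative sketch (OpenAI has not disclosed its serving stack), the JAX snippet below shows how a single jit-compiled function runs unchanged on whichever backend XLA finds, GPU or TPU; the `dense_layer` function and its shapes are invented for the example.

```python
# Illustrative only: the same JAX code compiles for CPU, GPU, or TPU via XLA,
# with no CUDA-specific calls. The model and shapes here are hypothetical.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whatever accelerator backend is present
def dense_layer(params, x):
    w, b = params
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (jax.random.normal(k1, (512, 256)), jnp.zeros(256))
x = jax.random.normal(k2, (8, 512))

print(jax.devices())                  # e.g. [CudaDevice(id=0)] or TPU devices
print(dense_layer(params, x).shape)   # (8, 256) on any backend
```

The catch is that production serving stacks are rarely this clean: hand-tuned CUDA kernels and Nvidia-specific tooling are precisely the pieces that do not port automatically, which is why the switch is more than just swapping chips.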
