OpenAI looking beyond Nvidia’s expensive GPUs as it starts to use Google’s TPU AI silicon – but will it be as easy as swapping chips?


- OpenAI adds Google TPUs to reduce dependence on Nvidia GPUs
- TPU adoption highlights OpenAI’s push to diversify compute options
- Google Cloud wins OpenAI as customer despite competitive dynamics
OpenAI has reportedly begun using Google’s tensor processing units (TPUs) to power ChatGPT and other products.
A report from Reuters, citing a source familiar with the move, notes this is OpenAI's first major shift away from Nvidia hardware, which has so far formed the backbone of its compute stack.
Google is leasing TPUs through its cloud platform, adding OpenAI to a growing list of external customers that includes Apple, Anthropic, and Safe Superintelligence.
Not abandoning Nvidia
While the chips being rented are not Google’s most advanced TPU models, the agreement reflects OpenAI’s efforts to lower inference costs and diversify beyond both Nvidia and Microsoft Azure.
The decision comes as inference workloads grow alongside ChatGPT usage; the service now reportedly serves over 100 million active users daily.
That demand represents a substantial share of OpenAI’s estimated $40 billion annual compute budget.
Google’s v6e “Trillium” TPUs are built for steady-state inference and offer high throughput with lower operational costs compared to top-end GPUs.
Although Google declined to comment and OpenAI did not immediately respond to Reuters, the arrangement suggests a deepening of infrastructure options.
OpenAI continues to rely on Microsoft-backed Azure for most of its deployment (Microsoft is the company’s biggest investor by some way), but supply issues and pricing pressures around GPUs have exposed the risks of depending on a single vendor.
Bringing Google into the mix not only improves OpenAI's ability to scale compute but also aligns with a broader industry trend toward mixing hardware sources for flexibility and pricing leverage.
There’s no suggestion that OpenAI is considering abandoning Nvidia altogether, but incorporating Google’s TPUs adds more control over cost and availability.
The extent to which OpenAI can integrate this hardware into its stack remains to be seen, especially given the software ecosystem’s long-standing reliance on CUDA and Nvidia tooling.
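That reliance is easy to underestimate. One path around it is a compiler-backed framework such as Google's JAX, which lowers the same program to TPU, GPU, or CPU through XLA, whereas hand-written CUDA kernels run only on Nvidia silicon. The sketch below is a generic illustration of that portability, not OpenAI's actual stack: a jitted attention-score computation that runs unchanged on whatever accelerator is attached.

```python
# A minimal, hypothetical sketch of accelerator-agnostic inference code.
# The same jitted function compiles for TPU, GPU, or CPU via XLA;
# a custom CUDA kernel doing the same work would be Nvidia-only.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whichever backend JAX finds
def attention_scores(q, k):
    # Scaled dot-product scores: the dense matmul at the heart of
    # transformer inference workloads
    return jnp.einsum("...qd,...kd->...qk", q, k) / jnp.sqrt(q.shape[-1])

kq, kk = jax.random.split(jax.random.PRNGKey(0))
q = jax.random.normal(kq, (8, 128, 64))  # (batch, query_len, head_dim)
k = jax.random.normal(kk, (8, 128, 64))  # (batch, key_len, head_dim)

print(jax.devices())                  # e.g. TPU devices on a TPU VM, CUDA devices on a GPU box
print(attention_scores(q, k).shape)   # (8, 128, 128) regardless of backend
```

Code written at this level moves between vendors cheaply; the hard part is everything below it, such as custom kernels, profiling tools, and serving infrastructure tuned to CUDA over years.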