Meta’s new AI chip is another step toward reducing its reliance on Nvidia’s GPUs: despite spending billions on H100 and A100 accelerators, Facebook’s parent firm sees a clear path to a future free of Nvidia hardware
Meta recently unveiled details of its AI training infrastructure, revealing that it currently relies on almost 50,000 Nvidia H100 GPUs to train its open-source Llama 3 LLM.
Like a lot of major tech firms involved in AI, Meta wants to reduce its reliance on Nvidia’s hardware and has taken another step in that direction.
Meta already has its own AI inference accelerator, Meta Training and Inference Accelerator (MTIA), which is tailored for the social media giant’s in-house AI workloads, especially those improving experiences across its various products. The company has now shared insights about its second-generation MTIA, which significantly improves upon its predecessor.
This revamped version of MTIA, which handles inference but not training, doubles the compute and memory bandwidth of its predecessor while maintaining the close tie-in with Meta’s workloads. It is designed to efficiently serve the ranking and recommendation models that deliver suggestions to users. The new chip architecture aims to provide a balanced mix of compute power, memory bandwidth, and memory capacity to meet the particular needs of these models, and its enlarged SRAM enables high performance even at reduced batch sizes.
The latest accelerator consists of an 8×8 grid of processing elements (PEs), offering dense compute performance 3.5 times greater, and sparse compute performance reportedly seven times better, than MTIA v1. The gains stem from optimizations in the new architecture around the pipelining of sparse compute, as well as how data is fed into the PEs. Key features include triple the size of the local storage, double the on-chip SRAM with a 3.5x increase in its bandwidth, and double the LPDDR5 capacity.
Software stack
Along with the hardware, Meta is also co-designing the software stack with the silicon to deliver an optimal overall inference solution. The company says it has developed a robust, rack-based system that accommodates up to 72 accelerators, designed to clock the chip at 1.35GHz and run it at 90W.
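Why would sparse compute outpace dense compute by a wider margin? As a rough illustration (this is not Meta’s actual microcode, just the general idea behind sparsity-aware pipelines), a dot product over mostly-zero activations can skip the multiply-accumulates that would contribute nothing:

```python
# Illustrative sketch: a sparsity-aware pipeline skips multiply-accumulates
# (MACs) wherever an activation is zero, so less work yields the same result.

def dense_dot(a, b):
    """Multiply-accumulate over every element, zeros included."""
    acc, ops = 0.0, 0
    for x, y in zip(a, b):
        acc += x * y
        ops += 1
    return acc, ops

def sparse_dot(a, b):
    """Skip multiply-accumulates where the activation is zero."""
    acc, ops = 0.0, 0
    for x, y in zip(a, b):
        if x != 0.0:
            acc += x * y
            ops += 1
    return acc, ops

activations = [0.0, 2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 3.0]  # mostly zeros
weights     = [0.5, 1.0, 2.0, 0.25, 2.0, 1.0, 0.5, 1.0]

d_val, d_ops = dense_dot(activations, weights)
s_val, s_ops = sparse_dot(activations, weights)
print(d_val == s_val)  # same result either way
print(d_ops, s_ops)    # 8 dense MACs vs 3 sparse MACs
```

The real speedup on hardware depends on how well the pipeline keeps the PEs fed while skipping work, which is exactly what Meta says the new architecture optimizes.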
Among other developments, Meta says it has also upgraded the fabric between accelerators, significantly increasing bandwidth and system scalability. Triton-MTIA, a compiler backend built to generate high-performance code for MTIA hardware, further optimizes the software stack.
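The rack figures above invite a quick back-of-envelope check. Taking 72 accelerators per rack at a 90W chip power budget each (and ignoring rack-level overhead such as host CPUs, fabric, and cooling, which Meta has not broken out):

```python
# Back-of-envelope rack power from the figures Meta has shared:
# 72 accelerators per rack, each running at a 90 W chip power budget.

ACCELERATORS_PER_RACK = 72
WATTS_PER_CHIP = 90

rack_chip_power_w = ACCELERATORS_PER_RACK * WATTS_PER_CHIP
print(rack_chip_power_w)         # 6480 W of accelerator silicon per rack
print(rack_chip_power_w / 1000)  # 6.48 kW
```

Roughly 6.5kW of accelerator silicon per rack is modest next to racks of H100s, which is consistent with MTIA being a lean, workload-specific inference part rather than a general-purpose training GPU.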
The new MTIA won’t, on its own, free Meta from Nvidia’s GPUs, but it is another step in that direction.