Nvidia rival claims DeepSeek world record as it delivers industry-first performance with 95% fewer chips

- SambaNova runs DeepSeek-R1 at 198 tokens/sec using 16 custom chips
- The SN40L RDU chip is reportedly 3X faster, 5X more efficient than GPUs
- 5X speed boost is promised soon, with 100X capacity by year-end on cloud
Chinese AI upstart DeepSeek has quickly made a name for itself in 2025: its R1 large-scale open source language model, built for advanced reasoning tasks, delivers performance on par with the industry’s top models while being more cost-efficient.
SambaNova Systems, an AI startup founded in 2017 by experts from Sun/Oracle and Stanford University, has now announced what it claims is the world’s fastest deployment of the DeepSeek-R1 671B LLM to date.
The company says it has achieved 198 tokens per second, per user, using just 16 custom-built chips, replacing the 40 racks of 320 Nvidia GPUs that would typically be required.
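As a rough sanity check of the headline's "95% fewer chips" claim, the figures above work out as follows (illustrative arithmetic only, using the counts quoted in the article, not any independent measurement):

```python
# Illustrative arithmetic based on figures quoted in the article.
gpus_typical = 320      # ~40 racks of Nvidia GPUs the article says are typically required
sambanova_chips = 16    # SN40L RDU chips SambaNova says it used instead

# Fractional reduction in chip count
reduction = 1 - sambanova_chips / gpus_typical
print(f"Chip count reduction: {reduction:.0%}")  # → 95%
```

That 95% reduction is exactly the figure in the headline.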
Independently verified
“Powered by the SN40L RDU chip, SambaNova is the fastest platform running DeepSeek,” said Rodrigo Liang, CEO and co-founder of SambaNova. “This will increase to 5X faster than the latest GPU speed on a single rack – and by year-end, we will offer 100X capacity for DeepSeek-R1.”
While Nvidia’s GPUs have traditionally powered large AI workloads, SambaNova argues that its reconfigurable dataflow architecture offers a more efficient solution. The company claims its hardware delivers three times the speed and five times the efficiency of leading GPUs while maintaining the full reasoning power of DeepSeek-R1.
“DeepSeek-R1 is one of the most advanced frontier AI models available, but its full potential has been limited by the inefficiency of GPUs,” said Liang. “That changes today. We’re bringing the next major breakthrough – collapsing inference costs and reducing hardware requirements from 40 racks to just one – to offer DeepSeek-R1 at the fastest speeds, efficiently.”
George Cameron, co-founder of AI benchmarking firm Artificial Analysis, said his company had “independently benchmarked SambaNova’s cloud deployment of the full 671 billion parameter DeepSeek-R1 Mixture of Experts model at over 195 output tokens/s, the fastest output speed we have ever measured for DeepSeek-R1. High output speeds are particularly important for reasoning models, as these models use reasoning output tokens to improve the quality of their responses. SambaNova’s high output speeds will support the use of reasoning models in latency-sensitive use cases.”
DeepSeek-R1 671B is now available on SambaNova Cloud, with API access offered to select users. The company is scaling capacity rapidly, and says it hopes to reach 20,000 tokens per second of total rack throughput “in the near future”.
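To put that rack-throughput goal in context, a back-of-the-envelope sketch using only the figures quoted in the article (real serving capacity would depend on batching, sequence lengths, and other factors the article does not cover):

```python
# Back-of-the-envelope sketch using figures quoted in the article;
# actual serving capacity depends on batching and workload details.
per_user_tok_s = 198        # claimed per-user output speed
rack_target_tok_s = 20_000  # stated near-term total rack throughput goal

# Upper bound on concurrent users served at the full per-user speed
max_users = rack_target_tok_s // per_user_tok_s
print(f"~{max_users} concurrent users at full per-user speed")
```

In other words, the stated rack target would cover on the order of a hundred simultaneous users at the full 198 tokens/sec each.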