Deep Render raises £1.6M for image compression tech that mimics ‘neural processes of the human eye’

Deep Render, a London startup and spin-out of Imperial College that is applying machine learning to image compression, has raised £1.6 million in seed funding. Leading the round is Pentech, with participation from Speedinvest.
Founded in mid-2017 by Arsalan Zafar and Chri Besenbruch, who met while studying Computer Science at Imperial College London, Deep Render wants to help solve the data consumption problem that sees internet connections choke, especially during peak periods, which have been exacerbated by the lockdowns currently in place in many countries.
Specifically, the startup is taking what it claims is an entirely new approach to image compression, noting that image and video data comprises more than 80% of internet traffic, driven by video-on-demand and live streaming.
“Our ‘Biological Compression’ technology rebuilds media compression from scratch by using the advances of the machine learning revolution and by mimicking the neural processes of the human eye,” explains Deep Render co-founder and CEO Chri Besenbruch.
“Our secret sauce, so to speak, is in the way the data is compressed and sent across the network. The traditional technology relies on various modules each connected to each other – but which don’t actually ‘talk’ to each other. An image is optimised for module one before moving to module two, and it’s then optimised for module two and so on. This not only causes delays, it can cause losses in data which can ultimately reduce the quality and accuracy of the resulting image. Plus, if one stage of optimisation doesn’t work, the other modules don’t know about it so can’t correct any mistakes”.

Deep Render team
To remedy this, Besenbruch says Deep Render’s image compression technology replaces all of these individual components with one very large component that talks across its entire domain. This means that each step of compression logic is connected to the others in what’s known as an “end-to-end” training method.
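To make the "end-to-end" idea concrete, here is a minimal, hypothetical sketch (not Deep Render's actual code or architecture) of a learned codec in which the encoder, a differentiable stand-in for quantisation, and the decoder all sit in one computation graph and are trained against a single objective, rather than as separate hand-tuned modules:

```python
# Hedged sketch of an end-to-end learned image codec: every stage is
# trained jointly, so a gradient from the final loss reaches all of them.
import torch
import torch.nn as nn

class EndToEndCodec(nn.Module):
    def __init__(self, latent_channels: int = 32):
        super().__init__()
        # Analysis transform: pixels -> compact latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 5, stride=2, padding=2),
        )
        # Synthesis transform: latent -> reconstructed pixels.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2,
                               padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        latent = self.encoder(x)
        # During training, additive uniform noise stands in for rounding,
        # a common trick that keeps quantisation differentiable.
        if self.training:
            latent = latent + torch.empty_like(latent).uniform_(-0.5, 0.5)
        else:
            latent = torch.round(latent)
        return self.decoder(latent), latent

# Because all components share one graph, a single loss updates them all,
# unlike a modular pipeline where each stage is optimised in isolation.
model = EndToEndCodec()
image = torch.rand(1, 3, 64, 64)            # dummy 64x64 RGB image
reconstruction, latent = model(image)
loss = nn.functional.mse_loss(reconstruction, image)
loss.backward()
```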
“What’s more, Deep Render trains its machine learning platform with the end goal in mind,” adds Besenbruch. “This has the benefit of both boosting the efficiency and accuracy of the linear functions and extending the software’s capability to model and perform non-linear functions. Think of it as a line and curve. An image, by its nature, has a lot of curvature from changes in tone, light, brightness and colour. Expanding the compression software’s ability to consider each of these curves means it’s also able to tell which images are more visually pleasing. As humans, we do this intuitively. We know when colour is a little off, or the landscape doesn’t look quite right. We don’t even realise we do this most of the time, but it plays a major role in how we assess images and videos”.
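One hedged way to read "training with the end goal in mind" is a single rate-distortion objective, where one scalar trades off a proxy for file size against a proxy for visual quality, so every parameter is pushed toward the final goal at once. The weighting `lmbda` and the simple Gaussian rate proxy below are illustrative assumptions, not Deep Render's published loss:

```python
# Hedged illustration of a combined rate-distortion objective.
import torch

def rate_distortion_loss(reconstruction, original, latent, lmbda=0.01):
    # Distortion: how far the decoded image is from the source.
    distortion = torch.nn.functional.mse_loss(reconstruction, original)
    # Rate proxy: energy of the latent under a unit-Gaussian prior, a
    # crude stand-in for the bits an entropy coder would spend on it.
    rate = 0.5 * (latent ** 2).mean()
    # One scalar couples "how small is the file" and "how good does it
    # look", instead of separate objectives per pipeline module.
    return distortion + lmbda * rate
```

Used with the sketch above, `rate_distortion_loss(reconstruction, image, latent)` would replace the plain reconstruction loss; the network's non-linear activations are what let it model the "curvature" a fixed linear transform cannot.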
As a proof of concept, Deep Render carried out a fairly large-scale Amazon MTurk study comprising 5,000 participants to test its image compression algorithm against BPG (a market standard for image compression, and part of the video compression standard H.265). When asked to compare perceptual quality over the CLIC-Vision dataset, over 95% of participants rated Deep Render's images as more visually pleasing, even though they were just half the file size.
“Our technological breakthrough represents the foundation for a new class of compression methods,” claims the Deep Render co-founder.
Asked to name direct competitors, Besenbruch says a past competitor was Magic Pony, the image compression company bought by Twitter for a reported $150 million a year after being founded.
“Magic Pony was also looking at deep learning for solving the challenges of image and video compression,” he explains. “However, Magic Pony looked at improving the traditional compression pipeline via post and pre-processing steps using AI, and thus was ultimately still limited by its restrictions. Deep Render does not want to ‘improve’ the traditional compression pipeline; we are out to destroy it and rebuild it from its ashes”.
Beyond that, Besenbruch says the only similar competitors to Deep Render currently are WaveOne, based in Silicon Valley, and TuCodec, based in Shanghai. “Deep Render is the European answer to the war over the future of compression technology. All three companies incorporated roughly at the same time,” he adds.