‘Inspired by the human brain’: Intel debuts neuromorphic system that mimics grey matter, with the clear aim of making machines exponentially faster and much more power efficient, just like us

Neuromorphic computing is about mimicking the human brain’s structure to deliver more efficient data processing, including faster speeds and higher accuracy, and it’s a hot topic right now. A lot of universities and tech firms are working on it, including scientists at Intel who have built the world’s largest “brain-based” computing system for Sandia National Laboratories in New Mexico.

Intel’s creation, called Hala Point, is only the size of a microwave, but boasts 1.15 billion artificial neurons. That’s a massive step up from the 50-million-neuron capacity of its predecessor, Pohoiki Springs, which debuted four years ago. There’s a theme to Intel’s naming, in case you were wondering: they’re locations in Hawaii.

Hala Point is ten times faster than its predecessor and 15 times denser, with one million neurons on a single chip; Pohoiki Springs’ chips had only 128,000.

Making full use of it

Equipped with 1,152 Loihi 2 research processors (Loihi is a volcano in Hawaii), the Hala Point system will be tasked with harnessing the power of vast neuromorphic computation. “Our colleagues at Sandia have consistently applied our Loihi hardware in ways we never imagined, and we look forward to their research with Hala Point leading to breakthroughs in the scale, speed and efficiency of many impactful computing problems,” said Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs.

Since a neuromorphic system of this scale hasn’t existed before, Sandia has been developing special algorithms that will ultimately make use of the computer’s full capabilities.

“We believe this new level of experimentation – the start, we hope, of large-scale neuromorphic computing – will help create a brain-based system with unrivaled ability to process, respond to and learn from real-life data,” Sandia lead researcher Craig Vineyard said.

His colleague, fellow researcher Brad Aimone added, “One of the main differences between brain-like computing and regular computers we use today – in both our brains and in neuromorphic computing – is that the computation is spread over many neurons in parallel, rather than long processes in series that are an inescapable part of conventional computing. As a result, the more neurons we have in a neuromorphic system, the more complex a calculation we can perform. We see this in real brains. Even the smallest mammal brains have tens of millions of neurons; our brains have around 80 billion. We see it in today’s AI algorithms. Bigger is far better.”
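To make that parallelism concrete, here is a rough sketch in Python with NumPy standing in for the hardware. It is not Intel’s Loihi programming model or Sandia’s code; the toy integrate-and-fire neuron and all of the numbers are invented for illustration. It simply contrasts one vectorized update of every neuron at once with the equivalent one-at-a-time loop a conventional serial program would run.

```python
import numpy as np

# Illustrative only: a toy integrate-and-fire style layer, not Intel's
# Loihi neuron model or Sandia's algorithms. All numbers are made up.

rng = np.random.default_rng(0)
n_neurons = 100_000   # scale knob: adding neurons widens the parallel step
threshold = 1.0
leak = 0.9
inputs = rng.normal(size=n_neurons)  # hypothetical input currents


def step_parallel(potentials, inputs):
    """Advance every neuron at once: one vectorized update, no per-neuron loop."""
    potentials = leak * potentials + inputs
    spikes = potentials >= threshold                 # all neurons checked together
    potentials = np.where(spikes, 0.0, potentials)   # reset the ones that fired
    return potentials, spikes


def step_serial(potentials, inputs):
    """The same arithmetic as a conventional one-at-a-time loop, for contrast."""
    potentials = potentials.copy()
    spikes = np.zeros(len(potentials), dtype=bool)
    for i in range(len(potentials)):                 # a long series of steps
        potentials[i] = leak * potentials[i] + inputs[i]
        if potentials[i] >= threshold:
            spikes[i] = True
            potentials[i] = 0.0
    return potentials, spikes


start = np.zeros(n_neurons)
p_par, s_par = step_parallel(start, inputs)
p_ser, s_ser = step_serial(start, inputs)

print("same spikes either way:", bool(np.array_equal(s_par, s_ser)))
print(f"{int(s_par.sum())} of {n_neurons} neurons fired this step")
```

Under those toy assumptions, the takeaway is that growing n_neurons makes the parallel step richer without adding steps, while the serial loop gains one iteration per neuron added, which is the scaling argument Aimone is making.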
