Is Neuromorphic Computing The Answer For Autonomous Driving And Personal Robotics?

If you follow the latest developments in technology, you’ll be aware that there has been an awful lot of discussion about what the next big thing is likely to be. The most popular choice for many is augmented reality (AR) glasses, others are pointing toward fully autonomous vehicles, and some are still clinging to the promise of 5G. With the announcement of Amazon’s Astro just a few weeks ago, personal robots and virtual companions have thrown their hats into the ring as well.
While there isn’t much consensus about what that next step will be, there is no doubt that whatever it happens to be, it will be controlled by, enabled by, or enhanced with artificial intelligence (AI). The idea that AI and machine learning (ML) are the future of our world appears to be a foregone conclusion.


However, let’s take an honest look at where certain technologies stand in functionality versus their initial expectations. It’s safe to conclude that the outcomes are disappointing on numerous levels. If we extend that line of thinking to what AI/ML was supposed to accomplish for us in general, we can draw an equally disappointing conclusion.
To be fair, there have been some remarkable improvements in many fields that AI has enabled. Advanced analytics, neural network training, and related areas (where vast amounts of data are used to discover patterns, learn rules, and later apply those rules) have proven hugely amenable to current AI techniques.
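To make that learn-the-rules-then-apply-them pattern concrete, here’s a minimal sketch in Python using scikit-learn. The synthetic dataset, the hidden “rule,” and the choice of logistic regression are all illustrative assumptions, not anything specific from the article.

```python
# A minimal sketch of the "discover patterns from data, then apply them"
# workflow that current AI techniques handle well. Everything here is
# illustrative: a synthetic dataset with a hidden linear rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))                               # 1,000 labeled examples
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(int)    # the hidden "rule"

model = LogisticRegression().fit(X[:800], y[:800])           # learn the rule
print("held-out accuracy:", model.score(X[800:], y[800:]))   # apply it to new data
```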


At the same time, when we consider the autonomous vehicle application, it is becoming increasingly evident that simply feeding ever-larger amounts of data into algorithms that crank out ever-more refined but still flawed ML models isn’t practical. Real-world Level 5 autonomous driving is still a long way off. And given the number of deaths and accidents attributed to efforts such as Tesla’s Autopilot, it’s likely time to think about a different approach.


Also, while we’re just beginning to enter the personal robotics age, it’s easy to see how the conceptual similarities between autonomous vehicles and robots could lead to similar problems in this new area. The main problem is that there’s no way to feed every possible scenario into an AI training model and generate an established answer for how to respond to every situation. The randomness of the real world and its capacity for surprise are simply too powerful an influence.


What’s really needed is a computer that can think and learn on its own and adapt that learning to new situations. As crazy and possibly controversial as that may sound, it’s the essence of the goal that researchers working in the field of neuromorphic computing have set out to achieve. The idea is to re-create, in digital form, the design and function of the most flexible computing/thinking machine we have ever seen: the human brain. Based on fundamentals from neuroscience, neuromorphic chips try to recreate a set of neurons connected by digital synapses that send electrical signals between them, much like the brains of biological organisms.
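Purely as an illustration of that neuron-and-synapse organization, here’s a toy data model in Python. The class names and the fire-and-propagate logic are hypothetical simplifications; real neuromorphic hardware implements this in silicon, not software objects.

```python
from dataclasses import dataclass, field

# Toy model of the neuron/synapse structure described above. Entirely
# hypothetical and simplified (feed-forward only, no cycles).

@dataclass
class Synapse:
    target: "Neuron"   # the downstream neuron
    weight: float      # signal strength across this connection

@dataclass
class Neuron:
    threshold: float = 1.0
    potential: float = 0.0
    synapses: list = field(default_factory=list)  # outgoing connections

    def receive(self, signal: float) -> None:
        """Accumulate input; fire and propagate when the threshold is crossed."""
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0            # reset after firing
            for s in self.synapses:         # the spike travels onward
                s.target.receive(s.weight)
```

A chain of these objects passes a signal along only when each neuron’s threshold is crossed, which is the event-driven behavior described below.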


It’s a field of academic research that has been around for quite a while now, but only recently has it begun to make significant progress and attract wider recognition. Hidden in the flurry of tech-industry announcements made over the past couple of weeks, Intel released the second version of its neuromorphic chip, called Loihi 2, along with an open-source software framework it has named Lava.


To set realistic expectations, Loihi 2 is not going to be commercially available; it’s what’s known as a research chip. The latest version has 1 million neurons, still five orders of magnitude short of the roughly 100 billion neurons found in the human brain. It’s nonetheless a remarkable, impressive achievement that offers 10x the speed and 15x the density of its 2018 predecessor (it’s built on a pre-production version of the Intel 4 process technology), along with improved energy efficiency. Furthermore, it provides a better (and more straightforward) way of connecting its unique design to conventional chips.


Intel learned a lot from the first Loihi project, and one of the most important lessons was that creating software for this completely new architecture is extremely difficult. That’s why an essential part of the company’s announcement was the launch of Lava, an open-source software framework and set of tools for creating applications for Loihi. The company also offers tools that simulate the operation of Loihi on conventional CPUs and GPUs, letting developers write software without access to the chips themselves.
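To give a sense of what that looks like, here’s a minimal sketch of building and simulating a small two-layer spiking network on a CPU, based on the examples published with the initial Lava release. The module paths, port names, and constructor arguments are assumptions from that early release and should be checked against the current Lava documentation.

```python
import numpy as np

# Minimal Lava sketch: two populations of leaky integrate-and-fire (LIF)
# neurons connected by dense synapses, run entirely in CPU simulation.
# Module paths follow the initial open-source release and may have changed.
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

lif1 = LIF(shape=(3,))                       # 3 input neurons
dense = Dense(weights=np.random.rand(2, 3))  # synapses from 3 -> 2 neurons
lif2 = LIF(shape=(2,))                       # 2 output neurons

# Wire spikes out of lif1 through the synapses into lif2.
lif1.s_out.connect(dense.s_in)
dense.a_out.connect(lif2.a_in)

# Simulate 100 timesteps on a conventional CPU; no Loihi hardware needed.
lif2.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
lif2.stop()
```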


What’s fascinating about neuromorphic chips is that they operate in a very different way from both traditional CPU computing models and parallel, GPU-style models of computing, yet they can be used to accomplish some of the same objectives. In a faster, more efficient, and less data-intensive way, neuromorphic chips such as Loihi 2 can deliver results similar to what traditional AI aims for. Through a series of event-driven spikes that occur asynchronously and trigger neurons to react in a variety of ways, much as the human brain works (versus the structured, synchronous processing found in CPUs and GPUs), a neuromorphic device can “learn” things on its own. That makes it perfect for devices that need to respond to new stimuli at a rapid pace.
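To make the spiking idea concrete, here’s a minimal leaky integrate-and-fire simulation in plain NumPy. It’s an illustrative sketch of the general technique, not code from Loihi 2 or Lava, and every constant in it is an arbitrary assumption.

```python
import numpy as np

# Minimal leaky integrate-and-fire network: neurons accumulate input,
# fire a spike when they cross a threshold, then reset. All sizes and
# constants below are arbitrary, illustrative choices.
rng = np.random.default_rng(0)

N = 100                             # number of neurons
W = rng.normal(0.0, 0.4, (N, N))    # synaptic weights
v = np.zeros(N)                     # membrane potentials
LEAK, V_TH = 0.9, 1.0               # leak factor and firing threshold

def step(inputs):
    """One tick: integrate, fire, reset; return recurrent input and spikes."""
    global v
    v = LEAK * v + inputs           # leaky integration of incoming current
    spikes = v >= V_TH              # neurons at/above threshold fire...
    v[spikes] = 0.0                 # ...and reset
    # Only the neurons that actually spiked propagate anything onward,
    # so work scales with activity rather than with network size. That
    # sparsity is the source of neuromorphic efficiency.
    return W[:, spikes].sum(axis=1), spikes

recurrent = np.zeros(N)
for t in range(20):
    stimulus = rng.random(N) * 0.3  # random external stimulus
    recurrent, spikes = step(stimulus + recurrent)
    print(f"t={t:2d}  spikes fired: {spikes.sum()}")
```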


These capabilities are why such chips are attractive to the people designing and building robotic systems, which, in essence, is what autonomous cars are. Once commercially viable neuromorphic chips become available, they could power the kinds of autonomous vehicles and personal robots we have in our science-fiction-inspired visions.


Naturally, neuromorphic computing isn’t the only innovative approach to advancing the state of the art; there’s also a great deal of work occurring in the better-known field of quantum computing. As with quantum computing, neuromorphic efforts are very complicated and are mostly seen as research projects for corporate R&D labs and academia. Unlike quantum computing, however, neuromorphic computing doesn’t face the extreme physical requirements (temperatures close to absolute zero) and power demands that quantum systems currently do. In fact, one of the appealing aspects of neuromorphic architectures is that they’re built to run at low power, which makes them ideal for a range of battery-powered mobile applications (like autonomous vehicles and robots).


However, despite the recent advances, it’s essential to keep in mind that the commercialization of neuromorphic chips is still some time from becoming a reality. Even so, it’s not difficult to be intrigued and excited by a technology that holds the possibility of making AI-powered devices genuinely intelligent instead of just highly trained. The difference may appear subtle; ultimately, however, it’s the kind of technology that will likely be required to make the “next big things” actually happen in the way we all envision them.
