A new neuromorphic approach could make future robots smarter

Scientists have harnessed neuromorphic computing to help robots keep learning about new objects after they have been deployed.

For the uninitiated, neuromorphic computing replicates the neural structure of the human brain to create algorithms that can deal with the uncertainties of the natural world.

Intel Labs has developed one of the most notable architectures in the field: the Loihi neuromorphic chip.


Loihi is made up of around 130,000 artificial neurons, which send information to each other through a “spiking” neural network (SNN). The chip has already powered a variety of systems, from intelligent artificial skin to an electronic “nose” that recognizes the odors emitted by explosives.
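
Loihi’s programming model is beyond the scope of this article, but the basic unit of an SNN, the spiking neuron, can be sketched in a few lines. Below is a minimal leaky integrate-and-fire neuron in plain Python; the time constant, threshold, and drive current are arbitrary values chosen for illustration, not anything taken from Intel’s chip.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron, the basic building block
# of a spiking neural network. This is an illustrative sketch, not Intel's
# Loihi implementation; tau, v_thresh, and the input drive are arbitrary.
def simulate_lif(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)  # leaky integration toward the input
        if v >= v_thresh:              # crossing the threshold emits a spike
            spike_times.append(t)
            v = v_reset                # membrane potential resets after a spike
    return spike_times

# A constant drive above threshold yields a regular spike train.
print(simulate_lif(np.full(200, 1.5)))
```

Rather than passing continuous values between layers, neurons in such a network communicate only through these discrete spikes, which is what makes the approach so power-efficient in hardware.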

Intel Labs introduced another application this week. The research unit teamed up with the Italian Institute of Technology and the Technical University of Munich to apply Loihi to a new continual learning approach for robotics.

Interactive learning

The method targets systems that interact with environments without constraints, such as future robotic assistants for healthcare and manufacturing.

Existing deep neural networks can struggle with object learning in these scenarios: they require extensive, well-prepared training data and careful retraining for each new object they encounter. The new neuromorphic approach aims to overcome these limitations.

The researchers first implemented an SNN on Loihi. The architecture localizes learning to a single layer of plastic synapses, and it accounts for different views of objects by adding new neurons on demand.

As a result, the robot can keep learning autonomously while it interacts with a user.
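
The team’s spiking implementation is hardware-specific, but the general idea of a single plastic layer that grows on demand can be sketched in conventional Python. In the hypothetical GrowingLayer below, each neuron stores a weight vector for one view of an object; a sufficiently similar input updates the best-matching neuron’s synapses locally, and anything else allocates a new neuron. The cosine-similarity measure, threshold, and learning rate are assumptions made for the example, not the authors’ parameters.

```python
import numpy as np

# Illustrative, non-spiking analogue of the idea described above: one
# plastic layer with local weight updates and on-demand neuron allocation.
class GrowingLayer:
    def __init__(self, input_dim, similarity_threshold=0.9, learning_rate=0.1):
        self.weights = np.empty((0, input_dim))  # one weight row per neuron
        self.labels = []                          # object label per neuron
        self.threshold = similarity_threshold
        self.lr = learning_rate

    def _similarities(self, x):
        x = x / np.linalg.norm(x)
        w = self.weights / np.linalg.norm(self.weights, axis=1, keepdims=True)
        return w @ x  # cosine similarity between x and each neuron

    def observe(self, x, label):
        """Learn one view of an object: update the best-matching neuron's
        synapses locally, or allocate a new neuron if none matches."""
        if self.labels:
            sims = self._similarities(x)
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold and self.labels[best] == label:
                # local plastic update: nudge the weights toward the input
                self.weights[best] += self.lr * (x - self.weights[best])
                return
        # no sufficiently similar neuron exists: grow the layer by one neuron
        self.weights = np.vstack([self.weights, x])
        self.labels.append(label)

    def recognize(self, x):
        if not self.labels:
            return None
        return self.labels[int(np.argmax(self._similarities(x)))]

# Usage: two object views are learned, then a noisy view is recognized.
layer = GrowingLayer(input_dim=4)
layer.observe(np.array([1.0, 0.0, 0.0, 0.0]), "mug")
layer.observe(np.array([0.0, 1.0, 0.0, 0.0]), "ball")
print(layer.recognize(np.array([0.9, 0.1, 0.0, 0.0])))  # -> "mug"
```

Because every update touches only the weights of one neuron, nothing the network already knows is overwritten, which is what lets learning continue safely after deployment.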

Neuromorphic simulations

The team tested their approach in a simulated 3D environment. In this configuration, the robot actively detects objects by moving an event-based camera that works as its eyes.

The camera sensor “sees” objects in a way inspired by the tiny fixational eye movements called “microsaccades.” If the object it sees is new, the SNN representation is learned or updated. If the object is known, the network recognizes it and provides feedback to the user.
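
Put together, each step of the interaction boils down to: sense, try to recognize, and fall back to learning with the user’s help. The sketch below reuses the hypothetical GrowingLayer from the earlier example; the confidence threshold and the ask_user_for_label callback are stand-ins invented for illustration, and in the real system the input would be a spike stream from the event camera rather than a plain feature vector.

```python
import numpy as np

# One step of the recognize-or-learn loop, building on the GrowingLayer
# sketch above. `ask_user_for_label` is a hypothetical callback that asks
# the user to name an unknown object; the 0.9 confidence cutoff is an
# assumption for the example.
def interaction_step(layer, x, ask_user_for_label, confidence=0.9):
    if layer.weights.shape[0] > 0:
        sims = layer._similarities(x)
        best = int(np.argmax(sims))
        if sims[best] >= confidence:
            return f"I recognize this as: {layer.labels[best]}"  # user feedback
    label = ask_user_for_label()  # the user names the unknown object
    layer.observe(x, label)       # the representation is learned or updated
    return f"Learned a new view of: {label}"
```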
