Roboticists have long struggled with the problem of control. No, not preventing Skynet from taking over and dispatching killer robots with inexplicable Austrian accents. Most roboticists would be happy if they could develop a robot capable of walking around a room without falling over. In fact, there’s a famous anecdote about a roboticist’s friend who regularly sent him lists of tasks – dexterous manipulation, climbing stairs, that kind of thing – all tasks the friend’s toddler could perform that the roboticist’s robot could not.
A big part of the problem is the sheer variety of robotic geometries and designs. Humans have a sixth sense – proprioception, an awareness of where the parts of our own body are and how much force to apply for a given task. But a robot has no such built-in awareness: for each new robotic design to interact successfully with its environment, the whole architecture of control – the communication between the sensors and the actuators – has to be redesigned.
But this could be about to change, thanks to research from the Université libre de Bruxelles. Researchers Nithin Mathews, Anders Lyhne Christensen, Rehan O’Grady, Francesco Mondada, and Marco Dorigo have developed modular robots – made up of many different units – that can merge, split, and reconfigure themselves, all while maintaining sensory control. They call them mergeable nervous system robots (MNS robots). One individual robot acts as a centralized decision-maker, referred to as the brain unit – but additional robots can autonomously join the brain unit as and when needed, changing the shape and structure of the overall system. The robots can heal themselves by replacing malfunctioning parts – even when the malfunctioning part is a brain unit.
The crucial contribution of this research lies in the robots’ control logic. Previous systems tied sensory and motor control closely to the body shape and type of the robot: proprioception had to be reprogrammed for each new robotic body. The brain centrally sends out commands to the various parts of the body, but because the shape of that body is built in, the resulting nervous system is not very flexible. The new system takes a different approach: commands are issued to the brain unit at a higher level of abstraction, and the robot can then adapt its body – by merging and splitting units – to respond appropriately to its environment.
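To make the idea concrete, here is a minimal sketch (not the authors’ implementation – all names here are hypothetical) of a mergeable nervous system: units form a tree rooted at a single brain unit, high-level commands are issued only to the brain and propagate down, and units can merge into or split off from the body, with the control topology updating automatically.

```python
class Unit:
    """One robot unit in a hypothetical mergeable nervous system."""

    def __init__(self, name):
        self.name = name
        self.parent = None     # the unit this one takes commands from
        self.children = []     # units that take commands from this one

    def merge_into(self, parent):
        """Join another unit's body; that body's brain now commands this unit."""
        if self.parent is not None:
            self.parent.children.remove(self)
        self.parent = parent
        parent.children.append(self)

    def split(self):
        """Detach from the parent body; this unit becomes its own brain."""
        if self.parent is not None:
            self.parent.children.remove(self)
            self.parent = None

    def brain(self):
        """The root of the tree acts as the single decision-maker."""
        return self if self.parent is None else self.parent.brain()

    def command(self, action, log):
        """Propagate a high-level command from the brain to every unit."""
        log.append((self.name, action))
        for child in self.children:
            child.command(action, log)


# Three units merge into one body with unit 'a' as the brain.
a, b, c = Unit("a"), Unit("b"), Unit("c")
b.merge_into(a)
c.merge_into(b)

log = []
a.command("move_forward", log)        # one command reaches every unit
print([name for name, _ in log])      # → ['a', 'b', 'c']

c.split()                             # c detaches and becomes its own brain
print(c.brain().name)                 # → 'c'
```

The point of the sketch is that the command interface never changes: whatever shape the body takes, callers only ever talk to the current brain, which is what lets the same high-level logic drive many different morphologies.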
The researchers hope that more flexible robotics systems will be able to solve many of the problems they encounter, relying on vast computing power to mimic the process of evolution that adapts natural bodies to their environments. In the paper from Nature Communications that describes the research, they conclude:
“Our vision is that, in the future, robots will no longer be designed and built for a particular task. Instead, we will design composable robotic units that give robots the flexibility to autonomously adapt their capabilities, shape and size to changing task requirements.”
Currently, the system consists of ten fairly small units that can cooperate with one another. But perhaps, in the future, we won’t need to create twenty specialized robots for twenty different tasks: we’ll only need one.