The MIT team has developed a control algorithm that autonomously teaches a reconfigurable robot to move, stretch, and adapt its shape for specific tasks.
Imagine a shape-shifting, slime-like robot that can squeeze through narrow spaces and could one day be deployed to remove objects from inside the human body. This technology could open up flexible, adaptive solutions in fields ranging from healthcare to industrial systems. The main challenge is developing an effective method to control a machine that is fluid and has no joints.
The MIT team has created a control algorithm that autonomously learns how to move, stretch, and shape a reconfigurable robot to accomplish a specific task, even when that task requires the robot to change its morphology multiple times. The researchers also built a simulator for testing control algorithms on a set of challenging tasks that demand the robot alter its shape.
Their method completed each of the eight complex tasks they evaluated, outperforming other algorithms. For example, in one task the robot adjusted its size and shape to navigate through a narrow pipe and open its lid. While still in early development, this technique shows potential for creating general-purpose robots that can adapt their forms to accomplish diverse tasks.
Controlling dynamic motion
To control a shape-shifting robot, the team developed a reinforcement learning algorithm that uses a coarse-to-fine strategy: rather than actuating each simulated muscle individually, it first learns to control groups of adjacent muscles working together and then progressively refines that control. The algorithm treats the robot's action space like an image. The model generates a 2D action image that covers the robot and its surroundings, while the simulation uses the material-point method, which represents the robot as particles spread over a pixel-like grid. Because nearby action points are strongly correlated, much as neighboring pixels in an image tend to be, this representation encourages coordinated movement across different parts of the robot. The model also learns to predict effective actions from its observations of the environment, improving adaptability and efficiency.
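To make the image-like action idea concrete, here is a minimal sketch of how a coarse 2D action grid could be upsampled to a finer simulation grid, with each material point inheriting the actuation of the cell it falls in. The grid sizes, function names, and nearest-neighbour upsampling are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

# Illustrative sketch only: the policy outputs a coarse "action image",
# which is upsampled to the resolution of the material-point simulation
# grid; each particle then reads its actuation from the cell it occupies.
COARSE = 8   # assumed resolution of the coarse action image
FINE = 64    # assumed resolution of the simulation grid

def upsample(action_coarse: np.ndarray, fine: int = FINE) -> np.ndarray:
    """Nearest-neighbour upsample of a (COARSE, COARSE) action image."""
    reps = fine // action_coarse.shape[0]
    return np.kron(action_coarse, np.ones((reps, reps)))

def actuation_per_particle(action_fine: np.ndarray,
                           positions: np.ndarray) -> np.ndarray:
    """Look up each particle's actuation from the fine action image.

    positions: (N, 2) array of particle coordinates in [0, 1)^2.
    """
    fine = action_fine.shape[0]
    cells = np.clip((positions * fine).astype(int), 0, fine - 1)
    return action_fine[cells[:, 0], cells[:, 1]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coarse_action = rng.uniform(-1.0, 1.0, size=(COARSE, COARSE))  # stand-in for policy output
    particles = rng.uniform(0.0, 1.0, size=(500, 2))               # stand-in material points
    forces = actuation_per_particle(upsample(coarse_action), particles)
    print(forces.shape)  # (500,): one actuation value per material point
```

Because every particle in the same neighbourhood shares a coarse cell, nearby actions are automatically correlated, which is the property the coarse-to-fine strategy exploits.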
Building a simulator
After developing their approach, the researchers created a simulation environment called DittoGym to test it. DittoGym challenges a reconfigurable robot with eight tasks, such as weaving around obstacles or mimicking letters of the alphabet by changing shape. Their algorithm outperformed baseline methods and was the only one to succeed at multistage tasks requiring several shape transformations.
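As a rough illustration of how such a benchmark might be exercised, the sketch below runs a single episode through a Gymnasium-style interface. The environment id "DittoGym/Shape-v0", the availability of a Gymnasium wrapper, and the use of random actions in place of a trained policy are all assumptions made for illustration; consult the benchmark's own documentation for the real names and API.

```python
import gymnasium as gym

def run_episode(env_id: str = "DittoGym/Shape-v0", steps: int = 200) -> float:
    """Run one episode in a (hypothetical) DittoGym environment and return the return."""
    env = gym.make(env_id)              # env_id is an assumed, illustrative name
    obs, info = env.reset(seed=0)
    total_reward = 0.0
    for _ in range(steps):
        action = env.action_space.sample()  # placeholder for a trained coarse-to-fine policy
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    env.close()
    return total_reward
```

In practice, the random action above would be replaced by the learned policy's output, an image-shaped actuation field like the one sketched earlier.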