Watching Boston Dynamics’ Atlas run through training drills, or seeing the latest humanoids from Figure load a washing machine, it’s easy to assume the robot revolution has already arrived.
From the outside, the final hurdle appears to be polishing the AI (artificial intelligence) software so these machines can cope with the complexity of real-world settings. Yet the biggest names in the sector recognise a more fundamental obstacle. In a recent appeal for research collaborations, Sony’s robotics division pointed to a central limitation affecting its own robots.
Sony observed that current humanoid and animal-inspired machines have a "limited number of joints". That restriction creates a "disparity between their movements and those of the subjects they imitate, significantly diminishing their … value". To address this, Sony is calling for new "flexible structural mechanisms": in other words, smarter bodies that can produce the dynamic motion robots still lack.
At the heart of the problem is how many humanoid robots are conceived: built around software that controls everything from a central point. This "brain-first" philosophy tends to yield machines that move in ways that feel physically unnatural.
A trained athlete is graceful and efficient because the body works as an integrated system of compliant joints, flexible spines and spring-like tendons. A humanoid robot, by contrast, is typically a stiff framework of metal and motors, linked by joints that offer only limited degrees of freedom.
Because they must constantly counter their own weight and inertia, robots make millions of minute, energy-intensive adjustments every second simply to stay upright. The consequence is stark: even the most capable humanoids often manage only a few hours of work before their batteries run flat.
Consider a concrete comparison. Tesla’s Optimus draws roughly 500 watts to perform a basic walk. A person can manage a more demanding brisk walk on about 310 watts. That means the robot is drawing around 60 percent more power to complete an easier task, an inefficiency that is hard to ignore.
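As a back-of-the-envelope check, the comparison above can be computed directly (both wattage figures are the approximate estimates quoted in the text, not measurements of my own):

```python
# Rough comparison of walking power draw, using the figures quoted
# above (both are approximate published estimates, not measurements).
robot_power_w = 500   # Tesla Optimus, basic walk (approx.)
human_power_w = 310   # human, brisk walk (approx.)

overhead = (robot_power_w - human_power_w) / human_power_w
print(f"Robot draws {overhead:.0%} more power than a human")
# prints "Robot draws 61% more power than a human"
```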
Diminishing returns
Does that imply the industry is fundamentally heading in the wrong direction? In terms of its underlying approach, yes.
When bodies are mechanically awkward, they require a supercomputer-like brain and a large number of powerful actuators. Those demands add mass and increase energy consumption, which only intensifies the very constraints engineers are trying to overcome. Advances in AI may look astonishing, but they can still deliver diminishing returns.
Take Tesla’s Optimus again: it can fold a t-shirt, which is undoubtedly impressive. But the demo also exposes the robot’s physical shortcomings. A human can fold a t-shirt with minimal visual attention, using touch to sense the fabric and steer the hands.
Optimus, with hands that are comparatively rigid and limited in sensing, leans on computer vision and its AI brain to plan each tiny movement with care. Faced with a crumpled shirt on an untidy bed, it would likely struggle: not because it lacks intelligence in the abstract, but because its body doesn’t have the physical intelligence needed to adapt to the real world’s unpredictability.
Boston Dynamics’ latest all-electric Atlas looks even more extraordinary, moving with a range of motion that can seem almost otherworldly. However, viral acrobatics clips mainly highlight what it can do, not what it cannot.
For example, it would not be able to stride confidently over a mossy rock, because its feet cannot sense and conform to the surface. Nor could it force its way through a thick tangle of branches, because its body cannot give way and then rebound. This helps explain why, after years of progress, such machines are still largely research platforms rather than widely deployed commercial products.
So why aren’t the leading companies already fully committed to this alternative way of thinking?
One plausible explanation is that many of today’s most prominent robotics organisations are, at their core, software and AI businesses. Their strengths lie in solving challenges through computation, and their global supply chains are tuned to that model: high-precision motors, sensors and processors.
By contrast, creating robot bodies with genuine physical intelligence calls for a different industrial base, grounded in advanced materials and biomechanics, and that ecosystem is not yet developed enough to operate at scale.
And when the hardware already looks so impressive, it is tempting to assume the next software update will fix what remains, rather than taking on the expensive, difficult work of redesigning the body and rebuilding the supply chain required to manufacture it.
Autonomous bodies: mechanical intelligence (MI) for humanoid robots
This is precisely the territory of mechanical intelligence (MI), an area being explored by many academic groups worldwide, including my team at London South Bank University. The premise is simple: nature solved the problem of intelligent bodies millions of years ago.
Much of that success rests on a concept known as morphological computation-the idea that physical form can carry out complex “calculations” on its own.
A pine cone demonstrates this elegantly. Its scales open when conditions are dry so seeds can be released, then close again when it becomes damp to protect them. No brain, no sensors, and no motor: just a mechanical response to humidity.
Similarly, the tendons in a running hare’s legs behave like intelligent springs. When a foot strikes the ground, they absorb shock passively, then return stored energy to keep the gait stable and efficient, reducing how much effort the muscles must provide.
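The energy-saving role of such a tendon can be sketched with the standard elastic-energy formula, E = ½kx². The stiffness and compression values below are purely illustrative, chosen only to show the shape of the calculation:

```python
# Minimal model of a tendon as a passive linear spring. The energy
# stored on foot strike is returned at push-off "for free", with no
# motor or muscle effort. Parameter values are illustrative only.
def spring_energy(stiffness_n_per_m: float, compression_m: float) -> float:
    """Elastic energy stored in a linear spring: E = 1/2 * k * x^2."""
    return 0.5 * stiffness_n_per_m * compression_m ** 2

k = 40_000.0   # hypothetical tendon stiffness, N/m
x = 0.02       # 2 cm of compression on foot strike
print(f"{spring_energy(k, x):.1f} J recovered per stride")
# prints "8.0 J recovered per stride"
```

Every joule recovered this way is a joule the actuators (or muscles) never have to supply, which is exactly the efficiency gap the Optimus comparison above illustrates.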
Now consider the human hand. Soft tissue gives it passive intelligence: it naturally moulds around whatever it grips. Our fingertips also function like a smart lubricator, adjusting moisture to achieve an ideal level of friction for different surfaces.
If those two characteristics were engineered into an Optimus hand, the robot could grip objects using a fraction of the force, and therefore energy, it needs today. In that sense, the skin itself would take on part of the computing role.
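The link between fingertip friction and grip effort follows from the simple Coulomb friction condition for a two-finger pinch, 2μF ≥ mg. The friction coefficients below are illustrative assumptions, not measured values for skin or robot grippers:

```python
# Why friction reduces grip effort: to hold a mass in a two-finger
# pinch, the normal force F must satisfy 2 * mu * F >= m * g.
# Friction coefficients here are illustrative, not measured values.
G = 9.81  # gravitational acceleration, m/s^2

def min_grip_force(mass_kg: float, mu: float) -> float:
    """Smallest normal force (N) that prevents the object slipping."""
    return mass_kg * G / (2 * mu)

for mu in (0.3, 0.9):  # low-friction contact vs. skin-like contact
    print(f"mu={mu}: {min_grip_force(0.5, mu):.1f} N to hold 0.5 kg")
```

Tripling the friction coefficient cuts the required grip force to a third, which is why a hand that regulates fingertip friction can hold the same object with far less actuator effort.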
MI focuses on shaping a machine’s physical structure so it can adapt automatically and passively, responding to the environment without requiring active sensors, processors, or extra energy.
Escaping the humanoid trap does not mean giving up on ambitious humanoid forms. Instead, it means building them around this different principle. With a physically intelligent body, the AI brain can concentrate on what it does best: high-level strategy, learning, and richer interaction with the world.
Evidence for this approach is already emerging. For example, robots built with spring-like legs that replicate the energy-storing tendons of a cheetah have demonstrated striking running efficiency.
My own research group is working on hybrid hinges, among other developments. These aim to merge the pinpoint accuracy and strength of a rigid joint with the adaptive, shock-absorbing behaviour of a compliant one. In a humanoid robot, that could translate into a shoulder or knee that behaves more like its human equivalent, enabling multiple degrees of freedom and more complex, life-like movement.
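One simple, hypothetical way to picture such a hybrid hinge is a piecewise torque response: stiff near the commanded angle for precision, and much softer beyond a threshold so that impacts are absorbed rather than transmitted. This is a toy model of the idea, not the actual joint design described above, and all parameter values are invented for illustration:

```python
# Toy model of a hybrid hinge's torque response: stiff for small
# deflections (precision), compliant beyond a threshold (shock
# absorption). All parameters are hypothetical, for illustration only.
def hinge_torque(deflection_rad: float,
                 k_stiff: float = 300.0,     # N*m/rad, rigid regime
                 k_soft: float = 30.0,       # N*m/rad, compliant regime
                 threshold_rad: float = 0.05) -> float:
    if abs(deflection_rad) <= threshold_rad:
        return k_stiff * deflection_rad
    # Beyond the threshold, stiffness drops: the joint "gives way".
    sign = 1.0 if deflection_rad > 0 else -1.0
    return sign * (k_stiff * threshold_rad
                   + k_soft * (abs(deflection_rad) - threshold_rad))

print(hinge_torque(0.02))  # small tracking error: stiff, 6.0 N*m
print(hinge_torque(0.20))  # impact: torque limited to 19.5 N*m
```

The same deflection that would produce 60 N·m in a purely rigid joint is capped at under 20 N·m here, which is the compliant, shock-absorbing behaviour the text describes.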
Robotics’ future is not a contest between hardware and software, but a combination of the two. By adopting MI, we can build a new generation of machines that can finally move beyond the lab and operate confidently in our world.
Hamed Rajabi, Director of Mechanical Intelligence (MI) Research Group, London South Bank University
This article is republished from The Conversation under a Creative Commons licence. Read the original article.