From Rosie, the robot maid in The Jetsons, to C-3PO and R2-D2 in Star Wars, popular culture has long imagined robots with bodies. Yet disembodied artificial intelligence has intrigued us just as much, as with Joshua, the war-gaming computer in WarGames. As AI systems grow more capable, the question of whether they need a physical form to achieve true intelligence has become more pressing, inviting us to reconsider the role of embodiment in artificial intelligence and to ask whether having a body might be the key to a more nuanced kind of intelligence.
The Limits of Disembodied AI
Recent studies have begun to expose the limitations of today’s advanced, yet disembodied, AI systems. In particular, a study from Apple scrutinized “Large Reasoning Models” (LRMs), which are language models designed to generate reasoning steps before arriving at answers. While these systems outperform standard language models on various tasks, they struggle significantly when faced with complex problems. Alarmingly, they don’t just reach a performance plateau—they collapse, even when provided with ample computing resources.
Moreover, these models often fail to reason in a consistent or algorithmic manner. Their “reasoning traces,” the step-by-step work they show while solving a problem, often lack internal consistency, and as challenges grow more intricate the models appear to spend less reasoning effort on them, not more. In short, these systems don’t “think” like humans. “What we are building now are things that take in words and predict the next most likely word,” Nick Frosst, a former Google researcher and co-founder of Cohere, told The New York Times. That is a stark contrast with human cognition.
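To make Frosst’s description concrete, here is a minimal, purely illustrative Python sketch of next-word prediction: a toy bigram counter trained on an invented eleven-word text. Real language models are vastly larger and learn soft probabilities rather than raw counts, but the objective, predicting the most likely next word, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny,
# invented training text, then always predict the most frequent follower.
corpus = "the robot senses the world and the robot moves the arm".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # -> 'robot' (its most frequent continuation)
print(predict_next("robot"))  # -> 'senses' (first of two equally likely options)
```

Nothing in that loop models the world; it only manipulates word statistics, which is precisely the contrast Frosst is drawing.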
Cognition Is More Than Just Computation
For much of the 20th century, the field of artificial intelligence adhered to a paradigm known as GOFAI—or “Good Old-Fashioned Artificial Intelligence”—which treated cognition as symbolic logic. Early AI researchers believed intelligence could be built by processing symbols, akin to how computers execute code. This abstract, symbol-based approach does not necessitate a physical form.
Yet, as symbol-driven AI faltered in handling real-world complexity, experts in psychology, neuroscience, and philosophy began to question this model. They drew insights from studies of animal and plant intelligence: organisms adapt, learn, and respond to complex environmental stimuli through physical interaction, not symbolic logic. In humans, for instance, the enteric nervous system in the gut is often called the “second brain” because it uses many of the same cell types and chemical messengers as the brain to manage digestion, much as an octopus arm senses and reacts on its own.
These findings suggest that adaptable intelligence may be inherently distributed throughout an organism, not confined to a brain separated from the physical world. This is the essence of embodied cognition, where acting, sensing, and thinking are integrated processes. “Brains have always developed in the context of a body that interacts with the world to survive,” Rolf Pfeifer, Director of the University of Zurich’s Artificial Intelligence Laboratory, told EMBO Reports. “There is no algorithmic ether in which brains arise.”
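One rough way to picture such distributed intelligence in code is a set of controller-free reflexes: each limb reacts only to its own local sensing, and nothing coordinates them centrally. The Python sketch below is purely illustrative; the class, thresholds, and sensor readings are all invented for this example.

```python
# Purely illustrative sketch of distributed control, loosely inspired by the
# octopus: each limb runs its own local sense-act loop, and no central
# controller plans any motion.

class Limb:
    def __init__(self, name):
        self.name = name
        self.curl = 0.0  # 0 = fully relaxed, 1 = fully curled

    def step(self, local_pressure):
        # Local reflex: curl toward contact, relax when contact is lost.
        if local_pressure > 0.5:
            self.curl = min(1.0, self.curl + 0.2)
        else:
            self.curl = max(0.0, self.curl - 0.1)
        return self.curl

limbs = [Limb(f"arm-{i}") for i in range(8)]
contacts = [0.9, 0.0, 0.6, 0.0, 0.0, 0.8, 0.0, 0.0]  # one sensor per limb

# Each limb consults only its own sensor; nothing aggregates a global state.
for limb, pressure in zip(limbs, contacts):
    print(limb.name, round(limb.step(pressure), 2))
```

However crude, the point stands: coherent, adaptive behavior can emerge without any module that sees the whole body at once.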
Embodied Intelligence: A Different Kind of Thinking
To advance AI, we may need to develop smarter bodies, and Cecilia Laschi, a pioneer in soft robotics, believes that smarter means softer. After years of working with rigid humanoid robots in Japan, she shifted her focus to soft-bodied machines inspired by the octopus—an animal with no skeleton whose limbs operate independently.
“If you have a human robot walking, you control all different movements,” she explains in an interview with New Atlas. “If there is something different on the terrain, you have to reprogram a little bit.” Animals, by contrast, don’t need to rethink their movements. “Our knee is compliant,” she adds. “We compensate for uneven ground mechanically, without using the brain.” This is embodied intelligence in action: certain cognitive functions are offloaded to the body itself.
From an engineering perspective, embodied intelligence offers distinct advantages. Offloading perception, control, and decision-making to a robot’s physical structure reduces computational demands on its central processing unit, enabling machines to function more effectively in unpredictable environments. A May special issue of Science Robotics defines this as motor control that is “partially shaped mechanically by external forces acting on the body.”
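A tiny simulation makes that definition concrete. In the Python sketch below, assuming an idealized spring-damper joint with arbitrary illustrative parameters, a sudden 5 cm rise in terrain is absorbed passively, with no sensing or reprogramming anywhere in the loop.

```python
# Passive compliance sketch: a spring-damper joint meets a sudden 5 cm rise in
# terrain and settles onto it with no sensing, planning, or reprogramming.
# Stiffness, damping, and mass are arbitrary illustrative values.

k, c, m = 120.0, 8.0, 1.0   # spring stiffness, damping coefficient, mass
dt = 0.001                  # integration time step (s)
x, v = 0.0, 0.0             # joint deflection (m) and its velocity

for step in range(3000):
    t = step * dt
    bump = 0.05 if t > 1.0 else 0.0          # terrain rises 5 cm at t = 1 s
    a = (k * (bump - x) - c * v) / m         # passive spring-damper dynamics
    v += a * dt
    x += v * dt
    if step % 500 == 0:
        print(f"t={t:.1f}s  deflection={x:.3f} m")
```

The deflection settles onto the new terrain height purely through mechanics, which is the sense in which motor control can be “partially shaped mechanically by external forces acting on the body.”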
Flesh and Feedback: How to Make Materials Think for Themselves
To realize the potential of soft robotics, researchers must abandon programming for every conceivable scenario and instead design machines capable of sensing and reacting autonomously. This has led to the development of autonomous physical intelligence (API). Ximin He, Associate Professor of Materials Science and Engineering at UCLA, has pioneered work in this area by creating soft materials—such as responsive gels and polymers—that not only react to stimuli but also regulate their own movements using built-in feedback mechanisms.
“We’ve been working on creating more decision-making ability at the material level,” He tells New Atlas. “Materials that change shape in response to a stimulus can also ‘decide’ how to modulate that stimulus based on how they deform—correcting or adjusting their next motion.” In 2018, He’s lab demonstrated this with a gel capable of self-regulating its movement. Since then, they have expanded this principle to a range of soft materials, including liquid crystal elastomers that function efficiently in air.
The key to API lies in nonlinear time-lag feedback. Traditional robots rely on a control system to analyze sensory data and dictate actions; He’s approach embeds that logic directly in the materials themselves. “In robotics, you need sensing and actuation—but also decision-making between them,” He explains. “That’s what we’re embedding physically, using feedback loops.”
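To see why a time lag plus a nonlinearity can produce self-sustained motion, consider the generic Python sketch below. It is not He’s lab’s actual model; it is a textbook-style delayed-feedback oscillator with invented constants, in which the state feeds back on itself through a saturating function after a delay, perpetually overshoots, and settles into rhythmic oscillation with no external controller.

```python
import math

# Generic delayed-feedback oscillator (not the UCLA group's actual model):
# the state x feeds back on itself through a saturating nonlinearity after a
# delay tau. Because the feedback reacts to where the system WAS rather than
# where it IS, it perpetually overshoots, producing sustained oscillation.
# All constants are invented for illustration.

tau, dt, gain = 1.0, 0.01, 10.0
delay_steps = int(tau / dt)
history = [0.1] * delay_steps      # state over the initial delay window
x = 0.1

for step in range(2000):
    x_delayed = history[step % delay_steps]   # state roughly tau seconds ago
    dx = -math.tanh(gain * x_delayed)         # saturating, delayed feedback
    x += dx * dt
    history[step % delay_steps] = x           # store for reuse tau from now
    if step % 200 == 0:
        print(f"t={step * dt:4.1f}s  x={x:+.2f}")
```

Swap the abstract state for a gel’s curvature and the delay for the time a stimulus takes to propagate through the material, and you have the flavor of autonomous physical intelligence.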
As we look to the future, soft robotics holds immense promise: surgical tools that examine and react to delicate tissue at the same time, or rehabilitation devices that adapt dynamically to a patient’s needs. To move from AI toward artificial general intelligence (AGI), machines may indeed require bodies, particularly soft and adaptable ones. Humans learn by moving, touching, and interacting with a chaotic world; today’s AIs struggle with exactly that complexity. Could soft robotics be the key to unlocking a new kind of intelligence? And how might machines with their own sensory experiences reshape our understanding of intelligence and our shared world?
Fascinating! But does that mean Alexa is getting legs soon? 😄
I’m not convinced that giving AI a body will solve all of its problems.
Thanks for this interesting article, it gives me plenty to think about! 🧠
Why do disembodied AIs fail at complex problems?
The comparison with octopuses is intriguing. Can we really take inspiration from animals to build robots?
Is it ethical to give an AI a body?
I never thought a body could be necessary for intelligence. What a revolutionary idea!
Maybe the next step is giving robots consciousness? 🤔
The concept of a “second brain” in the gut is fascinating. It could change how we think about intelligence!
Would robots with bodies still count as artificial intelligence, or as something new?
Congratulations on this article, it really opens up new perspectives on the future of AI!