Moravec’s Paradox: What Self-Driving and VR Can Teach Us

Governance and risk management strategies for technologies such as self-driving and AR/VR can lay solid groundwork for a world where Moravec’s paradox is resolved and AI systems become more capable.

Smita Rajmohan, Senior Counsel, Autodesk, Inc.

May 17, 2024

Moravec's paradox has long been a puzzling phenomenon in the field of artificial intelligence. Named after AI researcher Hans Moravec and popularized by researchers such as Rodney Brooks, the paradox is that tasks that are easy for humans, such as walking or recognizing faces, are extremely difficult for AI systems. Conversely, tasks that are difficult for humans, such as complex mathematical calculations, are often routine for AI. 

Yet residents of San Francisco will tell you that they have eagerly joined a months-long waitlist for a coveted ride in one of the more popular self-driving cars (Waymo). Self-driving is one way in which we have sought to resolve Moravec's paradox -- imitating a human activity that a 16-year-old can figure out in about 20 hours. 

There are many lessons we can likely learn from self-driving about building applications of artificial intelligence that are useful beyond natural language processing. Many experts, including Yann LeCun, have remarked on how even video generation is challenging for AI in its current state. Having ridden in a self-driving car myself, I must admit I was impressed by how it maneuvered San Francisco’s tight alleys and bustling crowds. 

Self-driving cars require cameras, LiDAR, radar, and other sensors to work together, which is no mean feat. They also need to mimic human risk assessment and intuition to predict the behavior of pedestrians and other vehicles on the road. To drive safely, cars need to be exposed to many different scenarios and data points; they even need to adapt to changing weather. This adaptability, intuition, and common sense are key to making AI-powered robotics useful in high-risk situations. 

Virtual reality and augmented reality are another branch of technology helping us make advances toward this goal. AR and VR open up simulated environments in which interactions between objects, humans, and AI can be studied at low risk: for example, we can model how unmanned delivery drones would behave in a New York suburb, practice complex surgeries, or make predictions about space exploration. This in turn allows for real-time feedback that can help train AI to learn from user behavior.  

Alison Gopnik’s paper does a great job of explaining why large language models must maintain the elements of curiosity and exploration that are inherent characteristics of young children. Perhaps learning how humans learn is a good step toward building AI that can be truly creative and think like us, conjure up counterfactual scenarios, and move beyond the basic language and voice tasks that most people use generative AI for. 

The good news is that we can prepare for the liability and risks that will arise when Moravec’s paradox is resolved; the time to do our homework is now. There are numerous product liability issues with hardware devices generally, but when such hardware makes decisions in a truly automated fashion, without human feedback or input, the question of liability for harms gets murky and complicated.  

Data privacy issues are already on the radar of many regulators and policymakers, but closer attention may be needed for robotics and other complex AI agents, where training on large volumes of appropriate data is not only beneficial for the accuracy and performance of the AI model but also contributes greatly to product safety. Liability for harms caused in VR environments can also be confusing. Figuring out informed-consent protocols and the boundaries and standards for human-machine interaction will likely be a necessary project for many product and privacy professionals.  

Self-driving cars and other robotic systems are prone to hacking and cybersecurity vulnerabilities. Preparing for attacks from bad actors and building resiliency into products is a good step toward future-proofing for more complex, fully autonomous technology. The insurance industry will likely also need to consider how to offer products that accurately capture the risk and probability of harm from AI agents, particularly in the healthcare, climate risk, and financial sectors.  

Lastly, regulation will need to keep up with the transformation from AI-assisted technology to fully autonomous technology. Risk frameworks should account for user behavior and socio-cultural context, especially for technologies geared toward more vulnerable communities, such as children or the differently abled. 

Ensuring governance and risk management strategies for technologies such as self-driving and AR/VR can lay solid groundwork for a world where Moravec’s paradox is resolved and AI systems become more capable. In the meantime, here’s hoping AI can one day do my dishes! 

About the Author

Smita Rajmohan

Senior Counsel, Autodesk, Inc.

Smita Rajmohan is a technology and AI attorney in Silicon Valley focused on responsible AI. She sits on the AI Policy Committee of IEEE USA and represents IEEE at the NIST US AI Safety Consortium. 
