A conversation with Daniela Rus

When AI meets robotics

Daniela Rus, Director,
Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT

Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Daniela’s research interests are in robotics, mobile computing, and data science. She is a Class of 2002 MacArthur Fellow; a fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the Institute of Electrical and Electronics Engineers (IEEE); and a member of the National Academy of Engineering (NAE) and the American Academy of Arts and Sciences. She earned her PhD in Computer Science from Cornell University.


What inspired you to pursue a career in robotics and artificial intelligence (AI)?

I’ve always been drawn to the intersection of mathematics and computer science, but what really inspired me was the idea of computation that interacts with the physical world. Systems that are not just abstract or digital, but grounded in the messiness of materials, motion, and uncertainty. Unlike the clean, discrete world of traditional computation, the real world is continuous, noisy, and unpredictable. I found that challenge exciting and compelling.

Robotics and AI offered a way to explore that tension: to work on algorithms and models that must adapt, learn, and make decisions in the face of ambiguity. I liked that it was “mathy” but also physical. You could watch the output of your code translated into movement, interaction, or behavior.

A big part of my inspiration also came from science fiction. I have always been fascinated by the idea of intelligent machines as collaborators, explorers, and extensions of human capability. That evolved into a curiosity about how we might build systems that reason, act, and evolve in the real world.


How do you envision robotics transforming industry?

We’re entering a phase where robotics will move far beyond structured factory floors. We’ll see a shift from rigid, pre-programmed systems to intelligent, reconfigurable machines that can operate in dynamic environments, whether that’s a warehouse, a farm, a hospital, a home, or even a disaster zone. This will fundamentally reshape how we think about automation as a tool for augmenting and extending human capability.

In manufacturing and logistics, robots will no longer be limited to repetitive tasks. They’ll collaborate with humans, adapt to changes in workflows, and learn new skills without reprogramming. In healthcare, we’ll see robots that can assist with surgery, rehabilitation, or elder care. These robots will be responsive to the physical and emotional needs of individuals. In agriculture and construction, “soft” and autonomous systems will navigate off-road unstructured terrain, making decisions in real time based on sensor data and environmental cues.


Your work spans robotics, mobile computing, and data science. Where do these fields converge, and what new possibilities does this create?

These fields are converging in exciting and transformative ways. Robotics provides embodiment, meaning machines that sense and act in the physical world. Mobile computing brings connectivity, responsiveness, and access to distributed resources, enabling robots to operate flexibly in real time and in diverse environments. And data science adds the layer of intelligence, with algorithms capable of extracting patterns from rich sensor data, enabling learning from experience, and supporting predictive decision-making.

At their intersection, we’re seeing the rise of physically grounded intelligent systems, which are robots that can perform tasks and learn from the world, adapt to new contexts, and collaborate with humans and other machines. For example, mobile robots can now continuously collect environmental data, learn optimal behaviors from it, and update their policies on the fly, all while staying connected to cloud platforms or edge networks that support coordination and insight-sharing.
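To make that pattern concrete, below is a minimal sketch of the collect-learn-update loop in Python. It assumes a toy linear model trained with an online least-mean-squares rule as a stand-in for more sophisticated policy learning; the sensing and outcome functions are hypothetical placeholders, not the API of any particular robot platform.

```python
import numpy as np

# Minimal sketch of the continuous collect-learn-update pattern described above.
# An online least-mean-squares (LMS) rule stands in for more sophisticated policy
# learning; read_sensors() and observe_outcome() are hypothetical placeholders.

rng = np.random.default_rng(0)
w = np.zeros(3)    # parameters of a tiny linear model over 3 sensor features
lr = 0.01          # step size for on-the-fly updates

def read_sensors():
    # Stand-in for onboard sensing (e.g., range, speed, heading error).
    return rng.normal(size=3)

def observe_outcome(x):
    # Stand-in for the quantity the robot observes after acting (e.g., wheel slip).
    return 2.0 * x[0] - 1.0 * x[2] + rng.normal(scale=0.1)

for step in range(2000):
    x = read_sensors()                      # collect environmental data
    prediction = w @ x                      # use the current model to act or plan
    outcome = observe_outcome(x)            # ground truth arrives after acting
    w += lr * (outcome - prediction) * x    # update the model on the fly (LMS rule)

print("learned parameters:", np.round(w, 2))  # converges toward [2, 0, -1]
```

In a deployed system, the same loop would also share summaries with cloud or edge services so that fleets of robots can coordinate and learn from each other's experience.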

This convergence opens the door to new capabilities, from self-reconfiguring soft robots that adapt their forms and functions in real time, to autonomous systems that can operate in remote or unpredictable environments with minimal oversight. It’s about building systems that are adaptive and networked, and that understand the world they move through.


You wrote a book on how robots and humans can work together. How do you foresee this collaboration evolving?

Increasingly, robots are teammates rather than tools. As AI becomes more capable, robots will continue to take on repetitive physical tasks and assist with complex activities including decision-making, adaptation, and perception in dynamic environments. This evolution means that collaboration is about designing systems that respond to human intent, complement human strengths, and adapt to real-world complexity.

As AI meets robotics, I foresee this collaboration becoming more fluid and context-aware. Robots will learn from human behavior, understand social and environmental cues, and adjust their assistance accordingly, whether it’s a warehouse robot coordinating with a human worker, a surgical assistant anticipating a clinician’s next move, or a home robot learning a daily routine.

The key is building systems that are trustworthy, interpretable, and adaptable, so humans can rely on them, understand their limitations, and work alongside them with confidence. In the long term, I envision teams of humans and robots learning together, each bringing unique capabilities: humans with creativity and judgment, robots with endurance, precision, and data-driven insight. The result will be collaborative intelligence, where together, people and robots will be able to do more than people alone or robots alone.


How do you define “physical AI,” and what are the primary hurdles to its widespread implementation?

Physical AI refers to using AI’s capability to understand text, images, and other online data to make real-world machines smarter. It integrates AI into machines that interact with the physical world: robots that go beyond executing pre-programmed motions to adapt and learn in real time. It goes beyond digital intelligence by embedding learning, decision-making, and reasoning into systems that must deal with uncertainty, friction, noise, time, space, physics, and constraints. In other words, the messiness of the physical world.

We are developing the foundations of physical AI at the intersection of machine learning [ML], control theory, materials science, and embodied interaction. It’s about teaching machines to respond intelligently and adaptively, learning and improving over time.

The hurdles to implementing physical AI are significant. First, data is harder to acquire in the physical world: collecting it is expensive and time-consuming, and the data that exists is often incomplete or simply unavailable. Second, in the physical world we cannot tolerate mistakes and hallucinations; safety and reliability are harder to guarantee when AI directly affects physical motion. Third, most current AI architectures are not well suited to real-time, resource-constrained settings, nor to tasks that require spatio-temporal correlations. Also, co-designing intelligence with mechanical form, so that learning is distributed across sensors, actuators, and materials, is still in its early days.

These are some of the key challenges we need to tackle to build intelligent systems that are resilient, trustworthy, and grounded in the world we live in.


Can you help us understand liquid neural networks [LNNs]?

LNNs are a new class of AI model designed to be adaptive, compact, and interpretable, especially in dynamic environments such as robotics and mobile computing. Unlike traditional neural networks, which use fixed architectures and activation functions, liquid networks change their internal dynamics in response to inputs over time, much like biological neurons.

At their core, LNNs are continuous-time models, inspired by the differential equations that govern the nervous systems of small species such as the nematode C. elegans. This means they process data in a way that naturally adapts to variable inputs, making them especially effective for real-time, sensor-driven applications such as autonomous vehicles, drones, and wearable systems.

A key advantage of LNNs is their efficiency. They can often achieve strong performance with fewer parameters, use much less energy, and offer significantly faster inference than traditional networks such as transformers. They are also more interpretable, because their mathematical structure makes it easier to trace how inputs evolve through the system.
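As an illustration, here is a minimal sketch of a liquid time-constant style cell in Python. It assumes a simplified form of the underlying differential equation and a fused Euler-style update; the weight shapes, initialisation, and step size are illustrative choices rather than the published implementation.

```python
import numpy as np

# Illustrative liquid time-constant (LTC) style cell. The state roughly obeys
#   dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A
# so each neuron's effective time constant varies with its input: the "liquid" part.
# All shapes, initialisations, and the solver step are simplified assumptions.

class LTCCell:
    def __init__(self, n_inputs, n_neurons, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_neurons, n_inputs))    # input weights
        self.W_rec = rng.normal(0.0, 0.5, (n_neurons, n_neurons))  # recurrent weights
        self.b = np.zeros(n_neurons)                               # gating bias
        self.A = rng.normal(0.0, 0.5, n_neurons)                   # per-neuron targets
        self.tau = tau                                             # base time constant

    def step(self, x, u, dt=0.05):
        # Bounded gating nonlinearity driven by the current state and input.
        f = 1.0 / (1.0 + np.exp(-(self.W_rec @ x + self.W_in @ u + self.b)))
        # Fused (semi-implicit) Euler step of the ODE above.
        return (x + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))

# Usage: drive an 8-neuron cell with a 2-dimensional sensor stream.
cell = LTCCell(n_inputs=2, n_neurons=8)
x = np.zeros(8)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])  # stand-in for sensor readings
    x = cell.step(x, u)
print("final hidden state:", np.round(x, 3))
```

Because the gating term modulates each neuron’s decay rate, the same small set of parameters can respond differently to fast and slow input dynamics, which is one reason these models can stay compact.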

LNNs are a step toward trustworthy, adaptable AI, meaning systems that are not only powerful but also responsive to the real world, and more aligned with how humans and animals learn and react in dynamic environments.



What are the biggest misconceptions about AI and robotics today?

One major misconception is that AI and robotics are the same. While they’re deeply connected, they serve different functions. AI is about decision-making, learning, and pattern recognition, while robotics is about physical action and interaction. Real-world systems often combine the two, but it’s important to understand their distinct roles – and that progress in one doesn’t automatically solve challenges in the other.

Another common myth is that AI-equipped robots are close to human-level intelligence or autonomy. Today’s systems are highly specialised. A robot that performs well in a warehouse may fail completely in a home. Generalising across environments, tasks, and social contexts remains an open research frontier.

There’s also a tendency to assume that AI will replace humans outright, when in fact the most powerful systems are designed to augment human capabilities, not substitute them. Thinking in terms of “co-bots” or assistive intelligence better reflects the direction in which the field is heading.

People often overlook how much support in the form of infrastructure, data, and human supervision AI and robotics still require. Behind every smooth demo is a complex support system. Making these technologies scalable, safe, and trustworthy “in the wild” is still a work in progress.


How do you think we can balance the growing energy demands of AI with sustainable energy usage?

Balancing AI’s rapid growth with sustainable energy use requires technical innovation and system-level thinking. As AI models, especially foundation models and deep learning systems, become larger and more capable, their training and deployment can consume vast amounts of energy. The challenge is to ensure that AI’s benefits don’t come at the cost of environmental harm.

One key strategy is to develop more efficient AI architectures. For example, LNNs [see above] offer strong performance with fewer parameters and lower compute needs. They are well suited to real-time applications on edge devices.

Another strategy is to optimise AI models after training. Sparsity, quantisation, and model distillation are active areas of research that aim to reduce the computational footprint without sacrificing accuracy.
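As a concrete example of one such technique, below is a minimal sketch of symmetric post-training int8 weight quantisation in Python; the per-tensor scaling and matrix size are illustrative assumptions, not a production recipe.

```python
import numpy as np

# Minimal sketch of symmetric, per-tensor post-training int8 quantisation.
# Real toolchains typically add per-channel scales, calibration data, and
# quantisation-aware fine-tuning; this only shows the core idea.

def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0                     # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, (256, 256)).astype(np.float32)   # stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("storage: float32", w.nbytes, "bytes -> int8", q.nbytes, "bytes")  # 4x smaller
print("max absolute reconstruction error:", float(np.abs(w - w_hat).max()))
```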

We also need to move intelligence closer to the data. Running compact AI models like LNNs on edge devices such as phones, sensors, and robots can dramatically reduce energy and bandwidth requirements. This is especially important for physical applications, where low-latency, on-device intelligence is both more sustainable and more responsive.

Sustainability must become a core metric of AI system design, alongside performance and accuracy. This includes optimising data centre operations, using renewable energy sources, and incorporating lifecycle assessments into AI development. By aligning AI innovation with environmental stewardship, we can ensure that intelligent systems benefit both people and the planet.


How can we design robotics and AI to serve humanity equitably?

On the technical side, achieving equity requires designing systems that are robust, adaptable, and resource-aware. This includes developing AI models that perform reliably across diverse environments, user groups, and data distributions, not just in ideal or well-resourced settings. It also means creating algorithms that can operate on low-power edge devices, enabling broad deployments and uses without relying on expensive infrastructure. Techniques like few-shot learning, transfer learning, and interpretable models are essential to building systems that can be customised and audited locally. In robotics, this means building hardware and control systems that are modular, maintainable, and affordable, making real-world capabilities accessible outside of elite labs or industrial applications. In short, equity must be engineered into the foundations, from datasets and models to physical components and user interfaces.


In your view, what role should policymakers, researchers, and the public play in shaping the responsible development and deployment of intelligent machines?

Robotics and AI must be intentionally designed with inclusion, access, and long-term social impact in mind, not just technical performance or profit. Teams with interdisciplinary expertise are more likely to build systems that reflect a broader range of human experiences and applications.

Equitable design also means thinking beyond “high-end” applications. AI and robotics should not be limited to autonomous vehicles or surgical assistants; they should also address public-interest challenges: access to education, elder care, environmental resilience, disaster response, and infrastructure repair. These are areas where commercial incentives may be negligible, but where there is great social value.

From a systems perspective, we need transparent, accountable, and explainable AI, especially when it is embedded in physical systems. This allows users to understand, trust, and challenge outcomes. At the same time, policies around data governance, labor impacts, and access to AI infrastructure are crucial to ensuring that benefits are felt widely.


Are you worried about machines gaining general intelligence? Are we anywhere close to it?

I think it’s important to distinguish between general intelligence as we imagine it in science fiction – machines that can do everything a human can and more – and what today’s AI systems are actually capable of. Despite rapid progress, we are not close to achieving true artificial general intelligence. Even the most powerful current models are highly specialised. They excel at pattern recognition, language modeling, or planning in well-defined environments. But they lack common sense, contextual understanding, and the flexible reasoning that even a child demonstrates.

That said, I don’t think we should be complacent. The systems we do have are increasingly influential, and they can already behave in ways that are surprising, opaque, and sometimes risky when deployed at scale, especially in high-stakes applications like healthcare, law enforcement, or autonomous systems. So, while I’m not worried about machines “waking up,” I am deeply invested in how we design, deploy, and govern increasingly capable AI.


What is one big breakthrough you hope to see realised in AI and robotics over the next five years?

One breakthrough I hope to see is the development of general-purpose, physically adaptive robots that can learn new skills and reconfigure their morphology to perform entirely different tasks without needing to be rebuilt or retrained from scratch. Imagine a system that can manipulate delicate surgical tools one day, then traverse rubble in a disaster zone the next, by adjusting both its form and function in response to its environment and goals.

Fulfilling this vision would require advances in embodied learning, modular hardware, and adaptive control. But it would also require a broader shift, from narrowly optimised machines to self-improving systems that operate safely and robustly in the real world. If we can achieve that, with robots that grow more capable through experience, grounded in physical reasoning and responsive to human needs, we’ll unlock applications we haven’t yet imagined.


What advice would you offer to young researchers and innovators who are passionate about pursuing careers in AI and robotics?

First, stay curious and keep asking foundational questions, not just about how systems work, but about why we build them, who they serve, and what impact they have. AI and robotics are interdisciplinary by nature, so embrace the messiness: combine math with mechanics, data with ethics, and theory with hands-on experimentation. The most exciting innovations often come from connecting ideas across fields.

Second, don’t be discouraged by how fast the field is moving. There’s a lot of noise, but there’s also room to go deep on problems that matter. Find a research question or application area that resonates with your values and commit to learning the fundamentals (e.g., algorithms, systems, physics) before chasing trends.

Third, collaboration is important. Whether you’re working on soft robotics or scalable ML infrastructure, real-world systems are built in teams. Seek out mentors and peers who challenge and support you. Be generous with your ideas and open to feedback.

Finally, remember that your voice and perspective matter. The future of AI and robotics isn’t fixed. It’s still being written. The most meaningful contributions won’t just be technical; they’ll be thoughtful, intentional, and centered on positive impact.


What advice would you give to young women who are aspiring to a career in AI and robotics?

First and foremost: you belong here. AI and robotics are reshaping the world, and it’s vital that the people designing these systems reflect the communities they serve. Your perspective is welcome and essential.

Don’t wait to feel “fully ready” before you jump in. These fields move fast and can feel overwhelming, but no one starts out knowing everything. Focus on building strong fundamentals in math and computing, and follow your curiosity, whether it’s in ML, hardware design, ethics, or applications like healthcare or climate. Interdisciplinary thinking is a strength, not a detour.

Surround yourself with mentors, peers, and communities that support you. There will be moments of self-doubt. What matters is persistence, passion, and finding people who believe in your growth. Don’t be afraid to ask questions, take up space, and contribute your ideas.

Remember that success isn’t just about what you build, but why you’re building it. Let your values and imagination guide you.

