
A case for context awareness in AI

Robert Engels
04 Apr 2022

There have been catastrophic effects of AI use, from road crashes involving self-driving cars to harm amplified on social media and failures in critical infrastructure, making some ask: can we trust AI in production?

And what can we do to make AI more robust while it operates in dynamic surroundings? Most importantly, how can we make AI understand the real world?

Does applied AI have the necessary insight to tackle even the slightest (unlearned or unseen) change in the context of the world surrounding it? In such discussions, AI is often equated with deep-learning models. Current deep-learning methods rely heavily on the presumption that the data they learn from is “independent and identically distributed,” which has serious implications for the robustness and transferability of models. Despite very good results on classification, regression, and pattern-encoding tasks, current deep-learning methods fail to tackle the difficult and open problem of generalization and abstraction across problems. Both are prerequisites for general learning and explanation capabilities.

There is great optimism that deep-learning algorithms, a specific type of neural network, will be able to close in on “real AI” if only they are developed further and scaled up enough (Yoshua Bengio, 2018). Others feel that current AI approaches merely encode a general distribution into a deep-learning network’s parameters, however cleverly, and regard them as insufficient for acting independently in the real world. So where are the real intelligent behaviors: the ability to recognize problems and plan to solve them, and to understand physics, logic, causality, and analogy?

“THERE IS A NEED FOR CONTEXTUAL KNOWLEDGE IN ORDER TO MAKE APPLIED AI MODELS TRUSTWORTHY AND ROBUST IN CHANGING ENVIRONMENTS.”

Understanding the real world

What is needed is a better machine understanding of context, meaning the surrounding world and its inner workings. Only then can machines capture, interpret, and act upon previously unseen situations. This will require the following:

  • Understanding of logical constructs such as causality (as opposed to correlation). If it rains, you put on a raincoat, but putting on a raincoat does not stop the rain. Current ML struggles to learn causality; being able to represent and model it would go a long way toward better explanations and understanding of the decisions ML models make (see the sketch after this list).
  • The ability to tackle counterfactuals, such as “if this crane had no counterweight, it would topple over.”
  • Transferability of learned “knowledge” across or between domains; current transfer learning only works on small tasks with a large domain overlap, which means similar tasks in similar domains.
  • The ability to withstand adversarial attacks. Even small changes to the input data (deliberate or not) can make the results of connectionist models highly unreliable. Abstraction mechanisms might be a solution to this issue.
  • Reasoning on possible outcomes, identifying problematic ones, and then either
    a) planning to avoid them while still reaching the goal,
    or b) if that is not possible, finding alternative goals and trying to reach those.
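
To make the first point concrete, here is a minimal sketch in Python; the variables, probabilities, and thresholds are purely illustrative. In the observational data, rain and raincoats are strongly correlated, yet intervening on the effect (forcing everyone into a raincoat) leaves the cause untouched, which is exactly the distinction a purely correlational learner misses.

import random

random.seed(0)

def observe(n=10_000):
    """Observational world: people put on a raincoat *because* it rains."""
    data = []
    for _ in range(n):
        rain = random.random() < 0.3
        raincoat = rain if random.random() < 0.9 else not rain  # noisy response to the rain
        data.append((rain, raincoat))
    return data

def intervene_raincoat(n=10_000):
    """Interventional world: raincoats are forced on, independent of the weather."""
    data = []
    for _ in range(n):
        rain = random.random() < 0.3  # the cause is untouched by the intervention
        raincoat = True               # do(raincoat = True)
        data.append((rain, raincoat))
    return data

def p_rain_given_raincoat(data):
    """Estimate P(rain) among the cases where a raincoat is worn."""
    with_coat = [rain for rain, coat in data if coat]
    return sum(with_coat) / len(with_coat)

print("P(rain | raincoat observed):", round(p_rain_given_raincoat(observe()), 2))        # high: correlation
print("P(rain | do(raincoat)):", round(p_rain_given_raincoat(intervene_raincoat()), 2))  # ~0.3: just the base rate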

In the first edition of this review, we already made the case for extending the context in which AI models operate, using a specific type of model that can benefit from domain knowledge in the form of knowledge graphs. From the above, it follows that knowledge alone will probably not be enough; higher-level abstraction and reasoning capabilities are also needed. Current work therefore aims at combining “connectionist” approaches with logical theory.

  1. Some connectionists feel that the capability for abstraction will follow automatically from scaling up networks, adding computing power, and using more data. But it seems that deep-learning models cannot abstract or generalize beyond learning general distributions: the output will at most be a better encoding, still without symbolic abstraction, causality, or reasoning capabilities.
  2. Symbolic AI advocates concepts as abstracted symbols, logic, and reasoning. Symbolic methods allow for learning and understanding human-made social constructs like law, jurisprudence, country, state, religion, and culture. Could connectionist methods be “symbolized” to provide the capabilities mentioned above?
  3. Several innovative directions try to merge both into hybrid approaches consisting of multiple layers or capabilities (a sketch of how such layers might interact follows this list).
  • Intuition layer: Let deep-learning algorithms take care of the low-level modeling of intuition or tacit skills, the kind shown by people who have performed a task for a long time, like a good welder who can hardly explain how she makes the perfect weld after years of experience.
  • Rationality layer: Skill-based learning in which explicit learning plays a role, conveying rules and symbols to a “learner,” as when a child is told by her mother not to get too close to the edge. A single example, not even experienced first-hand, can be enough to learn for life. Assimilating such explicit knowledge can steer and guide execution cycles which, “through acting,” can create “tacit skills” in a different execution domain than the original one.
  • Logical layer: Logic to represent causality and analogy, and to provide explanations.
  • Planning and problem-solving layer: A problem is understood, a final goal is defined, and the problem is divided into sub-domains/problems, leading to a chain of ordered tasks to be executed, monitored (with intuition and rationality), and adapted.
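
The following sketch shows how such layers might interact; the scene, confidence scores, rule, and planner are all hypothetical stand-ins rather than an implementation of any particular system. A learned “intuition” layer produces soft scores, an explicit rule in the rationality/logical layer can veto a proposed action and explain why, and a toy planning layer orders the remaining steps toward the goal.

def intuition_layer(scene):
    """Stand-in for a learned model: returns soft confidence scores per concept."""
    return {"pedestrian": 0.92, "green_light": 0.85} if scene == "crossing" else {}

# One explicit, human-conveyed rule: a single statement, no training examples needed.
RULES = [("pedestrian", "stop")]  # if a pedestrian is detected, the only safe action is to stop

def rationality_layer(scores, proposed_action, threshold=0.5):
    """Check the proposed action against the symbolic rules and explain any override."""
    for concept, required_action in RULES:
        if scores.get(concept, 0.0) >= threshold and proposed_action != required_action:
            return required_action, f"rule fired: {concept} detected, so the action must be '{required_action}'"
    return proposed_action, "no rule violated"

def planning_layer(goal, action):
    """Toy planner: order the remaining tasks given the (possibly corrected) action."""
    steps = [action, "wait for the crossing to clear"] if action == "stop" else [action]
    return steps + [f"proceed to {goal}"]

scores = intuition_layer("crossing")
action, explanation = rationality_layer(scores, proposed_action="drive")
print(explanation)                            # rule fired: pedestrian detected, so the action must be 'stop'
print(planning_layer("destination", action))  # ['stop', 'wait for the crossing to clear', 'proceed to destination']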

In general, ML models that incorporate or learn structural knowledge of an environment have been shown to be more efficient and to generalize better. Great example applications are not difficult to find, with the Neuro-Symbolic AI work by the MIT-IBM Watson AI Lab as a good demonstration of how hybrid approaches (NSQA in this case) can learn in a connectionist way while preserving and utilizing the benefits of first-order logic for enhanced query answering in knowledge-intensive domains like medicine. The NSQA system allows for complex query answering, learns as it goes, and understands relations and causality while being able to explain its results.
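
The sketch below is not NSQA itself, but it illustrates the underlying idea with a handful of made-up medical-style facts: answers are derived by following explicit relations and one simple rule, so every result comes with an explanation path instead of an opaque score.

# Toy knowledge graph as (subject, relation, object) triples; all facts are illustrative.
TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("headache", "symptom_of", "migraine"),
    ("aspirin", "contraindicated_with", "stomach_ulcer"),
]

def direct_answers(relation, obj):
    """Answer 'what <relation> <obj>?' straight from the stored facts."""
    return [(s, [(s, relation, obj)]) for s, r, o in TRIPLES if r == relation and o == obj]

def may_relieve(condition):
    """One explicit rule: if X treats Y and Y is a symptom of `condition`, then X may relieve it."""
    answers = []
    for x, r1, y in TRIPLES:
        if r1 != "treats":
            continue
        for y2, r2, z in TRIPLES:
            if r2 == "symptom_of" and y2 == y and z == condition:
                answers.append((x, [(x, "treats", y), (y, "symptom_of", z)]))
    return answers

print(direct_answers("treats", "headache"))  # aspirin, with the supporting fact attached
for drug, proof in may_relieve("migraine"):
    print(f"{drug} may relieve migraine; proof chain: {proof}")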

The latest developments in applied AI show that we can get far by learning from observations and empirical data, but contextual knowledge is needed to make applied AI models trustworthy and robust in changing environments.

INNOVATION TAKEAWAYS

HYBRID APPROACHES are needed to model and use causality, counterfactual thinking, problem solving, and structural knowledge of context.
NEURAL-SYMBOLIC PROCESSING combines the benefits of connectionist and symbolic approaches to solve issues of trust, proof, and explainability.
CONTEXTUAL KNOWLEDGE is needed: AI must model more of the world to be able to understand the physics, logic, causality, and analogy in its surroundings.

Interesting read?

Data-powered Innovation Review | Wave 3 features 15 such articles, crafted by leading Capgemini and partner experts in data who share their life-long experience and vision in innovation. In addition, several articles were created in collaboration with key technology partners such as Google, Snowflake, Informatica, Altair, AI21 Labs, and Zelros to reimagine what’s possible.