Artificial intelligence (AI) is a dragon with many heads. An increasing number of products include some kind of intelligence and nearly all businesses are trying to find out where to push more intelligence into their products and services with the promise of increasing market share and quality of decisions.
We often perceive AI as the artificial equivalent of human intelligence (although it can be more than that), which consists of at least the ability to learn while applying problem-solving capabilities. In practice, most of the AI-related solutions and products that exist today exclusively perform some kind of machine learning (ML) to classify, categorize, or predict data and events, sometimes with great success.
However, although these ML-based algorithms do a great job in classifying, categorizing, or predicting events based on historic data, transposing their outcomes into the real world for informed decision-making often proves to be problematic. As shown by certain AI failures, for example in the context of self-driving cars, trading bots in the financial market, automated recruitment, or self-learning chatbots, AI is not flawless by default. A correctly scaled AI application in production may require something more.
Even the latest and greatest ML algorithms only perform within a relatively limited context and setting, and the quality of their decisions tends to fall sharply when they stray outside their “area of competence,” which is strictly defined by the examples provided for training and model evaluation.
Since it is virtually impossible to cover all the situations that an applied ML model might have to handle in production, problem-solving capabilities could be added. “Problem-solving” refers to the ability to use contextual information and reason based on logical rules, analogy, and similarity with learned or remembered facts. In contrast with ML, which depends on large datasets, logical reasoning can be performed on single facts and observations alone, if required.
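To make the contrast concrete, here is a minimal sketch of rule-based reasoning over individual facts, as opposed to an ML model that needs a large training set. Facts are represented as (subject, predicate, object) triples and a rule derives new triples until nothing changes; all names and the transitivity rule are hypothetical illustrations, not a real API.

```python
def forward_chain(facts, rules):
    """Apply each rule to the fact set until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            # Materialize the rule's output before mutating the set.
            new = set(rule(derived)) - derived
            if new:
                derived |= new
                changed = True
    return derived

# Hypothetical rule: "part_of" is transitive.
def transitivity(facts):
    for (a, p1, b) in facts:
        if p1 == "part_of":
            for (b2, p2, c) in facts:
                if p2 == "part_of" and b2 == b:
                    yield (a, "part_of", c)

# Two single observations are enough to draw a conclusion --
# no training data required.
facts = {
    ("wheel", "part_of", "car"),
    ("car", "part_of", "fleet"),
}
result = forward_chain(facts, [transitivity])
# ("wheel", "part_of", "fleet") is now derivable.
```

A production reasoner would of course support richer rule languages and far larger fact sets, but the principle is the same: conclusions follow from explicit logical rules applied to individual facts.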
Putting ML and problem-solving capabilities together provides a very powerful combination for tomorrow’s data-powered enterprises where context and adaptiveness are key. Contextual knowledge plays an important role in increasing the quality of decisions made in the data-powered enterprise, as it extends AI’s area of application. In practice, this means including knowledge about specific relationships such as company structures, connections between products, knowledge about the real world, physical laws, country laws, or simply complex information from other parts of the organization.
By making more knowledge available about the context in which decisions are made, problem solving will become better informed and easier to explain. Recognizing whether ML models are being applied correctly, or whether there is contextual information still to consider, will help you keep your competitive edge.
The rise of graph technology
Currently, we are seeing a rise in global awareness around such issues in the field of AI. Recent advances in knowledge representation in distributed systems show promising results. Advances are based on using graph representation for capturing semantics using logic and making the results available as machine-readable contextual information.
Graphs have the additional advantage that they can be navigated fast. If equipped with the correct level of semantic representation, they can also be used to integrate knowledge coming from different sources in an automated and elegant manner as a basis for reasoning. So, graphs are a good way to represent contextual knowledge for use throughout the business and are well suited for publishing reference data in a solution-independent and future-proof way.
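The “navigated fast” claim can be illustrated with a toy contextual-knowledge graph stored as an adjacency map: gathering everything connected to an entity is a matter of following edges rather than joining flat tables. All entities and relations below are made up for illustration.

```python
from collections import deque

# Hypothetical contextual knowledge: products, suppliers, locations.
graph = {
    "ProductA": [("made_by", "SupplierX"), ("variant_of", "ProductB")],
    "SupplierX": [("located_in", "Germany")],
    "ProductB": [("made_by", "SupplierY")],
    "SupplierY": [],
    "Germany": [],
}

def reachable(graph, start):
    """Breadth-first traversal: every entity connected to `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for _relation, neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# The full decision context for ProductA, collected in one traversal.
context = reachable(graph, "ProductA")
```

Real graph databases add indexing, query languages, and semantics on top, but traversal-as-lookup is the core reason graphs suit contextual knowledge.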
- Independence: Reference data, master data, and golden records should be governed, maintained, and published in representational formats that do not put unnecessary constraints on their use (which is difficult to foresee in a dynamic and rapidly changing world), preferably using open standards and protocols.
- Flexibility: Graph models are less rigid than more traditional schema-based methods, allowing virtually costless extensions to data models when new information becomes available.
- Identifiability and merging: Representing reference data as knowledge graphs makes it easy to merge and align them with other knowledge-graph-based datasets, thanks to their globally identifiable and reusable object representations.
- Accountability and trust: Knowledge graphs, when implemented correctly, make it possible to trace knowledge back to its origin, even when it is aligned into a “mesh” dataset.
- Visualization: Human beings are highly visual in how they interpret and process knowledge, and knowledge graphs can easily be visualized as interpretable network representations.
Thought-provoking as it may sound, the development of data-powered enterprises will benefit greatly from mimicking human ways of storing and processing information. Graphs help to achieve exactly that.
At the moment, we are only on the brink of a development that is hard to foresee and that will presumably exceed all our current expectations. Globally, the main actors in knowledge-intensive workflows (in life science, manufacturing, financial services, the public sector, and retail) are increasingly investing in upgrading the way they represent contextual information and awareness. This boosts automation efforts through ML and will ultimately enable AI to realize its true potential.
Data-powered Innovation Takeaways
- Narrow focus: Increasingly powerful as their capabilities may be, AI systems are restricted by the scope of the learning data they are provided with.
- Only human: Graph systems are based on the typical human way of navigating, accessing, and understanding data.
- Culture tool: Graphs are an effective, intuitive way to explore and discover data, helping the organization to improve its data culture.
- Dynamic duo: AI systems become more understandable and more effective in their problem-solving capabilities when combined with graph technologies.
About the Author
CTO Insights & Data Capgemini Europe
I bring the dos and don’ts of Artificial Intelligence to our European Key Accounts, oversee the market, create and contribute to key service offerings, and am involved in partner selection and activation. I have a long-term and deep interest in topics and tangible things related to machine learning and artificial intelligence. My wider interests include Semantics, Knowledge Representation, Reasoning, Machine Learning (in all its different colours and shapes), and putting it all together in more (or less) intelligent ways.
Where technology meets people, a background in cognitive psychology comes in handy. That’s where the fun starts, and that’s where I want to be. Utilizing, explaining, producing and creating scenarios, solutions and understanding for new challenges and situations where AI & ML come around the corner.
I hold a PhD in Machine Learning from the Technical University of Karlsruhe (now KIT). I’m also a regular keynote speaker and have published articles on various topics in artificial intelligence, machine learning, semantic web technology, information representation, knowledge management and computer linguistics.