AI is mainly based on machine learning algorithms that learn from data – the underlying approach is data science. With this in mind, let’s start with a crash course on data science.
Crash course #1 – data science
It is a truth universally acknowledged that company performance depends on several factors, many of which are variable. If life is prone to inconsistency, so is business. Much of this stems from the unpredictability of human behavior, which is why it is worth exploring alternative approaches to grasping these fluctuations.
Data science provides a data-based approach for managing business performance and accommodating these twists and turns. To build a model, data scientists don’t need to create ingenious bespoke algorithms, as is commonly believed. Instead, they need to follow a pragmatic process.
Building the model is like assembling gears to create a mechanism that can be applied to data. The only systematic and consistent approach is the scientific method – in other words, an inductive and iterative process. We make assumptions from the data about the underlying mechanisms – explaining the fluctuations and correlations we observe – and then we identify the models that could reproduce these observations.
We then check the assumption by testing it on new, real data; if the hypothesis is wrong, we repeat this process until we arrive at a good model. The process reveals a kind of “chicken-and-egg” dilemma (see Figure 1) between data and model – data is needed to determine the model, and the model is necessary to leverage the data and reveal its value.
There are actually two extreme cases. The first corresponds to the situation in which data quality is so poor that we cannot determine the right model to explain it. The other extreme corresponds to the ideal situation, in which we have perfect data, but the fluctuations and correlations are so subtle and sparse that one cannot determine which combination of models can explain them. The success of the approach depends on the talent of data scientists and their ability to be genuinely inductive, relevant, and creative with the data.
Many such data scientists have typically studied physics, mathematics, or engineering sciences; overlaid on this is a sound knowledge of business, enabling them to leverage data and interpret the model that underlies it. Technology’s supporting role is to facilitate the outcome.
Figure 1. The data/model dilemma
Crash course #2 – machine learning
An important aspect of this technology is machine learning. This is traditionally defined as a form of AI that enables computer systems to learn without being explicitly programmed.
There are three main types of learning process:
- Unsupervised learning – in which the algorithm learns from data that has not been labeled by humans. For instance, clustering algorithms may summarize data into a small number of clusters, grouping points according to a common measure of similarity
- Supervised learning – in which the algorithm learns from data that has been labeled by humans. For example, algorithms may predict a state (no or yes, i.e., 0 or 1) or a quantity associated with a combination of data variables
- Reinforcement learning – in which the algorithm learns from data in order to maximize a reward. An example might be an algorithm that learns to play chess by winning or losing.
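To make the supervised case concrete, here is a deliberately tiny sketch: a one-nearest-neighbour classifier that “learns” labeled examples and predicts a 0/1 state for a new input. The data, labels, and function names are invented purely for illustration.

```python
# Toy supervised learning: predict the label of the closest
# labeled example (one-nearest-neighbour on a single variable).

def predict(labelled, new_point):
    """Return the label of the training example nearest to new_point."""
    closest = min(labelled, key=lambda pair: abs(pair[0] - new_point))
    return closest[1]

# Human-labeled data: (measurement, label) pairs, e.g. 0 = "no", 1 = "yes".
labelled = [(1.0, 0), (1.5, 0), (4.0, 1), (4.5, 1)]

print(predict(labelled, 1.2))  # near the 0-labeled examples -> 0
print(predict(labelled, 4.2))  # near the 1-labeled examples -> 1
```

Even this toy model captures the essence of supervised learning: the labels come from humans, and the algorithm generalizes from them to new inputs.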
Machine learning is also called statistical learning. Unlike natural intelligence, it needs a huge amount of data from which to learn. Whereas a child learns to identify cats and dogs from only a few examples, “deep learning” algorithms need many, many more.
In a business context, data scientists gather data that is representative of business operations. This sample should be large enough to be statistically significant, so that it can be split into three data sets (see Figure 2).
Figure 2. The three data sets needed for machine learning
The first, the Learning Set, is the set to which we apply the scientific method that we described above. With this data set, we identify the model and the features that are important.
The second data set, the Validation Set, can then be used to fine-tune the model so as to avoid overfitting issues; and the third and final data set, which we might call the Testing Set, is used to check the predictive power of the model on data it hasn’t yet encountered.
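The three-way split can be sketched in a few lines. The 60/20/20 proportions and the helper function below are illustrative assumptions, not a prescription:

```python
import random

def split_three(data, seed=0, frac_learn=0.6, frac_val=0.2):
    """Shuffle the sample, then split it into Learning,
    Validation and Testing sets (remainder goes to Testing)."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_learn = int(len(shuffled) * frac_learn)
    n_val = int(len(shuffled) * frac_val)
    learn = shuffled[:n_learn]
    val = shuffled[n_learn:n_learn + n_val]
    test = shuffled[n_learn + n_val:]
    return learn, val, test

sample = list(range(100))            # stand-in for 100 business records
learn, val, test = split_three(sample)
print(len(learn), len(val), len(test))  # 60 20 20
```

Shuffling before splitting matters: it keeps each set statistically representative of the whole sample.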
Overfitting? What’s that? Before we answer, let’s look at underfitting. This happens when the model bears little relation to the data to which it’s applied, so it cannot explain anything. It is rather like the “idiot goldfish,” which can neither learn nor remember: it cannot account for the value of the data or its fluctuations.
Overfitting is the opposite – the model assiduously tries to accommodate every data point. This makes it dangerous, because its predictions for a new data input that was not used during the learning process can be dramatically wrong. Such a model has learned all the training data by heart; faced with new data, it is simply unable to make correct predictions, like a student who memorized the material without understanding it.
The role of the Validation Set is to help find a model midway between these two extremes – one that learns its input data almost, but not quite, by heart.
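As a toy illustration of this middle way (with invented data), the sketch below scores three models on a small validation set: one that underfits by always predicting the average, one that overfits by memorizing the learning set, and a middle-way model that fits a straight line by least squares.

```python
# Invented learning data following roughly y = 2x plus noise.
learning_set = [(0.0, 0.1), (1.0, 2.2), (2.0, 3.9), (3.0, 6.1)]
validation_set = [(0.5, 1.0), (2.5, 5.0)]  # held-out points

n = len(learning_set)
mean_x = sum(x for x, _ in learning_set) / n
mean_y = sum(y for _, y in learning_set) / n

def underfit(x):
    return mean_y  # ignores the input entirely: the "idiot goldfish"

memory = dict(learning_set)

def overfit(x):
    return memory.get(x, 0.0)  # perfect on seen inputs, lost on new ones

slope = (sum((x - mean_x) * (y - mean_y) for x, y in learning_set)
         / sum((x - mean_x) ** 2 for x, _ in learning_set))

def middle_way(x):
    return mean_y + slope * (x - mean_x)  # least-squares straight line

def validation_error(model):
    """Sum of squared errors on the held-out validation set."""
    return sum((model(x) - y) ** 2 for x, y in validation_set)

for name, model in [("underfit", underfit), ("overfit", overfit),
                    ("middle way", middle_way)]:
    print(name, round(validation_error(model), 2))
# The straight line wins: low error on data it has never seen.
```

The memorizing model scores perfectly on its learning data yet worst on the validation set, which is exactly the failure mode the Validation Set is there to catch.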
Because machine learning algorithms form the basis of the current state of AI, we can expect machines performing AI to make errors systematically. This is why it is still easy to distinguish between machines and humans. Consider a Captcha test. Each picture can be identified with an accuracy that is always less than 100% – say, 90%, or 0.9. With 3 x 3 = 9 pictures to classify, the probability of getting them all right is only about (0.9)^9 ≈ 39%! This is why Captcha tests are good at distinguishing between humans and robots – machine errors occur at predictable rates.
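The Captcha arithmetic is easy to verify: nine independent classifications at 90% accuracy each compound into a much lower overall success rate.

```python
# Probability of classifying all nine Captcha pictures correctly,
# assuming each is identified independently with 90% accuracy.
per_picture = 0.9
all_nine = per_picture ** 9
print(round(all_nine, 3))  # 0.387 -- roughly 39%
```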
The Turing test is similar (see Figure 3). Even though recent headlines report machines passing the Turing test, this is really more about artificial stupidity than artificial intelligence. One can easily identify a machine by asking something like: “What is the logarithm of the age of the universe multiplied by pi?” A machine would attempt an answer; a person would probably shrug and laugh.
Figure 3. The Turing test