
Putting the “quantum” into machine learning

Barry Reese
22 Sep 2022

While applying quantum computing to machine learning for quality assessment, as part of the BMW Group’s Quantum Computing Challenge, we found that combining classical and quantum approaches makes it possible to build a more accurate model using less training data.

In the first article in this series, Our holistic approach to the BMW Group’s quantum computing challenge, we outlined Capgemini’s fruitful participation in the BMW Group’s Quantum Computing Challenge, and our holistic approach to applying quantum to automated quality assessment. In this blog, I’ll focus on our work around the BMW Group’s specific requirement to investigate the relevance of quantum techniques and technologies to machine learning (ML) in the context of quality assessment.

I was asked to lead this element of the project because I’ve worked for more than 10 years on artificial intelligence and ML, implementing them within the automotive industry, among other sectors. For the published use case, I collaborated with members of Capgemini’s established community of quantum experts as well as automotive specialists.

Can quantum techniques enhance machine learning?

Vehicle manufacturing plants need to check all industrial components for flaws such as cracks. These flaws are rare, but the impact of missing one in the multi-stage manufacturing process would be serious. Classical (i.e. non-quantum) ML can partially automate the task by reviewing camera or infrared images and assigning each one to either “good” or “defective” categories.
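To make that classical baseline concrete, here is a minimal sketch of the kind of CNN classifier described above, written in PyTorch. The layer sizes and the 64×64 grayscale input are illustrative assumptions, not the model actually used in the challenge.

```python
# A minimal binary "good" vs "defective" image classifier in PyTorch.
# Layer sizes and the 64x64 grayscale input are illustrative assumptions.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # two logits: "good" and "defective"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: a batch of 8 single-channel 64x64 images -> 8 pairs of class logits.
logits = DefectClassifier()(torch.randn(8, 1, 64, 64))
```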

We wanted to know whether quantum machine learning (QML) could help overcome two major limitations of classical ML in this context. One limitation is that auto manufacturers are typically looking for exceedingly small defects in exceedingly large, high-resolution images, which is computationally expensive. A second limitation is that any ML model has to be fed with appropriate data in order to learn – but because current automotive quality processes are already so good (though not perfect), it can take years to accumulate enough real-life examples of flaws to train a classical ML model.

Combining quantum and classical

To explore whether quantum can help, we took a classical convolutional neural network (CNN) – today’s most common image classification tool – and combined it with a “quanvolutional” neural network (QNN) as explained by Reese et al. We split each large image of a component into small parts and feed them all into a QNN layer that pre-processes the image. The output from this layer is in turn fed into a classical CNN that predicts whether the component is defective.
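As an illustration of what such a quanvolutional pre-processing layer can look like, here is a minimal sketch in PennyLane. It assumes 2×2 image patches encoded on four qubits with a simple entangling layer; it is not the circuit we used in the challenge.

```python
# A minimal quanvolutional filter sketch in PennyLane, assuming 2x2 image
# patches and 4 qubits; not the actual circuit used in the challenge.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanvolution(patch):
    # Encode the four pixel values (scaled to [0, pi]) as rotation angles.
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # Entangle the qubits so the output features mix information across pixels.
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # One expectation value per qubit -> four output features per patch.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# Example: pre-process a single 2x2 patch with pixel intensities in [0, 1].
features = quanvolution(np.array([0.1, 0.9, 0.4, 0.0]))
```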

The benefit of pre-processing in a QNN layer is that it allows us to take advantage of quantum concepts, such as entanglement, to add “depth” to the image by embedding features selected through the quantum circuit. Using quantum kernels instead of classical ones, we can find patterns that would otherwise go undetected. In Figure 1 below, the effect of the quantum pre-processing is visible in the enriched quantum images. This means that the ML model can consider many additional possibilities and can learn faster from the enhanced data.
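To show how this enrichment could work in practice, the sketch below scans a quanvolutional circuit (such as the one above) over non-overlapping 2×2 patches, turning a single-channel image into a four-channel feature map. The shapes and stride are illustrative assumptions, not our production pipeline.

```python
# A sketch of how quanvolutional pre-processing could add "depth": every
# non-overlapping 2x2 patch becomes one output pixel with four channels,
# one per measured qubit. Shapes and stride are illustrative assumptions.
import numpy as np

def quanvolve_image(image, circuit):
    """Map an (H, W) grayscale image to an (H//2, W//2, 4) feature map."""
    h, w = image.shape
    out = np.zeros((h // 2, w // 2, 4))
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            patch = image[r:r + 2, c:c + 2].flatten()
            out[r // 2, c // 2] = np.asarray(circuit(patch), dtype=float)
    return out

# The enriched four-channel image is what the classical CNN is trained on,
# e.g. enriched = quanvolve_image(component_image, quanvolution)
```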

The reason for splitting the image is to overcome the fact that large volumes of data can’t be loaded into a quantum device without sacrificing resolution. This is a current hardware limitation that is likely to remain for the foreseeable future.

Figure 1. The effect of image splitting and our quantum circuit

An innovative approach to image processing

Tested against the benchmark of an unenhanced classical CNN, the combined model exceeds expectations. It learns to generalize faster from less input data and achieves above 97% accuracy, compared to 80% for our benchmark classical model. Our quantum model reaches this level of accuracy with a 40% train / 60% test split, whereas a typical classical model needs 70% train / 30% test. Because the most expensive parts of the process are obtaining training data and then locating a crack once it has been identified, a model that needs fewer training images and locates flaws more accurately with less data overcomes both of the limitations of classical ML that concerned us.
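For readers less familiar with train/test splits, the snippet below (with placeholder data) shows how the two regimes compared here might be set up with scikit-learn; the dataset and variable names are illustrative only, and the accuracy figures above come from our actual experiments, not from this sketch.

```python
# Illustrative setup of the two train/test regimes with scikit-learn,
# using placeholder data: 40% training data for the hybrid model versus
# 70% for the classical benchmark.
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(200, 64, 64)        # placeholder component images
labels = np.random.randint(0, 2, size=200)  # 0 = good, 1 = defective

# Hybrid quantum-classical model: 40% train / 60% test.
Xq_train, Xq_test, yq_train, yq_test = train_test_split(
    images, labels, train_size=0.4, stratify=labels, random_state=0)

# Classical benchmark model: 70% train / 30% test.
Xc_train, Xc_test, yc_train, yc_test = train_test_split(
    images, labels, train_size=0.7, stratify=labels, random_state=0)
```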

This sustainable, efficient, and innovative approach to image processing could be useful in any situation where the outcomes are critically important and training data is expensive or scarce, or where there’s a new pattern of failure (for example, in a new component).

Since only pre-processing takes place on a quantum platform, this quantum-inspired solution is effectively a hybrid one and can be implemented mostly on classical machines. Therefore, we believe that using our approach, clients can benefit from quantum thinking and experimentation before quantum hardware matures fully.

Proud though we are of our QML achievement, the real power of our approach derives from our holistic perspective on the BMW Group’s challenge, which revealed unforeseen opportunities for application of other aspects of quantum technology. Our next blog will look at one of the most important of those opportunities: enhancing image capture and sensing through quantum.

Barry Reese

Quantum Machine Learning Lead
As a Quantum Machine Learning expert, my passion is finding solutions that improve people’s lives and work. My mission is to investigate and build quantum applications on near-term quantum devices and to understand how quantum can be transformative for the future of computing. Quantum technologies allow me to learn something new every day.