
The BMW quantum challenge led us towards a possible adoption roadmap for complex manufacturers

Edmund Owen
18 Nov 2022

The excitement around quantum technology is tempered by some pretty formidable factors. Not only is it expensive, but it can also be difficult to predict when it will start to deliver commercially relevant results.

Competing in the recent BMW Group Quantum Computing Challenge gave us the chance to delve into this conundrum – and start to light the way for manufacturers who need to know when quantum technology will overtake classical approaches in terms of performance and cost.

Our conceptual work on this evaluative road-mapping framework came during the second round of the BMW Group’s global crowd-sourced innovation initiative. I’m a quantum physicist here at CC and joined forces with colleagues from the wider Capgemini Group, as part of Capgemini’s Quantum Lab, to tackle the organisers’ use case challenge on Machine Learning (ML) for Automated Quality Assessment. You can read more about the main thrust of our activities, which focused on a holistic approach to developing a quantum ML algorithm, in my previous blog on the challenge.

After successfully qualifying as a finalist – a heartening achievement given that the challenge attracted some 70 submissions from around the world – our team decided to invest time in pushing the boundaries of our thinking. Specifically, we explored ways to assess the ML model’s viability and scalability, with a view to forging a roadmap for quantum technology adoption by the BMW Group.

We compared various algorithms to establish the size of quantum platform that would be needed to outperform a classical solution. The team also drew on its broad expertise in AI, ML and quantum engineering to assess the feasibility of transferring classical ML approaches and tools to quantum platforms.

Transferability between classical and quantum platforms

I’ll dig a little deeper into the detail in a moment, but essentially our key finding was that the approaches and tools were indeed transferable between quantum and classical platforms. The next step was to develop a framework for testing and comparing the two, to establish the size of platform needed for one to outperform the other. The idea is that, by mapping this against a company’s development plans, it becomes possible to predict when quantum will displace classical.

Within the time limits of the challenge there wasn’t an opportunity to develop the concept in more detail, but I certainly believe that it represents a viable route to road-mapping for any manufacturing company contemplating the adoption of quantum technology.

For the more technically minded, here’s a closer look at the metric we used. Capacity measures the ability of an ML model to approximate functions. For instance, a straight line fitted to a set of points cannot model non-linear relationships between the variables. A higher-order polynomial, such as a quadratic function, can capture a greater range of functions and therefore has higher capacity. Models with high capacity are better at learning complex features within a data set, allowing them to better differentiate between categories such as fractured versus normal car parts on a production line. However, high capacity has drawbacks of its own, notably a greater risk of overfitting to the training data.
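
To make the idea of capacity a little more concrete, here’s a minimal sketch (illustrative only, not code from our submission) that fits a straight line and a quadratic to data drawn from a non-linear relationship and compares their errors:

```python
import numpy as np

# Illustrative data: a non-linear (quadratic) relationship with a little noise.
rng = np.random.default_rng(seed=0)
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.05, size=x.size)

# Fit a degree-1 model (straight line) and a degree-2 model (quadratic)
# and compare how well each captures the underlying relationship.
for degree in (1, 2):
    coeffs = np.polyfit(x, y, deg=degree)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: mean squared error = {mse:.4f}")

# The straight line cannot represent the curvature, so its error stays large;
# the quadratic has the higher capacity and fits the data far better.
```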

By benchmarking the classical and quantum algorithms against the same metrics, our approach indicated that the quantum version would outperform its classical counterpart when run on systems of an equivalent size. Quantum computers are still small, but engineering breakthroughs are rapidly pushing the boundaries of what’s possible. It’s reasonable to predict that it’s only a matter of time before quantum algorithms outperform today’s classical computers.
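
To give a flavour of how such a benchmark could feed a crossover estimate, here’s a minimal sketch that compares two capacity curves – one for a classical model as a function of its resources, one for a quantum model as a function of qubit count – and reports the smallest platform size at which the quantum curve pulls ahead. The functional forms below are placeholders for the metrics you would actually measure, not results from our submission:

```python
import numpy as np

# Placeholder capacity curves. In practice these would come from benchmarking
# the classical and quantum models on the same task with the same metric.
def classical_capacity(platform_size: int) -> float:
    return float(np.log2(platform_size + 1))   # assumed: slow growth with resources

def quantum_capacity(platform_size: int) -> float:
    return 0.5 * platform_size                 # assumed: faster growth with qubit count

def crossover_size(max_size: int = 200):
    """Smallest equivalent platform size at which the quantum model pulls ahead."""
    for size in range(1, max_size + 1):
        if quantum_capacity(size) > classical_capacity(size):
            return size
    return None

print("estimated crossover platform size:", crossover_size())
```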

Our submission to the challenge only considered one aspect of the proposed algorithm. A complete analysis would assess the trade-offs between a variety of factors, such as cost, expected quantum computing development timelines and other ML measures. Identifying trends in algorithm performance as a function of computing power and cost would enable companies to pinpoint when they should plan to incorporate a given quantum algorithm into their processes.
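
As an illustration of that last point, the sketch below combines a crossover size from a benchmark like the one above with a hypothetical hardware roadmap and a hypothetical cost model to suggest the earliest year a company might plan for adoption. Every number and function here is a stand-in for figures a real analysis would have to supply; none of it is a forecast:

```python
# Hypothetical roadmap: projected usable platform size by year (stand-in values).
projected_platform_size = {2024: 50, 2026: 200, 2028: 1000, 2030: 5000}

# Hypothetical inputs from the benchmarking exercise and the business case.
crossover_size = 400            # platform size at which the quantum model wins
acceptable_cost_per_run = 10.0  # budget threshold, arbitrary units

def projected_cost_per_run(year: int) -> float:
    # Placeholder cost model: assume costs fall steadily as the technology matures.
    return 100.0 * 0.6 ** (year - 2024)

def earliest_adoption_year():
    for year, size in sorted(projected_platform_size.items()):
        if size >= crossover_size and projected_cost_per_run(year) <= acceptable_cost_per_run:
            return year
    return None

print("earliest year worth planning for:", earliest_adoption_year())
```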

It was a real thrill to take the lead on these ideas, which sprang from the BMW Group’s request for team Capgemini to consider scaling and algorithm comparison from a theoretical perspective during the second phase of the challenge. And it was great to bring in my CC colleagues Joseph Tedds and Jacob Swain to add their insights and expertise to the submission. As I mentioned in my previous blog on the BMW Group challenge, there’s plenty more content to catch up on from Capgemini’s perspective. Meanwhile, please email me if you have any questions or observations on the topics I’ve been exploring.

Author – Edmund Owen, with contributions from the Quantum team including Julian van Velzen, Christian Metzl, Barry Reese, Joseph Tedds and Jacob Swain 


 

Edmund Owen

Principal Quantum Physicist at Cambridge Consultants (Capgemini Invent)
Edmund combines his experience in modelling and quantum systems with the expertise of engineers, programmers and designers to develop quantum products that provide practical solutions to commercially and socially relevant problems.