Michael Schrage, MIT
Michael Schrage is a Research Fellow at the MIT Sloan School’s Initiative on the Digital Economy. He is the author of Serious Play (HBR Press), Who Do You Want Your Customers to Become? (HBR Press) and, most recently, The Innovator’s Hypothesis (MIT Press), which looks at how to develop innovation in a quick, cost-effective way. Michael is a columnist for Harvard Business Review, Fortune, CIO Magazine and MIT’s Technology Review, and is widely published in the business press.
Capgemini’s Digital Transformation Institute spoke to Michael to gather his views on what AI means for large organizations.
Machine Learning more important than traditional AI
How would you define Artificial Intelligence?
We first have to distinguish traditional AI from machine learning. Many people understand “Artificial Intelligence” as software that appears to replicate, mimic, or effectively copy human thinking or cognitive processes. With machine learning, the algorithm is capable of learning, in both supervised and unsupervised ways, from the data it is trained on. I believe that machine learning is really becoming a much more important pillar and organizing principle in algorithmic, systems, and process design. We see this in companies like Google, Facebook, and Netflix. We see this emerging in organizations that are exploring the Internet of Things.
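The supervised/unsupervised distinction Schrage draws can be sketched in a few lines of Python. This is a toy illustration only, not production machine learning; the data values and variable names are invented for the example. The supervised case learns from labeled examples (a nearest-centroid classifier), while the unsupervised case must discover structure in unlabeled data on its own (a simple 1-D k-means with k=2):

```python
# Toy 1-D "usage hours" data for two kinds of users (values are made up).
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]

def centroid(points):
    return sum(points) / len(points)

# --- Supervised: labels are provided, so we learn a decision rule from them.
labels = ["light", "light", "light", "heavy", "heavy", "heavy"]

classes = {}
for x, y in zip(data, labels):
    classes.setdefault(y, []).append(x)
# One centroid per labeled class: a nearest-centroid classifier.
model = {label: centroid(points) for label, points in classes.items()}

def predict(x):
    # Assign x to the class whose centroid is closest.
    return min(model, key=lambda label: abs(x - model[label]))

print(predict(1.1))  # near the "light" centroid
print(predict(8.0))  # near the "heavy" centroid

# --- Unsupervised: no labels, so the algorithm finds the clusters itself.
# Simple 1-D k-means (k=2): assign points to nearest centroid, then update.
c1, c2 = min(data), max(data)
for _ in range(10):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = centroid(g1), centroid(g2)

print(sorted(g1))  # one cluster, discovered without any labels
print(sorted(g2))
```

The point of the contrast: both halves end up separating light from heavy users, but the supervised version needed a human to label the training examples, while the unsupervised version inferred the grouping from the data alone.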
Where do you think we are with the development of AI?
What AI already allows most organizations to do today is hollow out or replace ordinary cognitive work. On average, it is more economical, and makes more business sense, to invest in AI systems that reliably manage data and processes in ‘above average’ ways than to invest in ‘typical’ human beings or groups. I believe that at the very top of the pyramid there are still measurable and significant human advantages. But right now, with AI in combination with machine learning programs, you can design business processes where the system consistently, reliably, and cost-effectively outperforms its human counterparts. We see this in financial services; we’re seeing more of it in manufacturing and industrial controls; we’re certainly seeing it in IT/digital services. It is a very bad time to be average in performance. Even being above average no longer commands a market premium.
AI offers significant opportunity for data leaders in a compliance-oriented world
Where do you think AI will have the most impact?
AI will be quicker to enter industries that are heavily regulated. That’s because compliance- and regulatory-driven industries are very complicated, but they generate lots of data. Also, compliance is an overhead cost that must be reduced. So, wherever there is intense regulatory oversight – such as finance, energy, pharmaceuticals, etc. – you will see disproportionate investments in AI and ML.
How can organizations realize the transformative impact of AI?
Companies need to manage data as an asset to get meaningful and measurable economic returns. Start there. Every company’s data managers fall into three categories. Immature managers ask, ‘how do we manage this data?’ More mature managers ask, ‘how do we get value from this data?’ And superlative managers ask, ‘what kinds of partnerships, governance, and technological investments do I need to make to get best-in-class returns on this asset?’
AI success is a question of leadership
What separates companies that make a success of AI from the laggards?
AI leaders have a policy and process around data governance and treat data as an asset. How they oversee and share data is critical to both efficiency and growth. They also have key problems or business cases that lend themselves to known structures for AI and ML algorithms, such as intrusion detection systems, trading in financial services, predicting subscriber churn, or identifying certain kinds of customers. These companies view AI as an enabler and they are ready to experiment. They’ve got the digital/IT infrastructures to do so.
How should organizations run AI initiatives – top-down or bottom-up?
Pockets of excellence almost always exist, so that’s a tough question to answer without knowing the firm. Many financial services companies, for example, are doing excellent machine learning and data science work. But it becomes like a ghetto. They cannot move it to other parts of the organization, because the other parts of the organization don’t understand it, or don’t have the talent, or are concerned that people might lose their jobs. These are human issues that have nothing to do with the capabilities of the technology and everything to do with the culture of the organization and the quality of its leadership.
The future of jobs and work
How do organizations get the right AI talent, and can they upskill their existing people?
Can organizations hire data scientists when they are competing against the likes of Google or Netflix? If you are a tier-1 company, you probably have a decent chance of getting quality data science, machine learning and AI folks. But if you’re a tier-2 company, what do you offer besides lots and lots of money?
Organizations need to know what it costs to turn a good 40-year-old coder and developer into an ML or AI coder and developer. What portion of mid-career people can be converted in such a manner? I have no good answer to that. But I think that is the dominant human capital question going forward. I worry less about folks in their 20s, I worry about people in their 40s.
What kind of jobs do you think are at immediate risk of being disrupted by AI?
At this stage it is still not clear whether the level of job generation will match job destruction. I don’t know. It is challenging to say whether multiple professions will go away or whether how you create value in those professions fundamentally changes. The name may be the same but what you do and the technology you’re doing it with are different. For instance, the idea that radiologists in the year 2025 will be doing largely what they did in 2005 strikes me as difficult to believe. Similarly, in America, it is easy to imagine scenarios where judgments for certain kinds of arbitration or civil lawsuits are rendered algorithmically rather than by human beings making arguments in front of a judge.
More specifically, I believe that the compliance staff at pharma and financial services companies will be cut by between half and two-thirds within five to seven years. This is precisely because large organizations are using AI and ML to manage the compliance process, and the lawyers who remain will largely supervise algorithmic outcomes rather than people. You won’t need nearly as many people to do it.
Battle between process efficiency and user experience
How do you see AI evolving in the future?
I see a big battle in ML and AI between process efficiencies and user experience. Amazon, Netflix and Google are good examples of organizations that try to balance user experience with process efficiencies. Financial services companies and industrial companies are torn, because their first goal is improving their return on assets. This biases their use of ML and AI: do we improve our internal process efficiencies, or do we give our customers and users a better experience and better value? I think that will be the great schism in the market over the next 5 to 10 years.