BPO Thought Process


Artificial Intelligence – An Illusion or a Reality?

Category: Analytics
Everywhere I turn, I seem to encounter discussions on Machine Learning and Artificial Intelligence (AI). The Nasscom conference on Big Data and Analytics in June was heavily AI-focused. The cover of last week’s issue of The Economist reads “March of the Machines”, accompanied by a special report on AI. Analytics websites are full of it.
 
So why is the analytics community so upbeat about this technology?
 
While the earlier phase of robotics was driven primarily by the technology community, the analytics community is now shaping the AI landscape alongside it. The reason for this is fundamental:
 
Robotics was about embedding rules into technology to automate processes, while AI is about embedding analytics into the process through technology. In a hypothetical scenario where every process has AI in it, there would be no need for analytics outside of the process because all data would be analyzed at source and the actions based on that analysis already taken! 
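A minimal sketch may help make the contrast concrete (the function and field names here are purely illustrative, not taken from any real system): a rule-based bot executes a threshold a human chose, whereas an AI-enabled process asks a learned model to score each case at source and acts on that score directly.

```python
# Illustrative sketch only: rule-based automation vs. analytics embedded in the process.

def rule_based_decision(invoice):
    # Robotics/RPA style: a human wrote the rule; the bot simply executes it.
    if invoice["amount"] > 10_000:      # fixed, hand-crafted threshold
        return "escalate_to_human"
    return "auto_approve"

def ai_embedded_decision(invoice, risk_score_fn):
    # AI style: a learned model scores the case at source and the action follows
    # from that score, with no separate analysis step outside the process.
    score = risk_score_fn(invoice)      # e.g. probability of fraud learned from history
    return "escalate_to_human" if score > 0.8 else "auto_approve"

# Toy usage with a stand-in "model" (in reality this would be trained on past invoices).
toy_model = lambda inv: min(1.0, inv["amount"] / 50_000)
print(ai_embedded_decision({"amount": 45_000}, toy_model))   # -> escalate_to_human
```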
 
Take, for example, Amazon’s recommendation engine. The machine looks at what you have searched for, learns your preferences, stores them, compares them to those of users with similar preferences, analyzes them, and converts all this data into action by showing you what is most likely to interest you the next time you log in. Every time a user visits Amazon, the system learns. No human intervention, no analysis, and no action outside of the system.
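For a flavour of what such an engine does under the hood, here is a deliberately simplified user-based collaborative filter (the data and function names are hypothetical, and this is not Amazon’s actual algorithm): find users whose histories overlap with yours and suggest items they liked that you have not seen yet.

```python
# Simplified user-based collaborative filtering -- an illustration of the idea only.

# Items each user has viewed or bought (illustrative data).
history = {
    "you":   {"camera", "tripod"},
    "user2": {"camera", "tripod", "camera_bag"},
    "user3": {"camera", "memory_card"},
    "user4": {"novel", "bookmark"},
}

def jaccard(a, b):
    # Similarity between two users = overlap of their item sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, history, top_n=3):
    # Score every item liked by similar users but not yet seen by the target user.
    scores = {}
    for user, items in history.items():
        if user == target:
            continue
        sim = jaccard(history[target], items)
        if sim == 0:
            continue
        for item in items - history[target]:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("you", history))   # -> ['camera_bag', 'memory_card']
```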
 
When I mention AI here, I am using it only in the context of machine learning and narrow AI. We went from automation to robotics based on rule-based logic: you tell the robot what to do and it does it. In the new world, you either give the machine training data based on what humans have done earlier and it learns and decides what to do, or you give the machine an objective, set the parameters, and through millions of iterations it works out the best way to achieve it.
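The two flavours can be sketched roughly as follows (a toy illustration under assumed, made-up data, not any particular product): in the first, the machine fits itself to examples of decisions humans made earlier; in the second, it is given only an objective and searches over many iterations.

```python
import random

# 1) Learning from training data: the machine imitates decisions humans made earlier.
#    Toy example: learn an income cut-off that reproduces past loan approvals.
training = [(20_000, 0), (35_000, 0), (60_000, 1), (90_000, 1)]   # (income, approved?)

def learn_threshold(examples):
    # Pick the cut-off that best matches the human decisions in the data.
    candidates = [inc for inc, _ in examples]
    return max(candidates, key=lambda t: sum((inc >= t) == bool(y) for inc, y in examples))

# 2) Learning by objective: no examples, just a goal and many iterations.
#    Toy example: find x that maximizes a score function by random search.
def optimize(objective, iterations=100_000):
    best_x, best_val = 0.0, float("-inf")
    for _ in range(iterations):
        x = random.uniform(-10, 10)
        if objective(x) > best_val:
            best_x, best_val = x, objective(x)
    return best_x

print(learn_threshold(training))               # cut-off learned from past human decisions
print(round(optimize(lambda x: -(x - 3)**2)))  # ~3, found purely by iterating toward the objective
```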
 
While the former is something humans can comprehend, the latter is somewhat mind-boggling. The reason is that the machine often finds ways to do things that humans have never thought of, or could not do earlier due to “computational constraints”. To take that one step further, the machine finds ways to reach the end goal that humans are not able to comprehend even after the fact; since it has run millions of iterations, humans cannot trace back “how” the machine achieved the objective. Microsoft’s CEO, Satya Nadella, published 10 commandments last week for how humans and machines should work together, and one of them is that AI must be transparent and intelligible rather than just intelligent!
 
So why is all this relevant to a normal enterprise? Is it all buzz like it has been for 30 years or is there something different this time? 
 
The short answer is that after decades of lingering in the corridors of technology, AI has finally made its way into real life. I am not talking about the Google and Tesla cars or the chess and AlphaGo wonders; I am talking about basic functions such as online shopping, marketing, supply chain, and manufacturing. Every time we use Google Search or Photos, log into Facebook, or shop on Amazon, we are encountering machine learning. So this time is different.
 
While technology innovations in cognitive computing, vision, NLP, neural networks, deep learning, and so on happened independently in diverse fields, they have all come together neatly to give AI its biggest boost in decades. Companies like IBM, Google DeepMind, Microsoft, and Amazon have made breakthroughs that were once unthinkable. An amazing phenomenon is sparking rapid incremental innovation: in the past, companies invented alone. Today, not only are innovative companies investing heavily to change the game, but once they do, they are putting their algorithms on open source platforms so that brains across the world can develop them further.
 
This democratization of algorithms and technology, along with crowdsourcing of brainpower and the downward spiral of computing costs has made this time different – it’s a geometric progression rather than an arithmetic one. 
 
All of the above is reality, not an illusion, and we have entered a brave new world that is going to change the way we do business forever. In the next blog, we will explore what this means for the enterprise and what steps can be taken to embrace this change.
 

About the author

Divya Kumar
7 Comments
Nice article. I have a question for the author. The current areas using AI/machine learning are online shopping, searches, production planning, etc., where the cost of failure may not be critical. How long before we allow AI in critical areas like healthcare, where, based on historical data on people, medical science, disease patterns, etc., a machine asks the user/RMP/nurse to enter elementary health data and recommends diagnostic tests and a possible diagnosis?
Venkatesh, thanks for your comment. AI has already entered healthcare, though in a small way. Regulation and criticality make this sector more unique than others. There are hospitals in the US that have already implemented AI such that when symptoms are fed in, a diagnosis is made and options are given along with a level of probability. Also, based on individual body indices, certain medications are recommended. IBM Watson has already made good inroads in the fields of radiology and oncology, and other startups are solving unique healthcare problems. Currently, and as I foresee in the near future, AI will continue to be implemented more widely as an aid to doctors rather than standalone. In five years, though, the healthcare industry could look very different!
Liked the analogy of geometric progression. Stretching the argument further, there will come a day in the near future when IT education has a greater share of finance and a finance-enabled CTO abolishes the role of the CFO.
Divya, thoroughly enjoyed the write-up. ML has brought together cognitive computing and AI in part, and it has tremendous potential. It has been ring-fenced today in the context of value for business, but the possibilities beyond that are also tremendous, particularly in national security and defence, the public sector, healthcare, and education, to name a few. How are governments using this capability, or is it still accessible only to enterprises? How is it being used to accelerate the quality of life in poor and emerging nations and to optimise it in wealthy nations?
Vinod, thanks for your comment, and glad you enjoyed the article! There is a lot of good work being done in domains such as defense and public services, but it is in pockets. Palantir was an early mover for analytics in this space. Now there are companies such as Compology (for waste management) and AbsolutData (for lawsuit recommendations) which are using ML to tackle niche issues. DARPA has also put a lot of R&D funding into ML for defense. Most of the current applications are, however, in traffic management, parking, etc. While these are being used for optimization in wealthy nations, I see no real adoption in the near future in emerging nations, one of the main reasons being that public data itself is patchy in these countries, and ML depends a lot on past data. I do agree with you, though, that ML has the potential to accelerate the quality of life in these countries!
Divya Madam, the article is nice but does not clarify anything about the use of Nano Technology in the coming era. Ok, thanks. Anand Sanglikar
Anand, thanks for your comment! Nano Technology is a full area in itself. While there is some co-mingling of AI and Nano Technology, this seems to be more in hardware than in business applications.
