Some thoughts on emergence, ants and the unintended consequences of AI and ML
Perhaps the biggest risks from AI and ML are the unintended consequences of large numbers of...
Emergence models the formation of new patterns and macro behaviors, created bottom up from large numbers of small independent entities. The behavior of ants is a great example. I have been of the opinion that the biggest risk from AI and ML is the unintended consequences of large numbers of devices interacting in ways we couldn’t predict. Emergence models give us some view of how this might happen and perhaps nature also has some ways of preventing undesirable consequences.
Emergence, ants, and cities
I listened to a radio program on Radio 4 last Sunday and heard a fascinating discussion on emergence models and how they can explain the way complex systems operate when no one is directing or in control. Ants are a great example: every ant is an autonomous agent with very low cognitive powers. They operate randomly, testing things, searching for food, and supporting procreation and the survival of the colony. Once one ant bumps into something interesting, it lays down a marker (a pheromone), and these markers draw other ants to the opportunity. This process repeats to good effect. The discussion then moved on to the mathematically demonstrated power of crowds and their ability to come up with a very accurate answer as the mean of all individual estimates. Steven Johnson has developed these ideas into a series of models and published a book on the subject, Emergence: The Connected Lives of Ants, Brains, Cities and Software. The book came out in 2001, so it is worth revisiting these ideas now, in the age of Artificial Intelligence (AI) and Machine Learning (ML).
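To make the mechanism concrete, here is a minimal toy simulation of the pheromone idea described above: many simple agents choose between two food sites, and each successful trip reinforces a shared marker that biases later choices. The sites, probabilities, and rates are all invented for illustration; it is a sketch of the principle, not a model of real ants.

```python
import random

# Toy model of ant-style emergence: simple agents choose among food sites,
# and successful visits reinforce a shared "pheromone" signal that biases
# later choices. All parameters are illustrative.

SITES = {"A": 0.2, "B": 0.8}                 # probability each site actually yields food
pheromone = {site: 1.0 for site in SITES}    # shared markers, start equal
EVAPORATION = 0.99                           # markers slowly fade
DEPOSIT = 1.0                                # reinforcement after a successful find

def choose_site():
    """Pick a site with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for site, level in pheromone.items():
        cumulative += level
        if r <= cumulative:
            return site
    return site

for step in range(5000):                     # 5,000 individual ant trips
    site = choose_site()
    if random.random() < SITES[site]:        # did this trip find food?
        pheromone[site] += DEPOSIT           # lay down a marker others will follow
    for s in pheromone:                      # markers evaporate a little each step
        pheromone[s] *= EVAPORATION

# No individual ant "knows" which site is better, yet the colony-level signal does.
print({s: round(v, 1) for s, v in pheromone.items()})
```

Run it a few times: the colony reliably converges on the richer site even though each agent follows only local, probabilistic rules.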
The need for (and risk of) speed
While the concepts and mathematics of AI and ML have been around for many years, the big change over the last 10 years, and indeed the last 2–3 years, has been the exponential growth in processing power (driven by Moore's Law) and in high-speed network connectivity. This has led to the rapid growth of devices and systems capable of responding in near real time to events or information. This availability of devices that one can interact with is further fuelling the growth of systems that address specific needs based on limited datasets and relatively straightforward ML algorithms. Bots are a good example: systems with a limited use case deployed to address specific customer service needs or resolve specific bottlenecks.
The downside is that we are already seeing unintended consequences as devices or programs interact faster than the control systems can manage. So-called "flash crashes" in markets, caused by mass algorithmic trading, are a good example.
The world as a brain
So, if these devices and systems are proliferating, can we see something of scale beginning to take control? If we think of devices as individual neurons, we can compare them with the numbers found in different animals:
| Name | Neurons in the brain/whole nervous system (millions) |
Estimates of the number of devices spread across the world vary but Gartner puts it at around 8 billion this year and other models put it as growing to about 75 billion by 2025. So, on these calculations the world will have as many connected devices as a human being has neurons within the next 10 years. Now neurons in a human brain fire 5–50 times a second and most devices are currently not communicating at this sort of level. However, given the exponential growth of processing power and the speed of networks it wouldn’t seem unreasonable to expect the average device to start to operate within this range within the next 10 years.
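As a rough back-of-envelope check on this comparison, the sketch below simply multiplies the figures quoted above. The neuron count of roughly 86 billion is a commonly cited estimate for the human brain, and the idea that devices will eventually "fire" at neuron-like rates is an assumption, not a prediction.

```python
# Back-of-envelope comparison of the figures quoted above.
# All numbers are rough estimates from the text or assumed for illustration.

human_neurons = 86e9          # commonly cited estimate of neurons in a human brain
neuron_rate = (5, 50)         # firings per second, per the text

devices_2025 = 75e9           # projection quoted above
device_rate = (5, 50)         # assumption: devices eventually "fire" at neuron-like rates

brain_events = (human_neurons * neuron_rate[0], human_neurons * neuron_rate[1])
device_events = (devices_2025 * device_rate[0], devices_2025 * device_rate[1])

print(f"Brain:            {brain_events[0]:.1e} to {brain_events[1]:.1e} events/sec")
print(f"Devices in 2025:  {device_events[0]:.1e} to {device_events[1]:.1e} events/sec (assumed rate)")
```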
So the Chinese tradition of celebrating an animal for each year may be prescient: as the level of communication increases, it is worth considering what will emerge.
A butterfly flaps its wings
So, if we want to stop something emerging that we didn't intend, what should we do about it? It is sometimes said that a butterfly flapping its wings in a Brazilian rainforest can trigger an earthquake. However, in reality nature is a great example of a loosely coupled system, enabling different species to emerge in different parts of the world completely independent of other "worlds." There is thus a level of isolation of each ecosystem from the others through the balance of time, space, and rate of change, and through the damping effects of friction and other natural impediments to movement.
The danger we have is that all of our efforts in the internet world are aimed at removing impediments to communication of information and consequently we are actively removing any dampers. This headlong charge means that we are bringing forth the unintended consequences of connectivity at pace without any idea what will emerge.
So how does nature deal with things when they run out of control, such as swarms or overpopulation? There are maybe two extreme models to think of; either the individual entity autonomously deals with it itself (lemmings), or one population wipes out another.
So what does all this mean to us as individuals and as the human race? Perhaps we have a number of options to choose from if we want to avoid unintended consequences from the proliferation of AI/ML-enabled IoT devices:
1. Assume it's not going to happen.
2. Recognize the possibility of emergence at a local or global level but ignore it and hope for the best; it's someone else's problem.
3. Try to build natural dampers into the systems to segment the problem.
4. Build some self-destruct features into devices to ensure self-regulation or a fail-safe.
Oh dear, quite a difficult problem—one to mull on.
Alexa, hack my business model. With AI.
Artificial intelligence has been put to good use for applications with low complexity,...
“What would Amazon do?” If you happen to be caught in an innovation workshop, out of inspiration and at a loss for words, here’s the simple line that may break the inertia. We introduced this mantra years ago as part of our TechnoVision trend series and have been using it ever since with remarkable success.
And it works for Artificial Intelligence as well: a topic every business and IT leader is fascinated by, yet one where fascination is regularly, not just occasionally, followed by silence and procrastination, because it turns out to be difficult to articulate next steps and tangible actions.
Clearly, there are plenty of examples of AI applications that have low complexity and deliver real benefits. Take a look at our recent report Turning AI into Concrete Value, which highlights dozens of them.
Still, having a look at how Amazon deals with the topic is quite instructional in itself. If you want to understand what an “AI-first” enterprise consists of, look no further: Amazon has convincingly infused literally all aspects of its business with AI.
Its recommendation engine becomes more and more spot-on, to the point that it will be able to identify products and services that you really, desperately, want before you know it yourself (psychic pizza, anyone?). Its warehouses are manned by autonomous, AI-driven robots. Its delivery drones completely rely on AI too. Amazon Alexa’s AI-based conversational system is getting better and better at understanding speech, and it does a pretty convincing job at generating it as well.
The Amazon Web Services cloud is built on a brilliantly designed hardware and software infrastructure, including AI that optimizes its performance and prevents it as much as possible from malfunctioning and breaches. Oh, and being the entrepreneurial retailer that they are, Amazon will also sell you all of their AI technology to use for your own purposes.
All of this AI goodness comes together seamlessly in the Amazon Go store, where different AI applications are used to the full extent to enable a literally frictionless shopping experience, without a check-out or anything else that might annoy you in getting what you want.
In retrospect, it's fascinating to see how visionary Amazon has been with its Mechanical Turk "artificial" intelligence. Launched more than 12 years ago, when the industry was still recovering from yet another AI winter, it already offered a catalogue of web services to match supply and demand for "human intelligence" services. Hidden behind the API is a global crowdsourced community of real people, picking up and delivering Human Intelligence Tasks (HITs). In 2005, many of these HITs—such as image recognition, audio and video analysis, sentiment detection, and natural language understanding—indeed could not be effectively delivered by technology. Nowadays, AI—with its powerful smart automation and cognitive capabilities—can routinely deal with them. Lots of low-hanging fruit for the picking.
Makes you wonder what’s next though. With rapid advances in deep learning and reinforcement learning, AI will go way beyond what we would consider HITs. It will be able to combine training data from a variety of sources and in volumes and frequencies that humans simply are not able to absorb; not even with help from statistics, logic, or algorithms.
It’s this exotic, out-of-this-world potential of AI that will provide the material for new products, services, processes, and even hacked business models that we deemed impossible before (sure, let’s use that word “disruptive” one more time—for old times’ sake).
So, AI comes in many different flavors—from automating simple human tasks, to augmenting humans in their work with cognitive capabilities, all the way to exploring the unknown and enabling the unthinkable. No need for awkward silences in your innovation discussions: inspiration is all around and we can learn from the best.
Alexa, hold that thought.
The thrills and spills of working in intelligent automation
Finance & accounting has adjusted to AI. But it’s not just the reputation of AI that...
I once saw a cartoon showing a crowded arena in ancient Rome, with a charioteer in the foreground saying to his friends: “Don’t bother me with those salespeople right now. We have a race to win!” Off to one side stood the sales team—and behind them, just visible through an archway, was the sports car they were selling.
It’s often the case, isn’t it? Someone tries to sell you something, and your first instinct is to assume it’s irrelevant, expensive and a waste of time. We’ve grown to inherently distrust salespeople. We don’t even want to listen to them—they’re a distraction from the here and now.
But there are two things that can make a big difference to people’s attitudes in these circumstances—reputation and track record. If the product or service is positively regarded by a wide audience it’s more likely to be favorably received—and if the team selling it can demonstrate relevant, recent and significant success, so much the better.
I’ve been lucky to find myself in just this kind of environment. I’m Director of Technology Transformation specializing in artificial intelligence (AI) and robotic process automation (RPA) for Capgemini in Düsseldorf. I lead a team that spends much of its time demonstrating the benefits of these technologies for existing clients who are seeing the need for higher automation, and also for organizations that are completely new to us and might only have a first impression.
Do we ever meet people like our Roman charioteer? Not really. Most enterprises these days are familiar with AI and RPA, and the benefits they can bring to finance and accounting (F&A). While we’re occasionally asked to run proof-of-concept (PoC) exercises within a designated functional area of a business, we mostly roll out test implementations that are more deeply and widely embedded.
But it’s not just about the technology—it involves making use of a full methodology that’s tailored to specific business needs, and if that means taking a fresh look at how processes are implemented and can be improved or supported to get the best out of the transformation, most organizations are happy to go along with it. They’re not exactly driving chariots, but they do know how much better it would be to have a sports car.
It’s not just the reputation of AI that gets people interested, nor the cognitive functions that can be replicated at scale in F&A. It’s also the reputation of the business delivering it—a reputation built on a track record. The clients we meet really like to hear about our experiences—they want us to demonstrate the possibilities with real-world examples and want to know that there’s a sound business philosophy behind our work delivering these substantial and practical benefits.
Our track record is founded on the work and knowledge of our individual team members. I really like working with the many skilled people we have across the globe, and it’s great to be able to show the world what we can do as a team. And as our experience grows and the technology evolves, we’re able to enhance the extent of our offer. This is a real buzz.
Fancy getting on board?
When I first started embedding automation into processes, there were just three of us in our BPO IT team—but now there are loads of us, not just in Germany, but all over the world, helping to win deals. I’m really proud of the teams we’ve built—and we need even more people on board.
Right now, we’re looking for people with a tech consulting background not just here in Germany but in France, the Nordic countries, across the rest of Europe, and in the US too. Ideally, you should be able to speak the local language.
What can you expect? From my own experience, you’ll find you’re in the best possible place to see where the technology is taking us—and among the best group of people too. We enjoy what we do, we love the difference we can make to the enterprises we serve, and the technologies we’re engaged in are just the current stage on a long automation journey.
I’m excited to be part of it—and if the sound of it appeals to you too, you’ll find there’s room in our sports car. Why don’t you climb aboard? Click here for more information about working with or for Capgemini.
Mrs. Sprat, the AI. Is it time for us to fatten up our lean thinking?
What is “lean thinking” and is it what we really want? Some of us have accepted a lower...
“Jack Sprat could eat no fat. His wife could eat no lean.” Or so the English nursery rhyme goes. What can this allegorical tale from the seventeenth century teach us about business today, specifically in terms of how we design processes?
Lean Thinking is about ensuring people work in the most efficient way. It focuses on achieving targeted outcomes and cutting out waste in the process. It’s based on the principle that humans have limited capacity to get tasks done (and we are an expensive resource), so we need to streamline how we work.
A taste for more
I think we’ve become so accustomed to this stripped-back diet that maybe we’ve forgotten some of the good things we used to enjoy. We’ve accepted that a lower level of service and a less personal experience is just the modern way. In making our processes lean, we’ve also stripped some of the substance from our outcomes.
But as we start to design processes for robots, rather than humans, perhaps some of those past indulgences can be brought back on the menu. Because robots can do more things, far quicker, and at a fraction of the cost, there could be an opportunity to add more steps into our operations. We could return to the days of richer processes that leave customers more fulfilled, while still adding value to overall business outcomes.
Adopting a balanced diet
As Intelligent Automation grows within the workplace, we’ll start to see a mix of processes emerge: lean processes for humans, more complex processes done by robots, and some that are designed around both. This will allow organizations to invest in new opportunities and explore new business avenues that were previously off-limits due to capacity and resource constraints.
For example, in the IT space, we recently looked at new approaches to cybersecurity that are helping to slow out-of-control spending on threat prevention. This is an area that can’t afford to be lean and where corners can’t be cut—and Intelligent Automation is addressing the unsustainability of human-led strategies. When the expertise of security professionals is combined with intelligent systems that can process thousands of threats in real time, businesses will have more confidence in their security strategy and consider new IoT deployments or mobile investments in a new light.
Feeding it across the business
It will be interesting to see if this change in IT process will also start to happen in traditional business functions.
Take the example of customer collections. It’s quite expensive for humans to monitor customers and make informed collection decisions. The process often relies on paid-for insights from credit agencies—and the level of bad debt that it typically identifies isn’t always worth the outlay.
But with the wealth of data available online and through social media, it’s now possible to analyze risk in-house using robots—giving businesses a quick and affordable way to make sensible judgements on where to focus collection. This could include a split-testing approach, whereby you adopt a tailored strategy based on audience segmentation. You might even drill down to an individual level, analyzing personal data on employment, health and purchase history to estimate the effort required.
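As a purely illustrative sketch of what such an in-house, segmented approach might look like, the snippet below scores a handful of customers on invented attributes and assigns each a collection strategy by segment. The column names, weights, and thresholds are all hypothetical, chosen only to show the shape of the idea.

```python
import pandas as pd

# Illustrative sketch: score customers with a simple in-house risk measure and
# assign a collection strategy per segment. Columns, weights and thresholds are
# invented for illustration only.

customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "days_overdue": [10, 45, 90, 5],
    "payment_history_score": [0.9, 0.6, 0.2, 0.95],  # 1.0 = always paid on time
    "balance": [120.0, 800.0, 2500.0, 60.0],
})

# Crude risk score: more overdue days and a weaker history raise the risk.
customers["risk"] = (
    customers["days_overdue"] / 90 * 0.5
    + (1 - customers["payment_history_score"]) * 0.5
).clip(0, 1)

def strategy(risk: float) -> str:
    """Map a risk score to a tailored collection approach."""
    if risk < 0.3:
        return "friendly reminder (automated email)"
    if risk < 0.6:
        return "phone follow-up"
    return "escalate to collections specialist"

customers["strategy"] = customers["risk"].apply(strategy)
print(customers[["customer_id", "risk", "strategy"]])
```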
Using Intelligent Automation as a complement to established lean processes will very likely change some of the fundamental ways we do business. I think it will help us re-orient our “cost-out” mentality to one that’s more about “value-in.” If robots offer the capacity for us to do more at less cost, then let’s embrace that opportunity. If Mrs Sprat can bring more to the table, then let’s welcome her to the party.
Quality data, a must-have for AI
To fully exploit the possibilities of AI and the promises of truthful predictions and advice,...
With the advent of Artificial Intelligence (AI) we are able to analyse data in more depth than ever before.
AI techniques like Machine Learning (ML) can unravel deeper insights from sets of data than traditional statistical techniques. Big Data both requires and enables these new methods. We can now access large amounts of data through mass storage and high performance computing.
The first two V’s of Big Data—Volume and Variety—have to be met in order to get Machine Learning working. For instance, with large amounts of data about visits to a web shop, you can classify visitors and predict how particular types of them will use the website. This way you can create product recommendations for your visitors, even first-timers.
When using a relatively simple ML technique like decision trees, every decision node needs at least ten occurrences in the training and test data. With tens of thousands of decision nodes, this can easily require millions of data records to cover the complete model. Collecting these vast quantities of data can be challenging.
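As a rough illustration of the web-shop example (not a production recipe), the sketch below trains a decision tree on synthetic visitor data and caps the leaf size so that each node rests on a minimum number of occurrences. All feature names and labels are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Minimal sketch of the web-shop example: classify visitors with a decision
# tree. Features and labels are synthetic placeholders.

rng = np.random.default_rng(42)
n_visits = 100_000                          # large volumes are needed to cover the tree
X = rng.random((n_visits, 4))               # e.g. pages viewed, dwell time, referral, device
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # toy label: did the visitor buy?

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Limiting leaf size is one way to respect the "enough occurrences per node" rule of thumb.
model = DecisionTreeClassifier(min_samples_leaf=10)
model.fit(X_train, y_train)

print("Accuracy:", model.score(X_test, y_test))
print("Leaves in the tree:", model.get_n_leaves())
```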
Garbage In, Garbage Out
But to fully exploit the possibilities of Artificial Intelligence and its promise of truthful and correct predictions and advice, we also need data of the right quality. The old saying about computing is still valid: “Garbage In, Garbage Out.” But why is this becoming more problematic with AI and Machine Learning? With traditional data analysis, when bad data is discovered in our data set, we can exclude it and start over. This is cumbersome but manageable.
“Bad data consists of missing data, outliers, skewed value distributions, redundancy of information, and features not well explicated.” (John Paul Mueller, Luca Massaron)
But such data cleaning cannot be done at scale. With Big Data and Machine Learning, bad data cannot be detected or pulled out of the system that easily. Artificial Intelligence techniques draw conclusions from large masses of data, which may or may not include garbage data. At a certain moment in time, it becomes impossible to determine on which data elements these predictions are based. In this way, Artificial Intelligence becomes black box technology: you don’t know where it’s drawing its conclusions from. “Unlearning” something is nearly impossible—remove one part and the entire model ceases to work. Just like a brain! When bad data is detected, you’re mostly required to restart the whole learning process from the beginning, which is time and cost-intensive.
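Before any learning starts, it is therefore worth running cheap checks for the kinds of bad data listed above. The sketch below shows what such checks might look like in pandas; the input file name and the assumption that the data is tabular and largely numeric are purely illustrative.

```python
import pandas as pd

# Quick pre-training data-quality checks covering the kinds of "bad data"
# mentioned above: missing values, outliers, skewed distributions and
# redundant (highly correlated) features. File and columns are illustrative.

df = pd.read_csv("training_data.csv")      # assumed input file

# 1. Missing data: share of empty values per column
print(df.isna().mean().sort_values(ascending=False))

numeric = df.select_dtypes("number")

# 2. Outliers: count values more than 3 standard deviations from the mean
z_scores = (numeric - numeric.mean()) / numeric.std()
print((z_scores.abs() > 3).sum())

# 3. Skewed distributions: strongly skewed columns may need transformation
print(numeric.skew().sort_values(ascending=False))

# 4. Redundancy: pairs of features that are almost perfectly correlated
corr = numeric.corr().abs()
redundant = [(a, b) for a in corr.columns for b in corr.columns
             if a < b and corr.loc[a, b] > 0.95]
print(redundant)
```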
Questions to be asked
So how do we establish whether our data is of good quality? This is called Veracity, another V of Big Data. Veracity refers to the trustworthiness of the data. Without digging too deep into the subject, there are some basic questions we can ask about the data we’re going to use.
Why & Who: Data from a reputable source typically implies better accuracy than a random online poll. Data is sometimes collected, or even fabricated, to serve an agenda. We should establish the credibility of the data source and the purpose for which the data was collected. Dare to ask whether the data is biased because it was gathered to prove a political, business, ethnic, or ideological point of view.
Where: Almost all data is geographically or culturally biased. Consumer data collected in the United States may not be representative of consumers in Asia. And the cultural differences within Asia are also huge. When we objectively measure data, like temperatures, the interpretation of that data can differ: what is classified as cold or warm? And of course, temperature readings from Paris are not very useful for weather predictions in Mumbai.
When: Validity is also one of the V’s of Big Data. Most data is linked to time in some way: it might be a time series, or a snapshot from a specific period. Out-of-date data should be omitted. But when using AI over a longer time span, data can become old or obsolete along the way. “Machine unlearning” will be needed to get rid of data that is no longer valid.
How: It’s worth getting to know the gist of how the data of interest was collected. Domain knowledge is of the essence here. For instance, when collecting consumer data, we can fall back on the decades old methods of market research. Answers on an ill-constructed questionnaire will certainly render poor quality data.
What: Ultimately, you want to know what your data is about, but before you can do that, you should know what surrounds the numbers. Sometimes humans can detect bad data or outliers because they look illogical, and we should investigate where such strange-looking data comes from. But Artificial Intelligence doesn’t have this form of common sense; it tends to take all data as true.
Taking care of your data
In order to answer these questions in an orderly manner, you need to organize research into the quality of your data. Data quality procedures should of course be in place and used. But more needs to be done. Establishing the veracity of data is part of the process of data (and content) curation.
“Curation is the end-to-end process of creating good data through the identification and formation of resources with long-term value. (…) The goal of data curation in the enterprise is twofold: to ensure compliance and that data can be retrieved for future research or reuse.” (Mary Ann Richardson)
I strongly believe data curation should be expanded beyond the description above. Like a curator in a museum who establishes if an exhibit is genuine or fake, a data curator should do the same for his data. This not only requires data analytics skills, but also domain expertise about the subject from which the data stems.
When you want to use data for Machine Learning and Artificial Intelligence, you have to go beyond the standard criteria of data quality. Yes, these criteria—like availability, usability, and reliability—are still valid. But we should also take veracity into account: is the data truthful? And you need methods and roles to establish the truthfulness of your data.
Special thanks to my colleague Marijn Markus for his valuable input.
Artificial intelligence and the healthcare ecosystem
Artificial Intelligence will be a key enabler for both the transformation and the disruption...
We are on the cusp of truly remarkable changes in the way we think of healthcare and how we deliver healthcare. Artificial Intelligence will be a key enabler for both the transformation and the disruption of the healthcare ecosystem.
Artificial Intelligence (AI) is a hot topic. The technology is emerging from its more traditional academic/back-office orientation and is becoming more mainstream. Many of the leading publications, such as the Economist, the Financial Times, the Wall Street Journal, the New York Times, and the BBC, are publishing AI-related content on a more frequent basis. World governments and leaders are now commenting on the technology—the Chinese government announced a plan to invest billions in the technology, with the goal of moving China to the forefront of AI by 2025. Vladimir Putin recently stated that whoever masters AI will rule the world. Elon Musk believes unregulated AI is a threat to the world. Others, such as Garry Kasparov, have a more positive view of AI’s potential contribution to the world.
The purpose of this blog is to share knowledge and engage in discussion regarding the application of Artificial Intelligence (AI) across the healthcare ecosystem. The first several posts are intended to help establish a baseline understanding of AI. Over time, the focus will shift to topics such as: AI as an enabler of industry transformation; industry incumbents partnering with or acquiring technology assets; new firms and/or new technology entering the market; emerging use cases such as how AI may affect drug discovery and development; and the general evolution and maturation of the technology. Throughout, the blog will remain focused on topics relevant to the healthcare ecosystem.
This blog will focus more on the business application of the technology and not so much on the technology itself. Articles on artificial intelligence and machine learning are often heavily weighted toward math and technology, which is understandable given the subject matter. There will be some technical discussion, but it will be limited in scope and aimed at furthering a high-level understanding. Links will be provided for those who seek a more detailed understanding of the technical aspects.
I am doing this because I am fascinated by the potential of AI across the healthcare ecosystem. We are on the cusp of truly remarkable changes in the way we think of healthcare and how we deliver healthcare. The next couple decades will be really freaking cool.
AI is emerging from its traditional roots primarily in academia and is becoming a mainstream business tool. Historically, AI was the focus of the more technically astute people in academia and/or the financial services community. Wall Street was an early adopter of the technology. Quants have long been valued for their ability to write complex trading algorithms. Approximately 50% of all US equity trading is executed via high frequency trading (algorithms)1.
The technology is now beginning to mature and proliferate across a much wider cross-section of the economy. Organizations that leverage AI will most likely find themselves with a competitive advantage relative to those who fail to understand and leverage the technology. Industry disruption is happening at a more rapid pace. Those that fall behind may find it difficult to close the competitive gap.
A brief history of Artificial Intelligence (AI)
The ideas associated with AI are not new. The concept of non-human objects being programmed to mimic human-like capabilities has been around since the Greeks. Homer wrote of mechanical assistants waiting on the gods at dinner.2
The more modern concept of AI dates to the 1950s. In 1950, Alan Turing proposed what has become known as the Turing test: can a computer communicate well enough with a human to convince the human that it (the computer), too, is human?
The term artificial intelligence was coined in 1956 at a conference at Dartmouth College. The mid-1950s ushered in an era of optimism. Many of that era’s leading scientific minds attended the Dartmouth conference and contributed to the early advancement of the technology.2 Despite the early optimism, achieving artificially intelligent systems proved to be a challenge. Waves of enthusiasm were followed by troughs of disillusionment throughout the 1950s, 60s, 70s, and 80s.
Interest in AI began to pick up again in the late 1990s when IBM’s Deep Blue defeated the Russian chess grandmaster Garry Kasparov. Kasparov detailed this experience in his recently released book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins.” This is a good book and will be added to the recommended reading list. Kasparov believes AI will have a positive impact on society.
In 2011, IBM once again demonstrated the potential of AI when its Watson system won the quiz show Jeopardy. Watson’s success on Jeopardy, coupled with a successful marketing campaign, helped expose the capabilities of AI to a much wider audience. The technology is emerging from its traditional academic orientation and is becoming more accepted by the mainstream.
How do we define Artificial Intelligence (AI) and why the recent resurgence?
There is no single, absolute definition of AI. For the purpose of this blog, AI is defined as the capability of a machine (non-human) to replicate intelligent human behavior and human decision-making capabilities. AI should have the ability to perform a task as well as or better than a human.
Recent advances in technology and access to large amounts of data are enabling the resurgence of AI. Hardware and software are becoming ever more powerful, less expensive, and easier to access. This enables the processing of large data sets quickly and cost-effectively. The amount of data we produce doubles every year; as much data was produced in 2016 as was produced from the beginning of time through 2015. Data is instrumental in helping AI systems learn. The more information available for processing, the more the AI system can learn, and the more accurate it becomes. AI is beginning to mature to the point where it can learn without human interaction. For example, Google’s DeepMind taught itself how to play and win Atari games.4
What is Machine Learning?
Artificial intelligence (AI) consists of numerous subfields, including natural language processing (NLP), reasoning and knowledge representation, perception, and machine learning. Machine learning is one of the more important components of artificial intelligence. It is being used to enhance our everyday experiences via artificially intelligent machines and interfaces. Amazon’s Echo, Apple’s Siri, and Google’s Assistant are a few of the better-known products that leverage machine learning.
Machine learning can be applied to a variety of situations; however, it is often used to predict behavior. Credit scoring is a well-known application of machine learning. When someone applies for a loan, a credit card, or a mortgage, the applicant is generally asked a series of questions. This information is combined with input from the applicant’s credit history and fed into a predictive model. This model generates the credit score.
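A minimal sketch of that flow might look like the following: synthetic applicant features feed a logistic regression that outputs a default probability, which is then mapped onto a familiar score range. The features, data, and the score mapping are invented for illustration and do not reflect any real scoring model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simplified sketch of a credit-scoring model: application answers plus credit
# history feed a predictive model that outputs a default probability, which is
# then mapped to a score. Features and data are synthetic placeholders.

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.normal(45_000, 15_000, n),   # declared income
    rng.integers(0, 5, n),           # past late payments (from credit history)
    rng.integers(1, 30, n),          # years of credit history
])
y = (X[:, 1] >= 3).astype(int)       # toy label: defaulted or not

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

applicant = [[38_000, 1, 7]]                          # one new application
default_prob = model.predict_proba(applicant)[0, 1]
credit_score = int(300 + (1 - default_prob) * 550)    # map to a familiar 300-850 range
print(default_prob, credit_score)
```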
Target marketing is another frequent application of machine learning. Marketing departments will leverage insights based on a series of attributes such as: age, web-browsing history, income, purchase history, location, etc. to predict if the person may be interested in a product or not. This prediction can be used to decide whether or not to extend a promotional offer. Likewise, target marketing can be used to determine how much a person may be willing to pay for a particular product or service. Personalized pricing strategies can be implemented via this insight.
Machine learning has numerous use cases across the healthcare ecosystem. For example, the technology can be applied in preventative health programs. Machine learning can be used to assess a person’s –omic (genome, proteome, metabolome, microbiome) data along with other data sources such as the person’s electronic medical record to predict the likelihood of developing diseases such as diabetes or heart disease. Individuals who demonstrate a high propensity for the disease can be addressed with proactive intervention—e.g., the implementation of lifestyle changes or the prescription of preventative therapies.
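As an illustrative sketch (not a clinical model), the snippet below combines a few synthetic -omic-style and medical-record features, estimates a disease risk, and flags high-risk individuals for follow-up. Every feature, label, and threshold here is a placeholder chosen only to show the workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch of the preventative-health use case: combine a few
# -omic-style measurements with medical-record features to estimate disease
# risk, then flag high-risk individuals for proactive intervention.
# All features, data and thresholds are synthetic placeholders.

rng = np.random.default_rng(1)
n = 5_000
X = np.column_stack([
    rng.normal(0, 1, n),              # e.g. a polygenic risk score
    rng.normal(25, 5, n),             # BMI from the medical record
    rng.integers(0, 2, n),            # family history flag
    rng.normal(100, 15, n),           # fasting glucose
])
y = ((X[:, 0] > 0.5) & (X[:, 3] > 110)).astype(int)   # toy "developed diabetes" label

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

risk = model.predict_proba(X)[:, 1]
high_risk = np.where(risk > 0.7)[0]   # candidates for lifestyle or preventative therapy
print(f"{len(high_risk)} of {n} individuals flagged for proactive intervention")
```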
Thank you for taking the time to read the first of what will be many posts on this topic. I hope you found the content informative. Future instalments will include topics such as—why use artificial intelligence and machine learning; an overview of the technology and models; an overview of the leading AI companies and what they are working on; ethical considerations; how to get started with AI, etc. Please reach out to me at any time if you have any questions, comments, or would like to participate in future posts.
1. Chaparro, Frank. “CREDIT SUISSE: Here’s how high-frequency trading has changed the stock market.” Business Insider, March 20, 2017.
2. Buchanan, Bruce. “A (Very) Brief History of Artificial Intelligence.” AI Magazine, Volume 26, Number 4, 2006.
3. Moor, James. “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.” AI Magazine, Volume 27, Number 4, 2006.
4. Helbing, Dirk; Frey, Bruno; Gigerenzer, Gerd; Hafen, Ernst; Hagner, Michael; Hofstetter, Yvonne; van den Hoven, Jeroen; Zicari, Roberto; Zwitter, Andrej. “Will Democracy Survive Big Data and Artificial Intelligence?” Scientific American, February 2017.
Novartis’s new chief sets sights on “productivity revolution”
Within a decade, Artificial Intelligence in R&D will become the norm. The leadership of...
Within the next decade, Artificial Intelligence (AI) in R&D will become the norm, so the leadership of Life Sciences companies needs to start framing the future.
Novartis recently announced the appointment of Dr. Vas Narasimhan as the new CEO. He assumes the CEO responsibilities effective February 2018.
His appointment is a recognition that the industry is entering a period of transition and that Novartis is readying itself for the future. We are seeing the emergence of the next generation of industry leadership, one better suited to address the forthcoming technology-based disruption. Dr. Narasimhan views the future of Novartis as a “…medicines and data science company—centered on innovation and access…”
There are a couple of things that are notable about the appointment of Dr. Narasimhan. The first is his background: he comes to the role with strong credentials in medical science and is currently the global head of drug development for Novartis. This is in contrast to his predecessor, who has a commercial background.
The second notable aspect of his appointment is his stated intent to make data science a core capability of the firm’s business. Dr. Narasimhan is looking to data science to enable a “…productivity revolution…” at the Swiss drugmaker. While the overall numbers and time duration are subject to some debate, drug discovery and development has historically been a long, expensive process with a high rate of failure. Anything that can be done to shorten the time it takes to bring life-saving therapies to market is a benefit to society.
Within the next decade, Artificial Intelligence (AI) in R&D will become the norm. The leadership of the Life Sciences companies need to start framing the future—what will the world look like in a decade and what do we need to do to prepare? How will AI impact the industry? If machines are able to read everything, then what does this mean for the R&D community? How will a firm like Novartis capitalize on these new capabilities to fuel innovation? If leadership fails to grasp the significance of these changes and set the appropriate strategy, the risk of disintermediation is significant. Google, and others, are hard at work in this space.
The following statement is adapted from Dr. Bertalan Mesko; I modified a quote of his to apply it to AI in drug discovery and development: Artificial Intelligence will not replace clinical researchers; however, the pharmaceutical companies that apply Artificial Intelligence to augment their clinical researchers (and across the broader ecosystem) will replace those that don’t.