We are on the cusp of truly remarkable changes in how we think about and deliver healthcare. Artificial Intelligence will be a key enabler of both the transformation and the disruption of the healthcare ecosystem.
Artificial Intelligence (AI) is a hot topic. The technology is emerging from its traditional academic and back-office orientation and is becoming mainstream. Leading publications such as the Economist, the Financial Times, the Wall Street Journal, the New York Times, and the BBC are publishing AI-related content more frequently. World governments and leaders are now commenting on the technology: the Chinese government announced a plan to invest billions, with the goal of moving China to the forefront of AI by 2025. Vladimir Putin recently stated that whoever masters AI will rule the world. Elon Musk believes unregulated AI is a threat to the world. Others, such as Garry Kasparov, take a more positive view of AI's potential contribution to the world.
The purpose of this blog is to share knowledge and foster discussion regarding the application of Artificial Intelligence (AI) across the healthcare ecosystem. The first several posts are intended to establish a baseline understanding of AI. Over time, the focus will shift to topics such as AI as an enabler of industry transformation; industry incumbents partnering with or acquiring technology assets; new firms and new technology entering the market; emerging use cases, such as how AI may affect drug discovery and development; and the general evolution and maturation of the technology. Throughout, the blog will stay anchored to topics relevant to the healthcare ecosystem.
This blog will focus on the business application of the technology rather than the technology itself. Articles on artificial intelligence and machine learning are often heavily weighted toward math and technology, which is understandable given the subject matter. There will be some technical discussion here, but it will be limited in scope and aimed at furthering high-level understanding. Links will be provided for readers who want a deeper treatment of the technical details.
I am doing this because I am fascinated by the potential of AI across the healthcare ecosystem. We are entering a period of truly remarkable change in how we think about and deliver healthcare. The next couple of decades will be really freaking cool.
AI is emerging from its traditional roots in academia and becoming a mainstream business tool. Historically, AI was the focus of the more technically astute corners of academia and the financial services community. Wall Street was an early adopter: quants have long been valued for their ability to write complex trading algorithms, and approximately 50% of all US equity trading is now executed via high-frequency trading algorithms.1
The technology is now beginning to mature and proliferate across a much wider cross-section of the economy. Organizations that leverage AI will likely gain a competitive advantage over those that fail to understand and adopt the technology. Industry disruption is happening at an ever-faster pace, and those that fall behind may find it difficult to close the competitive gap.
A brief history of Artificial Intelligence (AI)
The ideas associated with AI are not new. The concept of non-human objects programmed to mimic human capabilities has been around since the ancient Greeks: Homer wrote of mechanical assistants waiting on the gods at dinner.2
The more modern concept of AI dates to the 1950s. In 1950, Alan Turing proposed what has become known as the Turing test: can a computer communicate well enough with a human to convince the human that it, too, is human?
The term artificial intelligence was coined in 1956 at a conference at Dartmouth College. The mid-1950s ushered in an era of optimism, and many of that era's leading scientific minds attended the Dartmouth conference and contributed to the early advancement of the field.3 Despite the early optimism, building artificially intelligent systems proved to be a challenge, and waves of enthusiasm were followed by troughs of disillusionment throughout the 1950s, 60s, 70s, and 80s.
Interest in AI began to pick up again in the late 1990s, when IBM's Deep Blue defeated the Russian chess grandmaster Garry Kasparov. Kasparov detailed this experience in his recently released book, "Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins." It is a good book and will be added to the recommended reading list; Kasparov believes AI will have a positive impact on society.
In 2011, IBM once again demonstrated the potential of AI when its Watson system won the quiz show Jeopardy! Watson's success, coupled with a successful marketing campaign, has helped expose the capabilities of AI to a much wider audience. The technology is emerging from its traditional academic orientation and gaining mainstream acceptance.
How do we define Artificial Intelligence (AI) and why the recent resurgence?
There is no single, absolute definition of AI. For the purposes of this blog, AI is defined as the capability of a machine (non-human) to replicate intelligent human behavior and human decision-making. An AI system should be able to perform a given task as well as or better than a human.
Recent advances in technology and access to large amounts of data are enabling the resurgence of AI. Hardware and software are becoming increasingly powerful, less expensive, and easier to access, which allows large data sets to be processed quickly and cost-effectively. The amount of data we produce doubles roughly every year; by some estimates, as much data was produced in 2016 as from the beginning of time through 2015. Data is instrumental in helping AI systems learn: the more information available for processing, the more an AI system can learn, and the more accurate it becomes. AI is also beginning to mature to the point where it can learn without human interaction. For example, Google's DeepMind taught itself how to play and win Atari games.4
What is Machine Learning?
Artificial intelligence (AI) consists of numerous subfields, including natural language processing (NLP), reasoning and knowledge representation, perception, and machine learning. Machine learning is one of the most important components of artificial intelligence and is being used to enhance our everyday experiences via artificially intelligent machines and interfaces. Amazon's Echo, Apple's Siri, and Google's Assistant are a few of the better-known products that leverage machine learning.
Machine learning can be applied to a variety of situations; however, it is often used to predict behavior. Credit scoring is a well-known application. When someone applies for a loan, a credit card, or a mortgage, the applicant is generally asked a series of questions. The answers are combined with input from the applicant's credit history and fed into a predictive model, which generates the credit score.
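To make the idea concrete, a credit-scoring model can be sketched as a logistic function over a few applicant attributes. Everything below is illustrative: the feature names, weights, and score scale are invented for this example, and a real model would learn its weights from historical repayment data.

```python
import math

# Illustrative weights only; a production model would learn these from
# historical repayment outcomes, and would use far more features.
WEIGHTS = {"income_k": 0.004, "years_employed": 0.12, "prior_defaults": -1.5}
BIAS = -0.5

def repayment_probability(applicant: dict) -> float:
    """Logistic model: squash a weighted sum of attributes into (0, 1)."""
    z = BIAS + sum(w * applicant[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def credit_score(applicant: dict) -> int:
    """Map predicted repayment probability onto a familiar 300-850 scale."""
    return round(300 + 550 * repayment_probability(applicant))

applicant = {"income_k": 80, "years_employed": 5, "prior_defaults": 0}
print(credit_score(applicant))
```

The key point is the shape of the pipeline, not the numbers: applicant attributes in, a learned weighting in the middle, a single predictive score out.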
Targeted marketing is another frequent application of machine learning. Marketing departments leverage attributes such as age, web-browsing history, income, purchase history, and location to predict whether a person may be interested in a product. That prediction can drive the decision to extend a promotional offer. Likewise, targeted marketing can be used to estimate how much a person may be willing to pay for a particular product or service, enabling personalized pricing strategies.
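Once a model has produced a purchase probability, the offer decision itself reduces to a simple expected-value check. The figures below are hypothetical; only the comparison logic is the point.

```python
def should_send_offer(p_purchase: float, margin: float, offer_cost: float) -> bool:
    """Extend the promotion only when expected profit exceeds the offer's cost."""
    return p_purchase * margin > offer_cost

# A predicted 30% buyer of a $20-margin product justifies a $2 mailer...
print(should_send_offer(0.30, 20.0, 2.0))  # True: 0.30 * 20 = 6 > 2
# ...while a predicted 5% buyer does not.
print(should_send_offer(0.05, 20.0, 2.0))  # False: 0.05 * 20 = 1 < 2
```

The same pattern extends to personalized pricing: sweep candidate prices, predict the purchase probability at each, and choose the price that maximizes expected revenue.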
Machine learning has numerous use cases across the healthcare ecosystem. For example, the technology can be applied in preventative health programs. Machine learning can be used to assess a person’s –omic (genome, proteome, metabolome, microbiome) data along with other data sources such as the person’s electronic medical record to predict the likelihood of developing diseases such as diabetes or heart disease. Individuals who demonstrate a high propensity for the disease can be addressed with proactive intervention—e.g., the implementation of lifestyle changes or the prescription of preventative therapies.
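A minimal sketch of such a screening step, assuming a handful of hypothetical features drawn from a patient's record and -omic data. The feature names, weights, and threshold are illustrative only, not clinically validated.

```python
import math

# Illustrative features and weights; a real model would be trained on
# medical-record and -omic data and validated clinically before use.
WEIGHTS = {"bmi": 0.05, "fasting_glucose": 0.03, "genetic_risk_score": 0.8}
BIAS = -6.0

def disease_risk(patient: dict) -> float:
    """Predicted probability of developing the disease (logistic model)."""
    z = BIAS + sum(w * patient[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_intervention(patient: dict, threshold: float = 0.3) -> bool:
    """Route high-propensity patients into a proactive-care program."""
    return disease_risk(patient) >= threshold

high = {"bmi": 32, "fasting_glucose": 110, "genetic_risk_score": 1.5}
low = {"bmi": 22, "fasting_glucose": 85, "genetic_risk_score": 0.2}
print(flag_for_intervention(high), flag_for_intervention(low))  # True False
```

The threshold is a policy choice, not a modeling output: lowering it catches more at-risk patients at the cost of more unnecessary interventions.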
Thank you for taking the time to read the first of what will be many posts on this topic. I hope you found the content informative. Future installments will cover topics such as why to use artificial intelligence and machine learning; an overview of the technology and models; an overview of the leading AI companies and what they are working on; ethical considerations; and how to get started with AI. Please reach out to me at any time if you have questions, comments, or would like to participate in future posts.
1. Chaparro, Frank. "CREDIT SUISSE: Here's how high-frequency trading has changed the stock market." Business Insider, March 20, 2017.
2. Buchanan, Bruce. "A (Very) Brief History of Artificial Intelligence." AI Magazine, Volume 26, Number 4, 2006.
3. Moor, James. "The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years." AI Magazine, Volume 27, Number 4, 2006.
4. Helbing, Dirk; Frey, Bruno; Gigerenzer, Gerd; Hafen, Ernst; Hagner, Michael; Hofstetter, Yvonne; van den Hoven, Jeroen; Zicari, Roberto; Zwitter, Andrej. "Will Democracy Survive Big Data and Artificial Intelligence?" Scientific American, February 2017.