The first blog in this series provided an overview of Artificial Intelligence (AI). The intent of this blog is to provide insight into why we would want to use AI.

AI is increasingly being used to augment human capabilities and automate repeatable processes. This is because AI has the following attributes:

Cost effectiveness—once fully developed and trained, an AI system is often far less costly than employing the number of humans needed to do the same task. The AI can process millions of records in near real-time, and the marginal cost of additional processing may approach zero. Future blog posts will discuss the social cost in greater depth. Suffice it to say that we need to start planning for the inevitable economic dislocation that will occur as the technology matures. It simply isn't feasible to leave large portions of the work-eligible population without employment. That model does not work.

Ability to rapidly process large data sets—AI can support near real-time collection and processing of data. Data from wearables, fitness trackers, and other Internet of Medical Things (IoMT) devices can be collected to provide longitudinal data on a person's health. In the not-too-distant future, we will be able to ask our AI-enabled digital assistants for an update on our health and for recommendations on diet, lifestyle, and therapy-adherence protocols. Enormous amounts of data will be collected and processed. Each of us will become our own data center.

Repeatability and reliability—once the AI has been configured, tested, and refined, it will produce repeatable and reliable results. Human conditions such as fatigue, stress, and emotion do not factor into the algorithm's performance. AI will be used to help alleviate the looming doctor shortage and improve quality of care. Radiologists are early adopters of the technology: AI can rigorously analyze increasingly complex medical imaging, provide initial recommendations, and help reduce human error.1

Unbiased analysis—in an ideal world, the AI is free of bias. Some will argue that AI, unlike humans, does not display deliberate prejudicial bias. There is much debate on this topic, as bias may be inherent in the model's design—i.e., human opinions expressed via mathematical models. The book "Weapons of Math Destruction" outlines many of the dangers of relying on algorithms and predictive models to solve problems. The author argues that predictive models, despite their reputation for impartiality, often reflect a goal and/or ideology. The concerns associated with bias will be a topic of future blogs.

Let's take a look at an illustrative example of how we could leverage AI to address one of society's more pressing concerns—the rise of chronic disease. Chronic diseases such as type 2 diabetes, heart disease, stroke, arthritis, cancer, obesity, and asthma consume a disproportionate share of healthcare costs. The Centers for Disease Control and Prevention (CDC) estimates that eighty-six percent of the US's ~$3 trillion in annual healthcare expenditures is for people with chronic and mental health conditions2…let that sink in for a moment. Two things in that sentence should raise concern: first, the US spends approximately 20% (and growing) of its GDP on healthcare, which is not sustainable; and second, the majority of this spend goes to diseases that are, to some degree, preventable.

The ability to proactively identify those who are at a high risk for a chronic disease and implement a plan to prevent the onset of the disease not only constitutes a significant win for the individual but also reduces the overall economic burden to the health system. Technology may very well be the disruptive factor that enables the much-needed transformation of the healthcare ecosystem.

It is not possible to screen the entire population for a predisposition to developing a chronic disease. A more realistic approach is to develop a predictive model that can identify those most at risk—i.e., estimate the likelihood of an individual developing a chronic disease. Those identified as "at risk" can be proactively engaged for a more thorough medical exam and a review of their lifestyle. The appropriate level of intervention can be determined once the person completes the initial interaction with their physician.
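The triage step described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation: the patient IDs and risk scores are hard-coded placeholders standing in for a real predictive model's output, and the 0.30 cutoff is an arbitrary assumption a real program would set clinically.

```python
# Illustrative sketch: triage individuals by predicted chronic-disease risk.
# The risk scores would come from a trained predictive model; here they are
# hard-coded placeholders, and the 0.30 cutoff is an arbitrary assumption.
RISK_THRESHOLD = 0.30

def triage(patients: dict[str, float]) -> list[str]:
    """Return patient IDs whose predicted risk meets or exceeds the
    threshold, highest risk first, for proactive physician outreach."""
    at_risk = {pid: p for pid, p in patients.items() if p >= RISK_THRESHOLD}
    return sorted(at_risk, key=at_risk.get, reverse=True)

scores = {"patient-001": 0.12, "patient-002": 0.47, "patient-003": 0.31}
print(triage(scores))  # → ['patient-002', 'patient-003']
```

Everyone below the cutoff is simply left alone; only the flagged subset is invited in for the more thorough exam described above.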

The starting point for this endeavor is data. We are going through a paradigm shift as it relates to data. Historically, data was a by-product of technology: it was created and stored. We are now in an era in which data is a resource, and more data is a blessing. Data is the starting point and the enabler for AI.

The initial model will need historical patient data (from 5 to 10 years prior) from a random sample of people who were disease-free at the start of the time horizon. The data elements will include items such as gender, weight, age, race, marital status, location, previous medical history, and smoking and alcohol consumption patterns and history. The historical data is compared against what happened to the sample population over the defined time horizon. Sophisticated algorithms in a predictive model will use this data to find relationships that correlate with an individual developing a chronic disease or remaining disease-free. The AI will become more accurate over time as it "learns" what constitutes a correct outcome. The longitudinal data associated with patient outcomes can be aggregated and analyzed to further refine and improve treatment options.
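The training process described above can be sketched with a simple logistic-regression model in scikit-learn. Everything here is an assumption for illustration: the records are randomly generated rather than real patient data, and only four of the data elements mentioned above (age, weight, smoking, alcohol use) are encoded as features.

```python
# Illustrative sketch only: trains a logistic-regression model on synthetic
# "historical" patient records to predict chronic-disease onset over the
# defined time horizon. All data here is randomly generated, not real.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Simplified numeric encodings of a few of the data elements named above;
# a real model would use far more (race, location, medical history, etc.).
age = rng.integers(25, 75, n)
weight_kg = rng.normal(80, 15, n)
smoker = rng.integers(0, 2, n)
alcohol_units_week = rng.poisson(5, n)
X = np.column_stack([age, weight_kg, smoker, alcohol_units_week])

# Synthetic outcome labels ("developed the disease" = 1): risk rises with
# age, weight, and smoking, standing in for real longitudinal outcomes.
logit = 0.05 * (age - 50) + 0.03 * (weight_kg - 80) + 1.2 * smoker - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Hold out a test set to check that the learned relationships generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The `predict_proba` output of such a model is the per-individual likelihood that feeds the triage step; retraining on newly observed outcomes is how the model "learns" over time.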

There are obvious privacy concerns associated with this approach. For example, not many of us would appreciate being contacted out of the blue by someone informing us that we are at high risk for a disease and should make an appointment with our primary care physician. Instead, the screening should be part of a routine physical exam: the primary care physician discusses the merits of the program, and the patient agrees to participate. Health insurance providers should participate in the program and offer economic incentives to those who take part in preventative health programs.

In summary, Artificial Intelligence has the potential to do many tasks that humans do, and, potentially, to do them just as well (or better), faster, and cheaper. The technology is maturing at a rate faster than many anticipated. For example, Google DeepMind's AlphaGo not only beat the world's best players at the strategy game Go, but a successor version taught itself the game without direct human engagement. Radiology departments are adopting AI and, in many cases, the technology performs on par with trained clinicians. AI is being applied to early cancer detection, and the initial results are promising.3 The relationship between humans and technology is at an inflection point. We need to think through what this means for society and how best to prepare for the forthcoming transformation.

  1. Nicolin Hainc et al., "The Bright, Artificial Intelligence-Augmented Future of Neuroimaging Reading," Frontiers in Neurology, September 2017.
  2. Centers for Disease Control and Prevention, Chronic Disease Overview.
  3. Sy Mukherjee, "The New AI Can Detect a Deadly Cancer Early With 86% Accuracy," Fortune, October 2017.