
Healthcare Analytics



At the population level, AI’s ability to mine outcomes from millions of patient clinical records promises to enable finer-grained, more personalized diagnosis and treatment. Automated discovery of genotype-phenotype connections will also become possible as full, once-in-a-lifetime genome sequencing becomes routine for each patient.

A related (and perhaps earlier) capability will be to find “patients like mine” as a way to inform treatment decisions based on analysis of a similar cohort. Traditional and non-traditional healthcare data, augmented by social platforms, may lead to the emergence of self-defined subpopulations, each managed by a surrounding ecosystem of healthcare providers augmented with automated recommendation and monitoring systems.
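To make the “patients like mine” idea concrete, below is a minimal sketch of cohort retrieval: each patient is encoded as a numeric feature vector and k-nearest-neighbor search retrieves the most similar patients. The feature set (age, blood pressure, HbA1c, diagnosis flags) and the data are hypothetical placeholders for illustration, not a real clinical schema or a description of any deployed system.

```python
# Illustrative sketch: "patients like mine" via k-nearest-neighbor search.
# Features and data are hypothetical placeholders, not a real clinical schema.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

# Each row is one patient: [age, systolic_bp, hba1c, has_diabetes, has_ckd]
patients = np.array([
    [54, 142, 7.9, 1, 0],
    [61, 138, 8.4, 1, 1],
    [47, 120, 5.2, 0, 0],
    [58, 150, 8.1, 1, 0],
    [33, 118, 5.0, 0, 0],
], dtype=float)

# Standardize so no single feature (e.g., age) dominates the distance metric.
scaler = StandardScaler()
X = scaler.fit_transform(patients)

# Index the cohort; Euclidean distance in standardized feature space.
index = NearestNeighbors(n_neighbors=3).fit(X)

# Query: a new patient whose treatment decision we want to inform.
query = scaler.transform([[56, 145, 8.0, 1, 0]])
distances, neighbor_ids = index.kneighbors(query)
print("Most similar patients (row indices):", neighbor_ids[0])
```

In practice the outcomes observed in the retrieved cohort, rather than the neighbors themselves, would drive the treatment recommendation; the sketch covers only the similarity-search step.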

These developments have the potential to radically transform healthcare delivery as medical procedures and lifetime clinical records for hundreds of millions of individuals become available. Similarly, the automated capture of personal environmental data from wearable devices will expand personalized medicine. These activities are becoming more commercially viable as vendors discover ways to engage large populations (e.g., Sharecare)[63] and then to create population-scale data that can be mined to produce individualized analytics and recommendations.

Unfortunately, many barriers to rapid innovation remain. The FDA has been slow to approve innovative diagnostic software, in part because the cost/benefit tradeoffs of these systems are not yet well understood. HIPAA (Health Insurance Portability and Accountability Act) requirements for protecting patient privacy create legal barriers to the flow of patient data to applications that could utilize AI technologies. Unanticipated negative effects of approved drugs could be detected routinely, sooner, and more reliably than they are today, but mobile apps that analyze drug interactions may be blocked from pulling the necessary information from patient records. More generally, AI research and innovation in healthcare are hampered by the lack of widely accepted methods and standards for privacy protection. If regulators (principally the FDA) recognize that effective post-marketing reporting is a dependable hedge against some safety risks, faster initial approval of new treatments and interventions may become possible.

Automated image interpretation has also been a promising subject of study for decades. Progress on interpreting large archives of weakly labeled images, such as photo archives scraped from the web, has been explosive. At first blush, it is surprising that there has not been a similar revolution in the interpretation of medical images. Most medical imaging modalities (CT, MR, ultrasound) are inherently digital, the images are all archived, and there are large, established companies with internal R&D (e.g., Siemens, Philips, GE) devoted to imaging.

But several barriers have limited progress to date. Most hospital image archives have gone digital only within the past decade. More importantly, the problem in medicine is not to recognize what is in the image (is this a liver or a kidney?), but rather to make a fine-grained judgment about it (does the slightly darker smudge in the liver suggest a potentially cancerous tumor?). Strict regulations govern these high-stakes judgments. Even with state-of-the-art technologies, a radiologist will still likely have to look at the images, so the value proposition is not yet compelling. Also, healthcare regulations preclude easy federation of data across institutions. Thus, only very large integrated-care organizations, such as Kaiser Permanente, are able to attack these problems.

Still, automated/augmented image interpretation has started to gain momentum. The next fifteen years will probably not bring fully automated radiology, but initial forays into image “triage” or second-level checking will likely improve the speed and cost-effectiveness of medical imaging. When coupled with electronic patient record systems, large-scale machine learning techniques could be applied to medical image data. For example, multiple major healthcare systems have archives of millions of patient scans, each of which has an associated radiological report, and most of which have an associated patient record. Already, papers are appearing in the literature showing that deep neural networks can be trained to produce basic radiological findings with high reliability by training on this data.[64]
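As a rough illustration of the kind of pipeline this literature describes, the sketch below trains a small convolutional network to flag a single binary finding from image/label pairs. The architecture, tensor shapes, and random stand-in data are assumptions made for illustration; real systems, including those surveyed in [64], use far larger networks, transfer learning, and labels mined from the associated radiological reports.

```python
# Minimal sketch of training a CNN to flag one binary radiological finding.
# Random tensors stand in for (scan, report-derived label) pairs.
import torch
import torch.nn as nn

class TriageNet(nn.Module):
    """Tiny CNN: grayscale scan in, logit for one finding out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1)).squeeze(1)

# Synthetic stand-in data: 64 "scans" of 64x64 pixels with binary labels.
images = torch.randn(64, 1, 64, 64)
labels = torch.randint(0, 2, (64,)).float()

model = TriageNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()   # binary: finding present / absent

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In a triage setting, the trained model’s output probability would be thresholded to prioritize scans for radiologist review rather than to issue a diagnosis, consistent with the second-level-checking role described above.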

 


[63] Sharecare, accessed August 1, 2016, https://www.sharecare.com.

[64] Hoo-Chang Shin, Holger R. Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M. Summers, “Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning,” IEEE Transactions on Medical Imaging 35, no. 5 (2016): 1285–1298.
