
SQ9. What are the most promising opportunities for AI?


This section describes active areas of AI research and innovation poised to make beneficial impact in the near term. Elsewhere, we address potential pitfalls to avoid in the same time frame.

We focus on two kinds of opportunities. The first involves AI that augments human capabilities. Such systems can be very valuable in situations where humans and AI have complementary strengths. For example, an AI system may be able to synthesize large amounts of clinical data to identify a set of treatments for a particular patient along with likely side effects; a human clinician may be able to work with the patient to identify which option best fits their lifestyle and goals, and to explore creative ways of mitigating side effects that were not part of the AI’s design space. The second category involves situations in which AI software can function autonomously. For example, an AI system may automatically convert entries from handwritten forms into structured fields and text in a database.

AI for Augmentation

Whether it’s finding patterns in chemical interactions that lead to a new drug discovery or helping public defenders identify the most appropriate strategies to pursue, there are many ways in which AI can augment the capabilities of people. Indeed, given that AI systems and humans have complementary strengths, one might hope that, combined, they can accomplish more than either alone. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data (say if missing data fields are actually a signal for important, unmeasured information for some subgroup represented in the data), working with difficult-to-fully-quantify objectives, and identifying creative actions beyond what the AI may be programmed to consider. 

Unfortunately, several recent studies have shown that human-AI teams often do not currently outperform AI-only teams.1 Still, there is a growing body of work on methods to create more effective human-AI collaboration in both the AI and human-computer-interaction communities. As this work matures, we see several near-term opportunities for AI to improve human capabilities and vice versa. We describe three major categories of such opportunities below.

Drawing Insights

There are many applications in which AI-assisted insights are beginning to break new ground and have large potential for the future. In chemical informatics and drug discovery,2 AI assistance is helping identify molecules worth synthesizing in a wet lab. In the energy sector, patterns identified by AI algorithms are helping achieve greater efficiencies.3 By first training a model to be very good at making predictions, and then working to understand why those predictions are so good, we have deepened our scientific understanding of everything from disease4 to earthquake dynamics.5 AI-based tools will continue to help companies and governments identify bottlenecks in their operations.6

AI can also assist with discovery. While human experts can always analyze an AI system from the outside—for example, dissecting the innovative moves made by AlphaGo7—new developments in interpretable AI and visualization of AI are making it much easier for humans to inspect AI programs more deeply and use them to explicitly organize information in a way that facilitates a human expert putting the pieces together and drawing insights. For example, analysis of how an AI system internally organizes words (known as an embedding or a semantic representation) is helping us understand and visualize the way words like “awful” (formerly “inspiring awe”) undergo semantic shifts over time.8
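The embedding-based analysis described above boils down to comparing a word's vector, trained on corpora from different eras, against vectors for reference words. A minimal sketch, using invented three-dimensional vectors purely for illustration (real projects such as HistWords train embeddings per decade on historical text):

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings of "awful" in two eras, plus reference words.
# All numbers are made up to illustrate the comparison, not real data.
awful_1850 = [0.9, 0.1, 0.0]
awful_2000 = [0.1, 0.9, 0.2]
majestic   = [1.0, 0.0, 0.1]
terrible   = [0.0, 1.0, 0.1]

# The 1850s vector sits nearer "majestic"; the 2000s vector nearer
# "terrible" -- the semantic shift shows up as a change in neighbors.
print(cosine(awful_1850, majestic), cosine(awful_1850, terrible))
print(cosine(awful_2000, majestic), cosine(awful_2000, terrible))
```

In real analyses the embeddings are learned from decade-by-decade corpora and aligned to a common space before such comparisons are meaningful.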

Assisting with Decision-Making 

The second major area of opportunity for augmentation is for AI-based methods to assist with decision-making. For example, while a clinician may be well-equipped to talk through the side effects of different drug choices, they may be less well-equipped to identify a potentially dangerous interaction based on information deeply embedded in the patient’s past history.  A human driver may be well-equipped for making major route decisions and watching for certain hazards, while an AI driver might be better at keeping the vehicle in lane and watching for sudden changes in traffic flow. Ongoing research seeks to determine how to divide up tasks between the human user and the AI system, as well as how to manage the interaction between the human and the AI software. In particular, it is becoming increasingly clear that all stakeholders need to be involved in the design of such AI assistants to produce a human-AI team that outperforms either alone. Human users must understand the AI system and its limitations to trust and use it appropriately, and AI system designers must understand the context in which the system will be used (for example, a busy clinician may not have time to check whether a recommendation is safe or fair at the bedside).

There are several ways in which AI approaches can assist with decision-making. One is by summarizing data too complex for a person to easily absorb. In oncology and other medical fields, recent research in AI-assisted summarization promises to one day help clinicians see the most important information and patterns about a patient.9 Summarization is also now being used or actively considered in fields where large amounts of text must be read and analyzed—whether it is following news media, doing financial research, conducting search engine optimization, or analyzing contracts, patents, or legal documents. Summarization and interactive chat technologies have great potential to help ensure that people get a healthy breadth of information on a topic, and to help break filter bubbles rather than make them—by providing a range of information, or at least an awareness of the biases in one’s social-media or news feeds. Nascent progress in highly realistic (but currently not reliable or accurate) text generation, such as GPT-3, may also make these interactions more natural.

Neural-network language models called “transformers,” consisting of billions of parameters and trained on billions of words of text, can be used for grammar correction, creative writing, and generating realistic text. In this example, the transformer-based GPT-3 produces a natural-sounding product description for a nonexistent, and likely physically impossible, toy. From: https://www.gwern.net/docs/www/justpaste.it/b5a07c7305ca81b0de2d324f094…
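The extractive style of summarization mentioned above can be illustrated with a very small sketch (not any specific product or the methods of the cited work): score each sentence by how frequent its non-stopword terms are across the whole document, then keep the top-scoring sentences.

```python
from collections import Counter

# Tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "for", "on"}

def summarize(text, n_sentences=1):
    # Split into sentences and count document-wide term frequencies.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = [w.lower().strip(",;") for s in sentences for w in s.split()]
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        # Average document frequency of the sentence's content words.
        terms = [w.lower().strip(",;") for w in sentence.split()]
        terms = [w for w in terms if w not in STOPWORDS]
        return sum(freq[w] for w in terms) / max(len(terms), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    return ". ".join(ranked[:n_sentences]) + "."

doc = ("The patient reported chest pain. Chest pain worsened on exertion. "
       "The weather was mild.")
print(summarize(doc))  # keeps a "chest pain" sentence, drops the aside
```

Frequency scoring like this only selects existing sentences; the neural abstractive systems discussed in the text instead generate new summary text, which is what makes them both more fluent and harder to verify.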


In addition to summarization, another aid for managing complex information is assisting with making predictions about future outcomes (sometimes also called forecasting or risk scoring). An AI system may be able to reason about the long-term effects of a decision, and so be able to recommend that a doctor ask for a particular set of tests, give a particular treatment, and so on, to improve long-term outcomes. AI-based early warning systems are becoming much more commonly used in health settings,10 agriculture,11 and more broadly. Conveying the likelihood of an unwanted outcome—be it a patient going into shock or an impending equipment failure—can help prevent a larger catastrophe. AI systems may also help predict the effects of different climate-change-mitigation or pandemic-management strategies and search among possible options to highlight those that are most promising.12 These forecasting systems typically have limits and biases based on the data they were trained on, and there is also potential for misuse if people overtrust their predictions or if the decisions impact people directly.
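Risk scoring of the kind described above often reduces to a learned weighted combination of features passed through a logistic function. A minimal sketch with invented weights and hypothetical features (real systems learn these from data, and inherit that data's limits and biases, as noted above):

```python
import math

def risk_score(features, weights, bias):
    # Logistic (sigmoid) of a weighted sum -> probability in (0, 1).
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [heart_rate_deviation, lactate, age_decades].
# The weights and bias are invented for illustration only.
weights = [0.8, 1.2, 0.3]
bias = -4.0

stable        = risk_score([0.2, 0.5, 5.0], weights, bias)
deteriorating = risk_score([2.5, 3.0, 5.0], weights, bias)
print(stable, deteriorating)  # low risk vs. high risk
```

The arithmetic is trivial; the hard problems are the ones the text raises — whether the training data represents the population the score is applied to, and whether users treat the output as a calibrated probability rather than a verdict.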

AI systems increasingly have the capacity to help people work more efficiently. In the public sector, relatively small staffs must often process large numbers of public comments, complaints, potential cases for a public defender, requests for corruption investigations, and more, and AI methods can assist in triaging the incoming information. On education platforms, AI systems can provide initial hints to students and flag struggling students to educators. In medicine, smartphone-based pathology processing can allow for common diagnoses to be made without trained pathologists, which is especially crucial in low-resource settings.13 Language processing tools can help identify mental health concerns at both a population and individual scale and enable, for example, forum moderators to identify individuals in need of rapid intervention.14 AI systems can assist both clinicians and patients in deciding when a clinic visit is needed and provide personalized prevention and wellness assistance in the meantime.15 More broadly, chatbots and other AI programs can help streamline business operations, from financial to legal. As always, while these efficiencies have the potential to expand the positive impact of low-resourced, beneficial organizations, such systems can also result in harm when designed or integrated in ways that do not fully and ethically consider their sociotechnical context.16

Finally, AI systems can help human decision-making by leveling the playing field of information and resources. Especially as AI becomes more applicable in lower-data regimes, predictions can increase economic efficiency of everyday users by helping people and businesses find relevant opportunities, goods, and services, matching producers and consumers. These uses go beyond major platforms and electronic marketplaces; kidney exchanges,17 for example, save many lives, combinatorial markets allow goods to be allocated fairly, and AI-based algorithms help select representative populations for citizen-based policy-making meetings.18

AI as Assistant

A final major area of opportunity for augmentation is for AI to provide basic assistance during a task. For example, we are already starting to see AI programs that can process and translate text from a photograph,19 allowing travelers to read signage and menus. Improved translation tools will facilitate human interactions across cultures. Projects that once required a person to have highly specialized knowledge or copious amounts of time—from fixing your sink to creating a diabetes-friendly meal—may become accessible to more people by allowing them to search for task- and context-specific expertise (such as adapting a tutorial video to apply to one's unique sink configuration).

Basic AI assistance has the potential to allow individuals to make more and better decisions for themselves. In the area of health, the combination of sensor data and AI analysis is poised to help promote a range of behavior changes, including exercise, weight loss, stress management, and dental hygiene.20 Automated systems are already in use for blood-glucose control21 and providing ways to monitor and coordinate care at home. AI-based tools can allow people with various disabilities—such as limitations in vision, hearing, fine and gross mobility, and memory—to live more independently and participate in more activities. Many of these programs can run on smartphones, further improving accessibility.22

Simple AI assistance can also help with safety and security. We are starting to see lane-keeping assistance and other reaction-support features in cars.23 Interestingly, while self-driving cars have been slow in development and adoption, the level of automation and assistance in “normal” cars is increasing—perhaps because drivers value their (shared) autonomy with the car, and because AI-based assistance takes certain loads off the driver while letting them handle more nuanced tasks (such as waving or making eye contact with a pedestrian to signal they can cross). AI-assisted surgery tools are helping make movements in surgical operations more precise.24 AI-assisted systems flag potential email-based phishing attacks to be checked by the user, and others monitor transactions to identify everything from fraud to cyberattacks.

AI Agents on Their Own

Finally, there is a range of opportunities for AI agents acting largely autonomously or not in close connection with humans. AlphaFold25 recently made significant progress toward solving the protein-folding problem, and we can expect to see significantly more AI-based automation in chemistry and biology. AI systems now help convert handwritten forms into structured fields, are starting to automate medical billing, and have been used recently to scale efforts to monitor habitat biodiversity.26 They may also help monitor and adjust operations in fields like clean energy, logistics, and communications; track and communicate health information to the public; and create smart cities that make more efficient use of public services, better manage traffic, and reduce climate impacts. The pandemic saw a rise in fully AI-based education tools that attempt to teach without a human educator in the loop, and there is a great deal of potential for AI to assist with virtual reality scenarios for training, such as practicing how to perform a surgery or carry out disaster relief. We expect many mundane and potentially dangerous tasks to be taken over by AI systems in the near future.

In most cases, the main factors holding back these applications are not in the algorithms themselves, but in the collection and organization of appropriate data and the effective integration of these algorithms into their broader sociotechnical systems. For example, without significant human-engineered knowledge, existing machine-learning algorithms struggle to generalize to “out of sample” examples that differ significantly from the data on which they were trained. Thus, if AlphaFold, trained on natural proteins, fails on synthetic proteins, or if a handwriting-recognition system trained on printed letters fails on cursive letters, these failures are due to the way the algorithms were trained, not the algorithms per se. (Consider the willingness of big tech companies like Facebook, Google, and Microsoft to share their deep learning algorithms and their reluctance to share the data they use in-house.)
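The out-of-sample failure mode described above can be made concrete with a toy example: a line fit on a narrow input range looks accurate in-sample but fails badly when the true relationship is nonlinear outside that range. All data here is synthetic.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

true_fn = lambda x: x ** 2            # the real (nonlinear) relationship
xs = [0.0, 0.25, 0.5, 0.75, 1.0]      # training inputs: a narrow range
a, b = fit_line(xs, [true_fn(x) for x in xs])

# In-sample the linear model looks fine; far outside the training
# range its predictions are wildly wrong -- not because least squares
# is broken, but because of what it was trained on.
in_sample_err  = abs((a * 0.5 + b) - true_fn(0.5))
out_sample_err = abs((a * 10.0 + b) - true_fn(10.0))
print(in_sample_err, out_sample_err)
```

The same logic applies to the examples in the text: a model trained on printed letters has simply never seen the part of input space where cursive lives.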

Similarly, most AI-based decision-making systems require a formal specification of a reward or cost function, and eliciting and translating such preferences from multiple stakeholders remains a challenging task. For example, an AI controller managing a wind farm has to manage “standard” objectives such as maximizing energy produced and minimizing maintenance costs, but also harder-to-quantify preferences such as reducing ecological impact and noise to neighbors. As with the issue of insufficient relevant data, a failure of the AI in these cases is due to the way it was trained—on incorrect goals—rather than the algorithm itself. 
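The specification problem in the wind-farm example can be sketched as a weighted-sum (scalarized) reward. Every weight and state value below is invented for illustration; in practice the hard part is eliciting the weights from stakeholders, not the arithmetic.

```python
def wind_farm_reward(state, weights):
    # Each term is a hypothetical normalized measure in [0, 1].
    # Energy is rewarded; everything else is a penalized cost.
    return (weights["energy"]      * state["energy_output"]
          - weights["maintenance"] * state["maintenance_cost"]
          - weights["ecology"]     * state["bird_strike_risk"]
          - weights["noise"]       * state["neighbor_noise"])

# Invented weights: whoever sets these is encoding stakeholder trade-offs.
weights = {"energy": 1.0, "maintenance": 0.5, "ecology": 0.8, "noise": 0.3}

full_throttle = {"energy_output": 1.0, "maintenance_cost": 0.7,
                 "bird_strike_risk": 0.6, "neighbor_noise": 0.8}
curtailed     = {"energy_output": 0.7, "maintenance_cost": 0.4,
                 "bird_strike_risk": 0.1, "neighbor_noise": 0.2}

# With these weights the curtailed operating point scores higher: the
# controller optimizes whatever trade-offs the weights happen to encode.
print(wind_farm_reward(full_throttle, weights),
      wind_farm_reward(curtailed, weights))
```

If the ecology or noise weights were set too low — or those terms omitted entirely because they are hard to measure — the controller would dutifully maximize the misspecified objective, which is exactly the "trained on incorrect goals" failure the text describes.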

In some cases, further challenges to the integration of AI systems come in the form of legal or economic incentives; for example, malpractice and compliance concerns have limited the penetration of AI in the health sector. Regulatory frameworks for safe, responsible innovation will be needed to achieve these possible near-term beneficial impacts.


[1] Ben Green, Yiling Chen, "Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments," Proceedings of the Conference on Fairness, Accountability, and Transparency, January 2019, https://dl.acm.org/doi/10.1145/3287560.3287563; Vivian Lai and Chenhao Tan, "On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection," Proceedings of the Conference on Fairness, Accountability, and Transparency, January 2019, https://arxiv.org/pdf/1811.07901v4.pdf; Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna Wallach, "Manipulating and Measuring Model Interpretability," https://arxiv.org/abs/1802.07810v5; Maia Jacobs, Melanie F. Pradier, Thomas H. McCoy Jr., Roy H. Perlis, Finale Doshi-Velez, and Krzysztof Z. Gajos, "How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection," Translational Psychiatry, Volume 11, 2021.  https://www.nature.com/articles/s41398-021-01224-x

[2]  https://techcrunch.com/2016/08/08/machine-learning-and-molecular-tinder-may-change-the-game-for-oled-screens/

[3] Xin Chen, Guannan Qu, Yujie Tang, Steven Low, and Na Li, "Reinforcement Learning for Decision-Making and Control in Power Systems: Tutorial, Review, and Vision."  https://arxiv.org/abs/2102.01168v4 

[4] Edward Korot, Nikolas Pontikos, Xiaoxuan Liu, Siegfried K. Wagner, Livia Faes, Josef Huemer, Konstantinos Balaskas, Alastair K. Denniston, Anthony Khawaja, and Pearse A. Keane, "Predicting sex from retinal fundus photographs using automated deep learning," Scientific Reports, Volume 11, 2021. https://www.nature.com/articles/s41598-021-89743-x 

[5] https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf

[6] https://hbr.org/2018/01/artificial-intelligence-for-the-real-world

[7] https://www.usgo.org/news/category/go-news/computer-goai/masteralphago-commentaries/ 

[8] https://nlp.stanford.edu/projects/histwords/ 

[9] Rimma Pivovarov and Noémie Elhadad, “Automated methods for the summarization of electronic health records,” Journal of the American Medical Informatics Association, vol. 22, no. 5, 2015 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986665/

[10] Mark P. Sendak, Joshua D’Arcy, Sehj Kashyap, Michael Gao, Marshall Nichols, Kristin Corey, William Ratliff, Suresh Balu, "A path for translation of machine learning products into healthcare delivery," EMJ Innov., 2020 https://www.emjreviews.com/innovations/article/a-path-for-translation-of-machine-learning-products-into-healthcare-delivery/

[11] https://africa.ai4d.ai/blog/computer-vision-tomato/ 

[12] Nicolas Hoertel, Martin Blachier, Carlos Blanco, Mark Olfson, Marc Massetti, Marina Sánchez Rico, Frédéric Limosin, and Henri Leleu, "A stochastic agent-based model of the SARS-CoV-2 epidemic in France," Nature Medicine, volume 26, 2020 https://doi.org/10.1038/s41591-020-1001-6

[13] https://www.cnn.com/2018/12/14/health/ugandas-first-ai-lab-develops-malaria-detection-app-intl  

[14] Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine, “Natural Language Processing of Social Media as Screening for Suicide Risk.” Biomedical informatics insights, volume 10, August 2018 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6111391/

[15] For example, care coordination work at Vector, Babylon Health.

[16] https://www.humanrightspulse.com/mastercontentblog/dutch-court-finds-syri-algorithm-violates-human-rights-norms-in-landmark-case;  Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin's Press, 2018 https://virginia-eubanks.com/ 

[17] John P. Dickerson and Tuomas Sandholm, "FutureMatch: Combining Human Value Judgments and Machine Learning to Match in Dynamic Environments," 2015 https://www.cs.cmu.edu/~sandholm/futurematch.aaai15.pdf 

[18] Bailey Flanigan, Paul Gölz, Anupam Gupta, and Ariel Procaccia, "Neutralizing Self-Selection Bias in Sampling for Sortition," 2020 https://arxiv.org/abs/2006.10498v2

[19] https://support.google.com/translate/answer/6142483

[20] Shuang Li, Alexandra M. Psihogios, Elise R. McKelvey, Annisa Ahmed, Mashfiqui Rabbi, and Susan Murphy, "Microrandomized trials for promoting engagement in mobile health data collection: Adolescent/young adult oral chemotherapy adherence as an example," Current Opinion in Systems Biology, Volume 21, June 2020 https://www.sciencedirect.com/science/article/abs/pii/S245231002030007X; Predrag Klasnja, Shawna Smith,  Nicholas J Seewald, Andy Lee, Kelly Hall, Brook Luers, Eric B. Hekler, and Susan A. Murphy, "Efficacy of Contextually Tailored Suggestions for Physical Activity: A Micro-randomized Optimization Trial of HeartSteps," Ann Behav Med., May 2019  https://pubmed.ncbi.nlm.nih.gov/30192907/; Ryan P. Westergaard, Andrew Genz, Kristen Panico, Pamela J. Surkan, Jeanne Keruly, Heidi E. Hutton, Larry W. Chang, and Gregory D. Kirk, "Acceptability of a mobile health intervention to enhance HIV care coordination for patients with substance use disorders," Addict Sci Clin Pract., April 2017 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5405459/

[21] https://www.diabetes.org/newsroom/press-releases/2020/next-generation-automatic-insulin-delivery-system-improves-glycemic-control-in-people-with-t1d

[22] https://www.everydaysight.com/best-apps-for-visually-impaired/, https://learningenglish.voanews.com/a/google-s-lookout-app-helps-blind-people-experience-the-world/4827509.html, https://blog.ai-media.tv/blog/6-awesome-accessibility-apps, https://blog.google/outreach-initiatives/accessibility/supporting-people-disabilities-be-my-eyes-and-phone-support-now-available/, https://play.google.com/store/apps/details?id=com.righthear

[23] Rebecca Spicer, Amin Vahabaghaie, George Bahouth, Ludwig Drees, Robert Martinez von Bülow and Peter Baur, "Field effectiveness evaluation of advanced driver assistance systems," Traffic Injury Prevention, Volume 19, 2018 https://www.tandfonline.com/doi/abs/10.1080/15389588.2018.1527030 

[24] Daniel A. Hashimoto, MD, MS, Guy Rosman, PhD, Daniela Rus, PhD, and Ozanan R. Meireles, MD, FACS, “Artificial Intelligence in Surgery: Promises and Perils,” Annals of Surgery, vol. 268, no. 1, 2018 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5995666/

[25] https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology 

[26] https://blog.nationalgeographic.org/2018/06/26/the-california-academy-of-sciences-and-national-geographic-society-join-forces-to-enhance-global-wildlife-observation-network-inaturalist/

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.