
Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report



Preface

In the five years since we released the first AI100 report, much has been written about the state of artificial intelligence and its influences on society. Nonetheless, AI100 remains unique in its combination of two key features. First, it is written by a Study Panel of core multi-disciplinary researchers in the field—experts who create artificial intelligence algorithms or study their influence on society as their main professional activity, and who have been doing so for many years. The authors are firmly rooted within the field of AI and provide an “insider’s” perspective. Second, it is a longitudinal study, with reports by such Study Panels planned once every five years, for at least one hundred years.

This report, the second in that planned series of studies, is being released five years after the first report.  Published on September 1, 2016, the first report was covered widely in the popular press and is known to have influenced discussions on governmental advisory boards and workshops in multiple countries. It has also been used in a variety of artificial intelligence curricula.   

In preparation for the second Study Panel, the Standing Committee commissioned two study-workshops held in 2019. These workshops were a response to feedback on the first AI100 report. Through them, the Standing Committee aimed to engage a broader, multidisciplinary community of scholars and stakeholders in its next study. The goal of the workshops was to draw on the expertise of computer scientists and engineers, scholars in the social sciences and humanities (including anthropologists, economists, historians, media scholars, philosophers, psychologists, and sociologists), law and public policy experts, and representatives from business management as well as the private and public sectors. 

An expanded Standing Committee, with more expertise in ethics and the social sciences, formulated a call and actively encouraged proposals from the international community of AI researchers and practitioners with a broad representation of fields relevant to AI’s impact in the world. By convening scholars from the full range of disciplines that rigorously explore ethical and societal impacts of technologies, the study-workshops were aimed at expanding and deepening discussions of the ways in which AI shapes the hopes, concerns, and realities of people’s lives, and, relatedly, the ethical and societal challenges that AI raises.

After circulating a call for proposals and reviewing more than 100 submissions from around the world, two workshops were selected for funding. One, on “Prediction in Practice,” studied the use of AI-driven predictions of human behavior, such as how likely a borrower is to eventually repay a loan, in settings where data and cognitive modeling fail to account for the social dimensions that shape people’s decision-making. The other, on “Coding Caring,” studied the challenges and opportunities of incorporating AI technologies into the process of humans caring for one another and the role that gender and labor relationships play in addressing the pressing need for innovation in healthcare.

Drawing on the findings from these study-workshops, as well as the annual AI Index report, a project spun off from AI100, the Standing Committee defined a charge for the Study Panel in the summer of 2019 and recruited Michael Littman, Professor of Computer Science at Brown University, to chair the panel. The 17-member Study Panel, composed of a diverse set of experts in AI from academia and industry research laboratories, representing computer science, engineering, law, political science, policy, sociology, and economics, was launched in mid-fall 2020. In addition to representing a range of scholarly specialties, the panel had diverse representation in terms of home geographic regions, genders, and career stages. As readers may note in the report, convening this diverse, interdisciplinary set of scholarly experts allowed varying perspectives, rarely brought together, to be reconciled and juxtaposed within the report. The accomplishment of the Study Panel is all the more impressive considering the inability to meet face-to-face during the ongoing COVID-19 global pandemic.

Whereas the first study report focused explicitly on the impact of AI in North American cities, we sought for the 2021 study to explore in greater depth the impact that AI is having on people and societies worldwide. AI is being deployed in applications that touch people’s lives in a critical and personal way (for example, through loan approvals, criminal sentencing, healthcare, emotional care, and influential recommendations in multiple realms). Since these society-facing applications will influence people’s relationship with AI technologies, as well as have far-reaching socioeconomic implications, we entitled the charge “Permeating Influences of AI in Everyday Life: Hopes, Concerns, and Directions.”

In addition to including topics directly related to these society-facing applications that resulted from the aforementioned workshops (as represented by WQ1 and WQ2 of this report), the Standing Committee carefully considered how to launch the Study Panel for the second report in such a way that it would set a precedent for all subsequent Study Panels, emphasizing the unique longitudinal aspect of the AI100 study. Motivated by the notion that it takes at least two points to define a line, as noted by AI100 founder Eric Horvitz, the Study Panel charge suggested a set of “standing questions” for the Study Panel to consider that could potentially then be answered by future Study Panels as well (as represented by SQ1-SQ12 of this report) and included a call to reflect on the first report, indicating what has changed and what remains the same (as represented by Annotations on the Previous Report).

While the scope of this charge was broader than the inaugural panel’s focus on typical North American cities, it still does not—and cannot—cover all aspects of AI's influences on society, leaving some topics to be introduced or explored further in subsequent reports. In particular, military applications were outside the scope of the first report, and while military AI is used as a key case study in one section of this report (SQ7), vigorous discussions of the subject continue worldwide and opinions are evolving.

Like the first report, the second report aspires to address four audiences. For the general public, it aims to provide an accessible, scientifically and technologically accurate portrayal of the current state of AI and its potential. For industry, the report describes relevant technologies and legal and ethical challenges, and may help guide resource allocation. The report is also directed to local, national, and international governments to help them better plan for AI in governance. Finally, the report can help AI researchers, as well as their institutions and funders, to set priorities and consider the economic, ethical, and legal issues raised by AI research and its applications.

The Standing Committee is grateful to the members of the Study Panel for investing their expertise, perspectives, and significant time into the creation of this report. We are also appreciative of the contributions of the leaders and participants of the workshops mentioned above, as well as past members of the Standing Committee, whose contributions were invaluable in setting the stage for this report: Yoav Shoham and Deirdre Mulligan (2015-2017); Tom Mitchell and Alan Mackworth (2015-2018); Milind Tambe (2018); and Eric Horvitz (2015-2019). We especially thank Professor Michael Littman for agreeing to serve as chair of the study and for his wise, skillful, and dedicated leadership of the panel, its discussions, and creation of the report.

Standing Committee of the One Hundred Year Study of Artificial Intelligence

  • Peter Stone, The University of Texas at Austin and Sony AI, Chair
  • Russ Altman, Stanford University
  • Erik Brynjolfsson, Stanford University
  • Vincent Conitzer, Duke University and University of Oxford
  • Mary L. Gray, Microsoft Research
  • Barbara Grosz, Harvard University
  • Ayanna Howard, The Ohio State University
  • Percy Liang, Stanford University
  • Patrick Lin, California Polytechnic State University
  • James Manyika, McKinsey & Company
  • Sheila McIlraith, University of Toronto
  • Liz Sonenberg, The University of Melbourne
  • Judy Wajcman, London School of Economics and The Alan Turing Institute

Organizers of the Preparatory Workshops

  • Thomas Arnold, Tufts University
  • Solon Barocas, Microsoft Research  
  • Miranda Bogen, Upturn
  • Morgan Currie, The University of Edinburgh
  • Andrew Elder, The University of Edinburgh
  • Jessica Feldman, American University of Paris
  • Johannes Himmelreich, Syracuse University
  • Jon Kleinberg, Cornell University
  • Karen Levy, Cornell University
  • Fay Niker, Cornell Tech
  • Helen Nissenbaum, Cornell Tech
  • David G. Robinson, Upturn

About AI100

The following history of AI100 first appeared in the preface of the 2016 report.

“The One Hundred Year Study on Artificial Intelligence (AI100), launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. It considers the science, engineering, and deployment of AI-enabled computing systems. As its core activity, the Standing Committee that oversees the One Hundred Year Study forms a Study Panel every five years to assess the current state of AI. The Study Panel reviews AI’s progress in the years following the immediately prior report, envisions the potential advances that lie ahead, and describes the technical and societal challenges and opportunities these advances raise, including in such arenas as ethics, economics, and the design of systems compatible with human cognition. The overarching purpose of the One Hundred Year Study's periodic expert review is to provide a collected and connected set of reflections about AI and its influences as the field advances. The studies are expected to develop syntheses and assessments that provide expert-informed guidance for directions in AI research, development, and systems design, as well as programs and policies to help ensure that these systems broadly benefit individuals and society.

“The One Hundred Year Study is modeled on an earlier effort informally known as the “AAAI Asilomar Study.” During 2008-2009, the then president of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz, assembled a group of AI experts from multiple institutions and areas of the field, along with scholars of cognitive science, philosophy, and law. Working in distributed subgroups, the participants addressed near-term AI developments, long-term possibilities, and legal and ethical concerns, and then came together in a three-day meeting at Asilomar to share and discuss their findings. A short written report on the intensive meeting discussions, amplified by the participants’ subsequent discussions with other colleagues, generated widespread interest and debate in the field and beyond.

“The impact of the Asilomar meeting, and important advances in AI that included AI algorithms and technologies starting to enter daily life around the globe, spurred the idea of a long-term recurring study of AI and its influence on people and society. The One Hundred Year Study was subsequently endowed at a university to enable extended deep thought and cross-disciplinary scholarly investigations that could inspire innovation and provide intelligent advice to government agencies and industry.”

Study Panel Members

  • Michael L. Littman, Brown University, Chair
  • Ifeoma Ajunwa, University of North Carolina
  • Guy Berger, LinkedIn
  • Craig Boutilier, Google
  • Morgan Currie, The University of Edinburgh
  • Finale Doshi-Velez, Harvard University
  • Gillian Hadfield, University of Toronto
  • Michael C. Horowitz, University of Pennsylvania
  • Charles Isbell, Georgia Institute of Technology
  • Hiroaki Kitano, Sony AI, and Okinawa Institute of Science and Technology Graduate University
  • Karen Levy, Cornell University
  • Terah Lyons
  • Melanie Mitchell, Santa Fe Institute and Portland State University
  • Julie Shah, Massachusetts Institute of Technology
  • Steven Sloman, Brown University
  • Shannon Vallor, The University of Edinburgh
  • Toby Walsh, University of New South Wales

Acknowledgements

The panel would like to thank the members of the Standing Committee, listed in the preface. In addition to setting the direction and vision for the report, they provided detailed and truly insightful comments on everything from tone to individual word choices that made the report clearer and, we hope, more valuable in the long run. Standing Committee chair Peter Stone deserves particular credit for his remarkable ability to negotiate clever solutions to the not-uncommon differences of opinion that inevitably arise when a diverse set of contributors is brought together by design. We are grateful to Hillary Rosner, who, with the help of Philip Higgs and Stephen Miller, provided exceptionally valuable writing and editorial support. Jacqueline Tran and Russ Altman were deeply and adeptly involved in coordinating the efforts of both the Standing Committee and the Study Panel. We also thank colleagues who provided pointers, feedback, and other insights that helped inform our treatment of technical issues such as the use of AI in healthcare. They include: Nigam Shah, Jenna Wiens, Mark Sendak, Michael Sjoding, Jim Fackler, Mert Sabuncu, Leo Celi, Susan Murphy, Dan Lizotte, Jacqueline Kueper, Ravninder Bahniwal, Leora Horwitz, Russ Greiner, Philip Resnik, Manal Siddiqui, Jennifer Rizk, Martin Wattenberg, Na Li, Weiwei Pan, Carlos Carpi, Yiling Chen, and Sarah Rathnam.


Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.