
Annotations on the 2016 Report


The Study Panel added annotations to the 2016 report to highlight places where comparisons between the two reports were illuminating. The online version of the report includes these annotations as hover text and links. This section briefly summarizes some of the high-level comparisons.

The 2016 panel focused its report on the North American context and considered discussion of defense and military applications of AI to be out of scope. The 2021 report includes several comments about how AI is recognized as influencing and being influenced by geopolitical and international security considerations. These include the observation that emerging regulatory approaches vary across regions, and that AI and the “AI race” are viewed as issues of national security. In addition, sentiments regarding military applications of AI influence the research directions of some scientists. In the US, the Department of Defense’s investments in technology have helped spur some of the most important advances in AI in the past five years. The report includes recommendations for further investments in the creation of federal data and computational resources. Finally, numerous countries have begun to develop national AI policy strategies and to legislate the use of AI technologies.

The 2016 report listed a set of challenges associated with the future of AI, including: developing safe and reliable hardware for transportation and service robotics; challenges in “smoothly” interfacing AI with human experts; gaining public trust; overcoming fears of marginalization of humans with respect to employment and the workplace; and diminishing interpersonal interactions (for example, through new entertainment technologies). In contrast, the 2021 report details social and ethical concerns and harms related to the conception, implementation, and deployment of AI technologies. Many of the descriptions of potential harms foreshadowed in 2016 were counterbalanced by abstract descriptions of a different, more positive possible future that could be achieved “through careful deployment.” The 2021 report makes clear that many concerns and harms are no longer hypothetical, and are not merely technological problems to be solved. The shift in views regarding social and ethical concerns can be seen in the use of terms like “bias,” “privacy,” “security,” and “safety” in the 2016 and 2021 reports, as highlighted in the annotations.

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.