Annotations on the 2016 Report
The Study Panel added annotations to the 2016 report to highlight places where comparisons between the two reports were illuminating. The online version of the report includes these annotations as hover text and links. This section briefly summarizes some of the high-level comparisons.
The 2016 panel focused its report on the North American context and considered discussion of defense and military applications of AI to be out of scope. The 2021 report includes several comments about how AI is recognized as influencing, and being influenced by, geopolitical and international security considerations. These include the observations that emerging regulatory approaches vary across regions and that AI and the “AI race” are viewed as issues of national security. In addition, sentiments regarding military applications of AI influence the research directions of some scientists. In the US, the defense department’s investments in technology have helped spur some of the most important advances in AI in the past five years. The report includes recommendations for further investments in the creation of federal data and computational resources. Finally, numerous countries have begun to develop national AI policy strategies and to legislate the use of AI technologies.
The 2016 report listed a set of challenges associated with the future of AI, including: developing safe and reliable hardware for transportation and service robotics; challenges in “smoothly” interfacing AI with human experts; gaining public trust; overcoming fears of the marginalization of humans with respect to employment and the workplace; and diminishing interpersonal interactions (for example, through new entertainment technologies). In contrast, the 2021 report details social and ethical concerns and harms related to the conception, implementation, and deployment of AI technologies. Many of the descriptions of potential harms foreshadowed in 2016 were counterbalanced with abstract descriptions of a different, more positive possible future that could be achieved “through careful deployment.” The 2021 report makes clear that many concerns and harms are no longer hypothetical, and are not merely technological problems to be solved. The shift in views regarding social and ethical concerns can be seen in the use of terms like “bias,” “privacy,” “security,” and “safety” in the 2016 and 2021 reports, as highlighted in the annotations.