
SQ6. How has public sentiment towards AI evolved, and how should we inform/educate the public?


Over the last few years, AI and related topics have gained traction in the zeitgeist. Research has tracked this trend quantitatively: In the 2017–18 session of the US Congress, for instance, mentions of AI-related words were more than ten times higher than in previous sessions. Similarly, web searches for “machine learning” have roughly doubled since 2016.[1]

Since the first AI100 report, public understanding has broadened and become more nuanced, starting to move beyond Terminator and robot overlord fears. Overtaking these concerns for many members of the public are the prospects of social and economic impacts from AI, especially negative impacts such as discriminatory effects, economic inequality, and labor replacement or exploitation, topics discussed extensively in the prior report. In addition, there is a great deal of discussion around the increasing risks of surveillance as well as how AI and social media are involved in manipulation and disinformation. These discussions contribute to growing concerns among researchers, policymakers, and governments about establishing and securing public trust in AI (and in science and technology more broadly). As a result, a wide range of initiatives have focused on the goal of promoting “trustworthy AI.”[2]

Public awareness of the benefits of AI skews toward anticipated breakthroughs in fields such as health and transportation, with comparatively less awareness of the existing benefits of AI embedded in applications already in widespread use. While popular culture regularly references the AI capabilities of virtual assistants such as Alexa, there is less public awareness of AI’s involvement in everyday technologies commonly accessed without the mediation of an artificial agent: benefits such as speech-to-text, language translation, interactive GPS navigation, web search, and spam filtering. Media coverage of AI often distorts and exaggerates its potential at both the positive and negative extremes, but it has helped to raise public awareness of legitimate concerns about AI bias, lack of transparency and accountability, and the potential of AI-driven automation to contribute to rising inequality.

There are notable regional and gender differences in public sentiment about AI, as revealed in a 2020 Pew survey:[3] opinions in Asian countries are largely positive, while those in Western countries are heavily divided and more skeptical. Men overall expressed far more positive attitudes about AI than women did. Educational differences were also significant; age and political orientation less so. A 2019 survey by the Centre for the Governance of AI[4] at Oxford’s Future of Humanity Institute noted that positive attitudes about AI are greater among those who are “wealthy, educated, male, or have experience with technology.”

Primary Drivers of Public Understanding and Sentiment

Recent media coverage has been heavily focused on the negative impacts of AI, including bias, disinformation, and deepfakes. Coverage in 2020 shifted somewhat to AI’s potential for supporting the pandemic response through contact tracing, transmission forecasting, and elder care, and coverage of notable AI developments such as GPT-3 also spurred public interest. The public is not always able to discern which harms, risks, and benefits are attributable to artificial intelligence and machine learning; which emerge from broader technology-platform and business-model practices (“surveillance capitalism,”[5] for example: the assembly, maintenance, and trade of mass quantities of personal data); and which stem from simpler algorithmic tools that don’t involve AI. As a result, some concerns may be misplaced.

[Figure: AI-generated portrait of a person]
Neural networks, trained on tens of thousands of portrait photographs of faces, can now generate novel high-resolution images that appear compellingly like pictures of real human faces. The technology behind this development, generative adversarial networks (GANs), has advanced rapidly since its introduction in 2014. Current versions still include telltale visual artifacts, like the strangely absent right shoulder in this image. Nonetheless, the previously unattainable level of realism raises concerns about the use of this technology to spread realistic disinformation. From: https://github.com/NVlabs/stylegan2.
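To make the caption’s description concrete, the sketch below shows the adversarial training loop that defines a GAN in its simplest form, written in PyTorch. It is a minimal illustration only, not the StyleGAN2 code linked above: the layer sizes, toy data, and hyperparameters are assumptions chosen for brevity.

```python
# Minimal GAN sketch (PyTorch). Illustrative assumptions throughout;
# this is not the StyleGAN2 implementation.
import torch
import torch.nn as nn

LATENT_DIM = 64        # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28      # toy image size; face models emit far larger images

# Generator: maps a latent noise vector to a synthetic "image".
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores an image as real (positive logit) or fake (negative).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_DIM) * 2 - 1   # stand-in for a batch of photos
    fake = G(torch.randn(32, LATENT_DIM))

    # Discriminator step: learn to tell real photos from generated ones.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator into scoring fakes "real".
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed toward outputs statistically closer to the training photographs, which is what makes GAN-generated faces both compelling and, as noted above, a vector for realistic disinformation.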

AI researchers have not been as engaged publicly as they need to be, although there have been attempts to reach a broader audience. One example was a 2017 public debate between Gary Marcus, NYU psychology professor and author, and Yann LeCun, chief AI scientist at Facebook and Turing Award winner, about how much specialized information we need to build into AI systems and how much they should learn for themselves.[6] Generally, though, scientific communication about AI tends to focus on new methods, applications, and progress toward artificial general intelligence, and has not engaged a sufficiently broad range of publics in developing a realistic understanding of AI’s limitations, strengths, social risks, and benefits. Given the historical boom/bust pattern in public support for AI,[7] it is important that the AI community not overhype specific approaches or products and create unrealistic expectations.

Governments, universities, and nonprofits are attempting to broaden the reach of AI education, including investing in new AI-related curricula. Groups such as AI4ALL[8] and AI4K12[9] are receiving increasing attention, supported by a sense that today’s students need to be prepared to live in and contribute to an AI-fueled world, as well as by widespread concerns about an overall lack of diversity in the field. At the college level, curricula that include data science and AI/data ethics are becoming more widespread. At the level of the research community, several prominent AI conferences now require that research papers include explicit broader impact statements that discuss positive and negative societal consequences of the work.[10]

Rhetoric surrounding an AI “race” between the US and China has framed investment in and understanding of AI as an urgent national security issue.[11] This attention also contributes to a divergence between the EU’s focus on AI governance and human rights protections and the US and UK’s focus on economic growth and national security.[12] Alongside the “AI race” narrative are framings of AI as engaged in a zero-sum competition or battle for dominance with humans.[13] Yet these framings both obscure the powerful human agencies that today constitute what we call “AI” and feed a dangerous illusion of technological determinism: the false claim that new technologies such as AI shape society independently of human choices and values, in a manner that humans are helpless to control, alter, or steer. In addition to disguising human responsibility for the future shape of AI, these race or competition narratives obscure meaningful possibilities for AI to be developed in collaborative and participatory ways, or designed to support and enhance human agency rather than undermine it.

Industry and tech evangelists and policymakers[14] are another source of public information about AI. Most of this messaging promotes “AI for good” and “responsible/ethical AI” narratives, although such messages raise concerns about ethics-washing: the insincere corporate use of ethical framings to deflect regulatory and public scrutiny.[15] A minority of voices still push a narrative of technological determinism or inevitability,[16] but more nuanced views of AI as a human responsibility are growing, including an increasing effort to engage with ethical considerations. For example, Google has teams that study ethics, fairness, and bias, although the company’s public credibility in this regard took a hit with the controversial departures of Timnit Gebru and Margaret Mitchell, co-leads of one of its ethical AI teams.[17] There has also been some industry advancement of progressive/reformist AI narratives that rethink the application of technology in social justice or critical theory terms,[18] but it remains limited.

There is a growing critical narrative around unscientific commercial and academic attempts to use AI, particularly tools that apply facial, gait, or sentiment analysis to behavioral prediction or classification, amounting to a “new phrenology.”[19] A powerful movement against law-enforcement use of facial-recognition technology peaked in influence during the summer of 2020, in the wake of the protests against police violence and systemic racism. IBM, Amazon, and Microsoft all announced some form of pause or moratorium on police use of the technology.[20] Broad international movements in Europe, the US, China, and the UK have been pushing back against the indiscriminate use of facial-recognition systems on the general public.[21]

Improving and Widening Public Understanding of AI: Where Do We Go From Here?

The AI community could take a lesson from the climate-science community in how to improve its public outreach. Like so many scientists, climate researchers were initially reluctant to engage with outside audiences such as policymakers and the media. But over time it became clear that such engagement was essential to moving forward on some of the most pressing issues of our time—and, over the past decade or so, those scientists have made huge strides in public engagement. 

A similar transformation in AI would be beneficial as society grapples with the impacts of these technologies. Some existing programs are working to address these concerns; for instance, the American Association for the Advancement of Science focused its 2020–2021 Leshner Leadership Institute Public Engagement Fellowships on AI.[22] But the challenge remains to identify which forms of public engagement are working, and also who we aren’t yet reaching.

To help focus its public outreach, the AI community should facilitate a clearer public understanding that reduces confusion between AI and other information technologies, without artificially separating AI from other tech and platform structures that heavily influence its development and deployment. We should help the public acquire a useful taxonomy of AI that will support them in making relevant distinctions between the very different types and uses of AI tools. We should also be very clear and consistent that, while we believe advances in AI technology are being made and can have profound benefits for society, we do not support misleading hype that makes it sound as if the latest breakthrough is the one that changes everything.

We should responsibly educate the public about AI, making clear that different publics and subgroups face very different risks from AI, have different social expectations and priorities, and stand to gain or lose much more from AI than other groups. Our public education efforts need to navigate the challenges of providing accurate, balanced information to the public without pretending that there is some single objective, disinterested, and neutral view of AI to present.

Most importantly, we need to move beyond the goal of educating or talking to the public and toward more participatory engagement and conversation with the public. Work has already begun in many organizations on developing more deliberative and participatory models of AI public engagement.[23] Such efforts will be vital to boosting public interest in and capability for democratic involvement with AI issues that concern us all.


[1] Raymond Perrault, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles, “The AI Index 2019 Annual Report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2019. https://hai.stanford.edu/sites/default/files/ai_index_2019_report.pdf

[2] “Ethics guidelines for trustworthy AI,” European Commission Directorate‑General for Communications Networks, Content and Technology, 2019. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai 

[3] Cary Funk, Alec Tyson, Brian Kennedy, and Courtney Johnson, “Publics Express a Mix of Views on AI, Childhood Vaccines, Food and Space Issues,” Pew Research Center, September 2020. https://www.pewresearch.org/science/2020/09/29/publics-express-a-mix-of-views-on-ai-childhood-vaccines-food-and-space-issues/

[4] Baobao Zhang and Allan Dafoe, “Artificial Intelligence: American Attitudes and Trends,” Centre for the Governance of AI, Future of Humanity Institute, University of Oxford, January 2019. https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/index.html

[5] Shoshana Zuboff, The Age of Surveillance Capitalism, Profile Books, 2019. https://profilebooks.com/work/the-age-of-surveillance-capitalism/

[6] https://wp.nyu.edu/consciousness/innate-ai/

[7] https://en.wikipedia.org/wiki/AI_winter 

[8] https://ai-4-all.org/ 

[9] https://ai4k12.org/ 

[10] https://neuripsconf.medium.com/getting-started-with-neurips-2020-e350f9b39c28

[11] The National Security Commission on Artificial Intelligence Final Report (USA), 2021. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

[12] https://www.brookings.edu/blog/techtank/2020/12/08/new-white-house-guidance-downplays-important-ai-harms/

[13] https://www.theguardian.com/books/2021/may/16/daniel-kahneman-clearly-ai-is-going-to-win-how-people-are-going-to-adjust-is-a-fascinating-problem-thinking-fast-and-slow

[14] https://aiforgood.itu.int/

[15] https://www.technologyreview.com/2019/12/27/57/ai-ethics-washing-time-to-act/

[16] https://www.telegraph.co.uk/technology/2019/07/18/elon-musks-quest-stop-ai-apocalypse-merging-man-machines/

[17] https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html ; https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/

[18] Shakir Mohamed, Marie-Therese Png, and William Isaac, “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence,” Philosophy & Technology, 33, 659–684, 2020. https://deepmind.com/research/publications/Decolonial-AI-Decolonial-Theory-as-Sociotechnical-Foresight-in-Artificial-Intelligence

[19] https://www.americanscientist.org/article/the-dark-past-of-algorithms-that-associate-appearance-and-criminality

[20] https://www.zdnet.com/article/one-year-after-amazon-microsoft-and-ibm-ended-facial-recognition-sales-to-police-smaller-players-fill-void/

[21] https://www.nature.com/articles/d41586-020-03188-2

[22] https://www.aaas.org/page/2020-2021-leshner-leadership-institute-public-engagement-fellows-artificial-intelligence 

[23] https://www.thersa.org/blog/2019/10/talk-about-artificial-intelligence

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.