
SQ7. How should governments act to ensure AI is developed and used responsibly?


As AI has grown in sophistication and its use has become more widespread, governments and public agencies have paid increasing attention to its development and deployment. This attention has been especially pronounced in the last five years, as AI has become increasingly common in consumer products and as private and government applications such as facial recognition[1] have captured growing public attention.

Facial recognition technology, demonstrated here via Google Photos on a 2019 photo taken at an AI conference, can spot a wide range of individuals in photos and associate them with their names. Applying the same ideas to massive collections of imagery posted online makes it possible to spot and name strangers in public. The capability raises concerns about how AI can simplify mass intrusions into the privacy rights of citizens by governments and private companies all over the world. From: Michael Littman and https://photos.google.com/.
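The underlying mechanism is straightforward to sketch. Below is a minimal illustration, assuming face images have already been mapped to unit-length embedding vectors by a trained model; the `identify` function, the gallery names, and the similarity threshold are all invented for this example. Identification reduces to nearest-neighbor search over a gallery of labeled embeddings, which is why scaling from a personal photo library to imagery scraped from the web is chiefly a matter of index size rather than any new technical breakthrough.

```python
import numpy as np

# Illustrative sketch only: random vectors stand in for the face embeddings
# that a trained network (e.g., a FaceNet-style model) would produce.
rng = np.random.default_rng(0)

def unit(v: np.ndarray) -> np.ndarray:
    """Normalize to unit length so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v)

# A "gallery" of known identities and their (stand-in) embeddings.
gallery = {name: unit(rng.normal(size=128)) for name in ("alice", "bob")}

def identify(query: np.ndarray, threshold: float = 0.6):
    """Return the closest gallery identity, or None if nothing is similar enough."""
    best = max(gallery, key=lambda name: float(gallery[name] @ query))
    return best if float(gallery[best] @ query) >= threshold else None

# A query embedding close to "alice" is matched to her name.
query = unit(gallery["alice"] + 0.05 * rng.normal(size=128))
print(identify(query))  # alice
```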

Since the publication of the last AI100 report just five years ago, over 60 countries have engaged in national AI initiatives,[2] and several significant new multilateral efforts are aimed at spurring effective international cooperation on related topics.[3] Increases in international government attention to AI issues reflect an understanding that the topic is complex and intersects with other policy priorities, including privacy, equity, human rights, safety, economics, and national and international security.

Law, Policy, and Regulation

In the past few years, several legislative and international groups have awoken to the challenge of regulating AI effectively.[4] Few countries have moved definitively to regulate AI specifically, outside of rules directly related to the use of data. Several international groups have developed efforts or initiatives aimed at generating policy frameworks for responsible AI development and use, resulting in recommendations such as the AI Principles of the 38-member-country Organisation for Economic Co-operation and Development.[5] The EU has been the most active government body to date in proposing concrete regulatory frameworks for AI. In April 2021, it published a new Coordinated Plan on AI, a step toward building a legal framework that “will help to make Europe a safe and innovation friendly environment for the development of AI.”[6]

As of 2020, 24 countries—in Asia, Europe, and North America—had opted for permissive laws to allow autonomous vehicles to operate in limited settings. Thirteen countries—in Africa, Europe, and Latin America—had debated legislation on the use of lethal autonomous weapons, discussed in more detail below; only Belgium had enacted a law.[7]

A range of governance approaches has started to emerge to ensure public safety, consumer trust, product reliability, accountability, and oversight. These efforts involve governments and public agencies, corporations, and civil society, as well as cooperation between the public and private sectors. For example, the US is working actively to develop frameworks for AI risk assessment and regulatory guidance for federal agencies, and is investigating both regulatory and nonregulatory approaches to oversight of AI technologies.[8] Such approaches might include sector-specific policy guidance, pilot studies, voluntary frameworks for compliance, formal standards, or other policy vehicles and related guidelines. This process necessarily involves the government identifying the statutory authorities of each agency.

The EU has been particularly active with concrete regulation, including the General Data Protection Regulation (GDPR), which includes some regulation of automated decision systems, and the Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, which proposes the creation of national supervisory bodies and the designation of high-risk technologies.[9] Canada’s Bill C-11 proposes regulation of automated decision systems and offers more robust support for people’s right to explanations of automated decisions than the EU’s approach.[10] Governmental consideration of antitrust action against big tech companies in the EU and US is driven in large part by the scale those companies have achieved with the help of AI techniques. Interest in more concrete antitrust activity in the US, in particular, has increased under the Biden Administration, for example with the President’s July 2021 Executive Order on competition, which prominently features the American information technology sector.[11]
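To make concrete what a “right to an explanation” might demand of a deployed system, here is a minimal sketch assuming an invented linear scoring model; the feature names, weights, threshold, and record format are all hypothetical and are not drawn from the GDPR or Bill C-11. The point is only that explaining a decision requires logging, at decision time, which inputs drove the outcome.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical linear credit-scoring model; the features, weights, and
# approval threshold are invented for this sketch.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
THRESHOLD = 0.5

@dataclass
class DecisionRecord:
    """One reading of what an auditable record of an automated decision needs."""
    timestamp: str
    inputs: dict
    score: float
    approved: bool
    top_factors: list = field(default_factory=list)

def decide(applicant: dict) -> DecisionRecord:
    # Per-feature contributions make the decision explainable after the fact.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Rank features by absolute contribution so the applicant can be told
    # which inputs mattered most to the outcome.
    factors = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        inputs=applicant,
        score=round(score, 3),
        approved=score >= THRESHOLD,
        top_factors=factors[:2],
    )

print(decide({"income": 1.2, "debt_ratio": 0.9, "years_employed": 3.0}))
```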

Because AI is not just one technology but a constellation of capabilities being applied to diverse domains, the US regulates it as distinct products and applications—a reflection of the government’s structure and resulting regulatory practices. For example, autonomous vehicle safety guidance and related policies fall under the purview of the Department of Transportation, while oversight and policies for healthcare applications fall to agencies such as the Food and Drug Administration. A cross-industry approach toward AI regulation related to more specific issues, such as data use, has the potential to provide more consistency, although it is still too early to formulate an informed policy along these lines.

For some technology areas, however, it is less clear-cut where government responsibility for regulation is situated. For example, the oversight of social media platforms has become a hotly debated issue worldwide. As user bases for these companies have grown, so too have the companies’ reach and power in issues ranging from medical misinformation to undermining elections. In 2020, approximately 48.3 percent of the global population used social media—a share projected to increase to over 56 percent by 2025.[12] International oversight of some kind is essential to minimize the risks to consumers worldwide.

Misinformation and disinformation are affected by a given platform’s user bases, optimization algorithms, content-moderation policies and practices, and much more. In the US, free speech challenges are governed by constitutional law and related legal interpretations. Some companies have even appointed independent oversight boards, such as the Facebook Oversight Board created in 2020, to make determinations about enacting and applying corporate policy on free speech issues—in part to stave off stronger government oversight. Most content-moderation decisions are made by individual companies on the basis of their own legal guidance, technical capacity, and policy interpretations, but debate rages on about whether active regulation would be appropriate. There are few easy answers about what kinds of policies should be enacted, how they should be implemented, and who should regulate them.

AI Research & Development as a Policy Priority

Globally, investment in AI research and development (R&D) in the past five years by both corporations and governments has grown significantly.[13] In 2020, the US government’s investment in unclassified R&D in AI-related technologies was approximately $1.5 billion[14]—a number dwarfed by estimates of the investments made by top private-sector companies in the same year. In 2016, an Obama Administration report made several projections, since borne out, about the near future of AI R&D: AI technologies will grow in sophistication and ubiquity; the impact of AI on employment, education, public safety, national security, and economic growth will continue to increase; industry investment in AI will grow; some important areas concerning the public good will receive insufficient investment by industry; and the broad demand for AI expertise will grow, leading to job-market pressures.[15] The final report of the US National Security Commission on Artificial Intelligence, published in 2021,[16] echoes similar themes.

The Chinese Government’s investment in AI research and development in 2018 was estimated to be the equivalent of $9.4 billion, supplemented by significant government support for private investment and strategy development.[17] In Europe, significant public investment increases have been made over the last five years, accompanied by a sweeping EU-led strategy with four prongs: enabling the development and uptake of AI in the EU; making the EU the place where AI thrives from the lab to the market; ensuring that AI works for people and is a force for good in society; and building strategic leadership in high-impact sectors.[18]

Overall, governments around the world need to invest more significantly in the research, development, and regulation of issues surrounding AI, and in the multidisciplinary and cross-disciplinary research that supports these objectives. Government investment should also include supporting K-12 education standards that help the next generation live in a world infused with AI applications, and shaping market practices concerning the use of AI in public-facing applications such as healthcare delivery.

Cooperation and Coordination on International Policy 

Cooperative efforts among countries have also emerged in the last several years. In March 2018, the European Commission established a high-level expert group to support strategy and policy development for AI.[19] The same year, the Nordic-Baltic region released a joint strategy document,[20] and the UAE and India signed a partnership to spur AI innovation.[21] In 2020, the OECD launched an AI Policy Observatory, a resource repository for AI development and policy efforts.[22] In June 2020, the G7, led by the Canadian and French governments, established the Global Partnership on AI, a multilateral effort to promote more effective international collaboration on issues of AI governance.[23]

Though almost all countries expending resources on AI see it as a set of enabling technologies of strategic importance, important differences in country-by-country approaches have emerged. Notably, China’s record on human rights intersects meaningfully with its efforts to become a dominant AI power.[24] Authoritarian governments can use AI to build upon and reinforce existing citizen-surveillance programs, which has widespread implications for global AI governance and use—in China, but also everywhere else in the world.[25] In addition, some international coordination could help ease the tensions building up as nations strive to position themselves for dominance in AI.[26] Recently, a significant multilateral initiative between the US and the EU has emerged to support more effective coordination and collaboration, with an explicit focus on issues like technology standards and the misuse of technology threatening security and human rights.[27]

Case Study: Lethal Autonomous Weapons

One specific application of AI that has drawn international attention is lethal autonomous weapon systems (LAWS)—weapons that, after activation, can select and engage targets without further human intervention.[28] Dozens of countries around the world have operated limited versions of these systems for decades. Close-in weapon systems that protect naval ships and military bases from attacks often have automatic modes that, once activated, select and engage attacking targets without human intervention (though with human oversight).[29] But many AI and robotics researchers have expressed concerns about the way advances in AI, paired with autonomous systems, could generate new and dangerous weapon systems that threaten international stability.[30] Many express specific concerns—about accountability, proliferation, and legality—around the use of autonomous drones for targeted killing.

One challenge for governments in navigating this debate is determining what exactly constitutes a LAWS, especially as smarter munitions increasingly incorporate AI to make them harder for adversary defenses to detect and destroy. The United Nations Convention on Certain Conventional Weapons (CCW) has debated LAWS since 2013, convening a Group of Governmental Experts (GGE) that has met regularly to discuss them.[31] While many smaller countries have endorsed a ban on LAWS, major militaries such as the United States and Russia, as well as NATO countries, have generally argued that LAWS are already effectively regulated under international humanitarian law, and that there are dangers in the unintended consequences of over-regulating technologies that have not yet been deployed.[32]

Regardless of how the LAWS debate in particular is resolved, the greater integration of AI by militaries around the world appears inevitable in areas such as training, logistics, and surveillance. Indeed, there are areas like mine clearing where this is to be welcomed. Governments will need to work hard to ensure they have the technical expertise and capacity to effectively implement safety and reliability standards surrounding these military uses of AI.[33]

From Principles to Practice

Beyond national policy strategies, dozens of governments, private companies, intergovernmental organizations, and research institutions have also published documents and guidelines designed to address concerns about safety, ethical design, and deployment of AI products and services. These documents often take the form of declarations of principles or high-level frameworks. Such efforts started becoming popular in 2017; 45 of them were published globally in 2018 alone. In total, at least 117 documents relating to AI principles were published between 2015 and 2020, the majority of them by companies.[33]

While these efforts are laudable, companies’ statements of responsible AI principles or frameworks are of limited utility if they are incompatible with the instruments of oversight, enforceability, or accountability applied by governments. Human rights scholars and advocates, for example, have long pushed for a rights-based approach to AI-informed decision-making—one rooted in international law, applicable to a wide array of technologies, and adaptable even as AI itself continues to develop.[34] Related debates have played out in government policy development, as in a 2019 discussion paper published by the Australian Human Rights Commission.[35] Such arguments also point to the need for due process and recourse in AI decision-making.

Several efforts have been made to move organizations working on setting principles toward the implementation, testing, and codification of more effective practice in responsible AI development and deployment. Many see this direction as a precursor, or essential ingredient, to effective policy-making. Efforts developed in the last five years include the Partnership on AI, a nonprofit multistakeholder organization created in 2016 by technology companies, foundations, and civil society organizations.[36] Much of its work centers on developing best practices for more responsible, safe, and user-centered AI, with the goal of ensuring more equitable and effective outcomes.

Dynamic Regulation, Experimentation, and Testing 

Appropriately addressing the risks of AI applications will inevitably involve adapting regulatory and policy systems to be more responsive to the rapidly advancing pace of technology development. Current regulatory systems are already struggling to keep up with the demands of technological evolution, and AI will continue to strain existing processes and structures.[37] There are two key problems: policy-making often takes time, and once rules are codified they are inflexible and difficult to adapt. Or, put a different way, AI moves quickly and governments move slowly.

To deal with this mismatch of timescales, several innovative solutions have been proposed or deployed that merit further consideration. Some US agencies, such as the Food and Drug Administration, already invest heavily in regulatory science—the study of how to regulate effectively. This kind of investigation involves research and testing to address gaps in scientific understanding or to develop the tools and methods needed to inform regulatory decisions and policy development.[38] (Should AI, for instance, be classified as a device, an aid, or a replacement for workers? The answer affects how government oversight is applied.) Such approaches should be promoted more widely, adopted by other agencies, and applied to new technology areas, including AI. Other proposals, drawing inspiration from industry approaches to developing goods and services, advocate for systems in which governments would hire private companies to act as regulators.[39]

Frameworks for “risk-based” rulemaking and impact assessments are also relevant to new AI-based technologies and capabilities. Risk-based regulatory approaches generally focus scrutiny on the activities that pose the highest risk to public well-being, in turn reducing burdens on a variety of lower-risk sectors and firms. In the AI realm specifically, researchers, professional organizations, and governments have begun developing AI or algorithmic impact assessments (akin to the environmental impact assessments required before new engineering projects begin).[40]
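As a concrete illustration of the triage step such frameworks share, consider the minimal sketch below; the domains, inputs, and review tiers are invented for this example and do not reproduce the EU proposal, the AI Now framework, or any other published instrument. The idea is simply that coarse risk factors route an application to a proportionate level of review.

```python
# Illustrative risk triage for an AI application; the domains, inputs, and
# review tiers are invented and do not reproduce any published framework.
HIGH_RISK_DOMAINS = {"hiring", "credit", "medical_diagnosis", "law_enforcement"}

def risk_tier(domain: str, affects_rights: bool, human_in_loop: bool) -> str:
    """Route an AI application to a review tier based on coarse risk factors."""
    if domain in HIGH_RISK_DOMAINS or affects_rights:
        # Highest-risk uses trigger a full impact assessment before deployment.
        return "full_assessment"
    if not human_in_loop:
        # Fully automated but lower-stakes systems get a lighter documented review.
        return "documented_review"
    # Everything else self-certifies against published guidelines.
    return "self_certification"

print(risk_tier("hiring", affects_rights=True, human_in_loop=True))         # full_assessment
print(risk_tier("spam_filter", affects_rights=False, human_in_loop=False))  # documented_review
```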

Experimentation and testing efforts are an important aspect of both regulatory and nonregulatory approaches to rulemaking for AI, and can take place in both real-world and simulated environments. Examples include the US Federal Aviation Administration’s Unmanned Aircraft System (UAS) Test Sites program, which has now been running successfully for several years and feeds data, incident reports, and other crucial information directly back to the agency in support of rulemaking processes for safe UAS airspace integration.[41] In the virtual world, simulations have been built for testing AI-driven tax policy proposals, among other ideas, before deployment.[42] In some cases, third-party certification or testing efforts have emerged.[43] As with any emerging technology—and especially one so diverse in its applications as AI—effective experimentation and testing can meaningfully support more effective governance and policy design.
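A toy version of that simulation idea is sketched below: two invented flat-tax rates are evaluated on a synthetic population for total output and income equality. The cited AI Economist work is far richer (it trains both the tax policy and the economic agents with reinforcement learning), but even this stripped-down loop shows how a policy can be stress-tested in silico before deployment. All numbers here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
skills = rng.lognormal(mean=0.0, sigma=0.8, size=1000)  # synthetic population

def simulate(tax_rate: float, labor_elasticity: float = 0.5):
    """Toy economy: agents work less as taxes rise; revenue is rebated equally."""
    hours = max(0.0, 1.0 - labor_elasticity * tax_rate)  # behavioral response
    pre_tax = skills * hours
    post_tax = pre_tax * (1 - tax_rate) + tax_rate * pre_tax.mean()
    # Gini coefficient: mean absolute income gap, normalized by twice the mean.
    gini = np.abs(post_tax[:, None] - post_tax[None, :]).mean() / (2 * post_tax.mean())
    return pre_tax.sum(), gini

for rate in (0.2, 0.4):
    output, gini = simulate(rate)
    print(f"tax={rate:.0%}  total output={output:8.1f}  gini={gini:.3f}")
```

Even this caricature reproduces the core tradeoff a regulator would probe in a richer simulator: higher rates reduce total output while lowering inequality.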


[1] Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault, “The AI Index 2021 Annual Report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, March 2021.

[2] https://www.oecd.ai/dashboards

[3] https://futureoflife.org/national-international-ai-strategies/

[4] Raymond Perrault, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles, “The AI Index 2019 Annual Report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, December 2019, p. 139.

[5] https://www.oecd-ilibrary.org/science-and-technology/state-of-implementation-of-the-oecd-ai-principles_1cd40c44-en 

[6] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206

[7] Lawrence Zhang, “Initiatives in AI Governance,” Innovating AI Governance: Shaping the Agenda for a Responsible Future, December 2020, https://static1.squarespace.com/static/5ef0b24bc96ec4739e7275d3/t/5fb58df18fbd7f2b94b5b5cd/1605733874729/SRI+1+-+Initiatives+in+AI+Governance.pdf

[8] https://www.nitrd.gov/nitrdgroups/images/c/c1/American-AI-Initiative-One-Year-Annual-Report.pdf 

[9] https://gdpr-info.eu/, https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.html

[10] https://parl.ca/DocumentViewer/en/43-2/bill/C-11/first-reading 

[11] https://www.whitehouse.gov/briefing-room/presidential-actions/2021/07/09/executive-order-on-promoting-competition-in-the-american-economy/ 

[12] https://www.statista.com/statistics/260811/social-network-penetration-worldwide/ 

[13] Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault, “The AI Index 2021 Annual Report,” AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA, March 2021, chapter 3

[14] https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

[15] https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/national_ai_rd_strategic_plan.pdf 

[16] https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf 

[17] Ashwin Acharya and Zachary Arnold, “Chinese Public AI R&D Spending: Provisional Findings,” https://cset.georgetown.edu/publication/chinese-public-ai-rd-spending-provisional-findings/

[18] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence; Erik Brattberg, Raluca Csernatoni, and Venesa Rugova, “Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?” Carnegie Endowment for International Peace, https://carnegieendowment.org/2020/07/09/europe-and-ai-leading-lagging-behind-or-carving-its-own-way-pub-82236

[19] https://techcrunch.com/2018/06/14/here-are-the-experts-who-will-help-shape-europes-ai-policy 

[20] https://www.norden.org/en/declaration/ai-nordic-baltic-region

[21] https://gulfnews.com/uae/government/uae-and-india-sign-agreement-on-artificial-intelligence-1.2258074 

[22] https://oecd.ai/ 

[23] https://gpai.ai/ 

[24] https://www.gp-digital.org/wp-content/uploads/2020/04/National-Artifical-Intelligence-Strategies-and-Human-Rights%E2%80%94A-Review_.pdf 

[25] Steven Feldstein, “The Global Expansion of AI Surveillance,” Carnegie Endowment for International Peace, https://carnegieendowment.org/files/WP-Feldstein-AISurveillance_final1.pdf 

[26] https://www.nytimes.com/2018/03/06/business/us-china-trade-technology-deals.html

[27] https://ec.europa.eu/commission/presscorner/detail/en/IP_21_2990 

[28] https://fas.org/sgp/crs/natsec/IF11150.pdf 

[29] Michael Horowitz and Paul Scharre, “An Introduction to Autonomy in Weapon Systems,” Center for a New American Security, https://www.cnas.org/publications/reports/an-introduction-to-autonomy-in-weapon-systems 

[30] https://futureoflife.org/open-letter-autonomous-weapons/ 

[31] https://indico.un.org/event/35599/timetable/ 

[32] While China has theoretically endorsed a LAWS ban, China defines LAWS in such a way that a ban would not actually cover the systems that many are concerned about, and Chinese military research on potential uses of AI is extensive.

[33] The AI Index 2021 Annual Report, chapter 5

[34] https://www.bsr.org/en/our-insights/blog-view/human-rights-based-approach-to-artificial-intelligence 

[35] https://tech.humanrights.gov.au/sites/default/files/2019-12/TechRights_2019_DiscussionPaper.pdf 

[36] https://www.partnershiponai.org 

[37] Gillian K. Hadfield, Rules for a Flat World, Oxford University Press, 2016.

[38] https://www.fda.gov/media/145001/download 

[39] Jack Clark and Gillian K. Hadfield, “Regulatory Markets for AI Safety,” https://arxiv.org/pdf/2001.00078.pdf

[40] https://www.project-sherpa.eu/ai-impact-assessment/; Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” https://ainowinstitute.org/aiareport2018.pdf

[41] https://www.faa.gov/uas/programs_partnerships/test_sites/locations/ 

[42] Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, and Richard Socher, “The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies,” https://arxiv.org/abs/2004.13332 

[43] https://srinstitute.utoronto.ca/news/ai-global-certification-partnership 

 

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.