
AI Policy, Now and in the Future (Annotated)

Throughout history, humans have both shaped and adapted to new technologies. This report anticipates that advances in AI technologies will be developed and fielded gradually—not in sudden, unexpected jumps in the techniques themselves—and will build on what exists today, making this adaptation easier. On the other hand, small improvements to techniques, computing power, or availability of data can occasionally lead to novel, game-changing applications. The measure of success for AI applications is the value they create for human lives. Going forward, the ease with which people use and adapt to AI applications will likewise largely determine their success.

Conversely, since AI applications are susceptible to errors and failures, a mark of their success will be how users perceive and tolerate their shortcomings. As AI becomes increasingly embedded in daily life and is used for more critical tasks, system mistakes may lead to backlash from users and erode their trust. Though accidents involving a self-driving car may be less probable than accidents caused by human drivers, for example, they will attract more attention. Design strategies that enhance the ability of humans to understand AI systems and decisions (such as explicitly explaining those decisions), and to participate in their use, may help build trust and prevent drastic failures. Likewise, developers should help manage people’s expectations, which will affect their happiness and satisfaction with AI applications. Frustration in carrying out functions promised by a system diminishes people’s trust and reduces their willingness to use the system in the future.

Another important consideration is how AI systems that take over certain tasks will affect people’s affordances and capabilities. As machines deliver superhuman performance on some tasks, people’s ability to perform them may wither. Already, introducing calculators to classrooms has reduced children's ability to do basic arithmetic operations. Still, humans and AI systems have complementary abilities. People are likely to focus on tasks that machines cannot do as well, including complex reasoning and creative expression.

Already, children are increasingly exposed to AI applications, such as interacting with personal assistants on cell phones or with virtual agents in theme parks. Having early exposure will improve children’s interactions with AI applications, which will become a natural part of their daily lives. As a result, gaps will appear in how younger and older generations perceive AI’s influences on society.

Likewise, AI could widen existing inequalities of opportunity if access to AI technologies—along with the high-powered computation and large-scale data that fuel many of them—is unfairly distributed across society. These technologies will improve the abilities and efficiency of people who have access to them. A person with access to accurate machine translation technology will be better able to use learning resources available in different languages. Similarly, if speech translation technology only supports English, people who do not speak English will be at a disadvantage.

Further, AI applications and the data they rely upon may reflect the biases of their designers and users, who specify the data sources. This threatens to deepen existing social biases and to concentrate AI’s benefits unequally among different subgroups of society. For example, some speech recognition technologies do not work well for women and people with accents. As AI is increasingly used in critical applications, these biases may raise issues of fairness for diverse groups in society. On the other hand, compared to the well-documented biases in human decision-making, AI-based decision-making tools have the potential to significantly reduce the bias in critical decisions such as who is lent money or sent to jail.
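The kind of disparity described above can be made concrete by measuring a system's accuracy separately for each user group and reporting the gap between the best- and worst-served groups. The following is a minimal illustrative sketch; the group names and numbers are hypothetical, not data from the report:

```python
from collections import defaultdict

def per_group_accuracy(results):
    """Compute recognition accuracy for each user group.

    results: list of (group, correct) pairs, where `correct` is True
    when the system handled that user's input correctly.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if correct:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation data: a speech recognizer that performs
# worse for one group of speakers (e.g., speakers with accents).
results = (
    [("group_a", True)] * 95 + [("group_a", False)] * 5 +
    [("group_b", True)] * 70 + [("group_b", False)] * 30
)

acc = per_group_accuracy(results)
gap = max(acc.values()) - min(acc.values())
print(acc)            # {'group_a': 0.95, 'group_b': 0.7}
print(round(gap, 2))  # 0.25 accuracy gap between groups
```

Audits of this form, run over demographically labeled test sets, are one concrete way researchers surface the fairness issues the paragraph describes before a system is deployed in a critical setting.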

Privacy concerns about AI-enabled surveillance are also widespread, particularly in cities with pervasive instrumentation. Sousveillance, the recording of an activity by a participant, usually with portable personal devices, has increased as well. Since views about bias and privacy are based on personal and societal ethical and value judgments, the debates over how to address these concerns will likely grow and resist quick resolution. Similarly, since AI is generating significant wealth, debates will grow regarding how the economic fruits of AI technologies should be shared—especially as AI expertise and the underlying data sets that fuel applications are concentrated in a small number of large corporations.

To help address these concerns about the individual and societal implications of rapidly evolving AI technologies, the Study Panel offers three general policy recommendations:

   1. Define a path toward accruing technical expertise in AI at all levels of government. Effective governance requires more experts who understand and can analyze the interactions between AI technologies, programmatic objectives, and overall societal values.

Absent sufficient technical expertise to assess safety or other metrics, national or local officials may refuse to permit a potentially promising application. On the other hand, insufficiently trained officials may simply take the word of industry technologists and green-light a sensitive application that has been insufficiently vetted. Without an understanding of how AI systems interact with human behavior and societal values, officials will be poorly positioned to evaluate the impact of AI on programmatic objectives.

   2. Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.

Some interpretations of federal laws such as the Computer Fraud and Abuse Act and the anti-circumvention provision of the Digital Millennium Copyright Act are ambiguous regarding whether and how proprietary AI systems may be reverse engineered and evaluated by academics, journalists, and other researchers. Such research is critical if AI systems with physical and other material consequences are to be properly vetted and held accountable.

   3. Increase public and private funding for interdisciplinary studies of the societal impacts of AI.

As a society, we are underinvesting resources in research on the societal implications of AI technologies. Private and public dollars should be directed toward interdisciplinary teams capable of analyzing AI from multiple angles. Research questions range from basic research into intelligence to methods to assess and affect the safety, privacy, fairness, and other impacts of AI.

Questions include: Who is responsible when a self-driving car crashes or an intelligent medical device fails? How can AI applications be prevented from engaging in unlawful discrimination? Who should reap the gains of efficiencies enabled by AI technologies, and what protections should be afforded to people whose skills are rendered obsolete? As AI becomes integrated more broadly and deeply into industrial and consumer products, it enters areas in which established regulatory regimes will need to be adapted to AI innovations or, in some cases, fundamentally reconfigured according to broadly accepted goals and principles.

The approach in the United States to date has been sector-specific, with oversight by a variety of agencies. The use of AI in devices that deliver medical diagnostics and treatments is subject to aggressive regulation by the Food and Drug Administration (FDA), both in defining what the product is and in specifying the methods by which it is produced, including standards of software engineering. The use of drones in regulated airspace falls under the authority of the Federal Aviation Administration (FAA).[126] For consumer-facing AI systems, regulation by the Federal Trade Commission (FTC) comes into play. Financial markets using AI technologies, such as in high-frequency trading, come under regulation by the Securities and Exchange Commission (SEC).

In addition to sector-specific approaches, the somewhat ambiguous and broad regulatory category of “critical infrastructure” may apply to AI applications.[127] The Obama Administration’s Presidential Policy Directive (PPD) 21 broadly defines critical infrastructure as composed of “the assets, systems, and networks, whether physical or virtual, so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof.” Today, an enterprise does not come under federal regulation solely by falling under that broad definition. Instead, the general trend of federal policy is to seek regulation in sixteen sectors of the economy.[128]

As regards AI, critical infrastructure is notably defined by the end-user application, and not the technology or sector that actually produces AI software.[129] Software companies such as Google, Facebook, and Amazon have actively lobbied to avoid being designated as critical to the economy, arguing that this would open the door to regulation that would inevitably compromise their rapid product development cycles and ability to innovate.[130] Nonetheless, as the companies creating, operating, and maintaining critical infrastructure use AI, interest will grow in regulating that software.

Some existing regulatory regimes for software safety (for example, the FDA's regulation of high-consequence medical software) require specific software engineering practices at the developer level. However, modern software systems are often assembled from library components which may be supplied by multiple vendors and are relatively application-independent. It does not seem feasible or desirable to subject all such developers to the standards required for the most critical, rare applications. Nor does it seem advisable to allow unregulated use of such components in safety-critical applications. Tradeoffs between promoting innovation and regulating for safety are difficult ones, both conceptually and in practice. At a minimum, regulatory entities will require greater expertise going forward in order to understand the implications of standards and measures put in place by researchers, government, and industry.[131]


[126] The FAA controls the ways drones fly, requires drones to be semiautonomous as opposed to autonomous, requires visual contact with the drone, and enforces no-fly zones close to airports.

[127] “Presidential Policy Directive (PPD-21) -- Critical Infrastructure Security and Resilience,” The White House, February 12, 2013, accessed August 1, 2016.

[128] PPD 21 identifies agencies responsible in each case. Chemical: Department of Homeland Security; Commercial Facilities: Department of Homeland Security; Communications: Department of Homeland Security; Critical Manufacturing: Department of Homeland Security; Dams: Department of Homeland Security; Defense Industrial Base: Department of Defense; Emergency Services: Department of Homeland Security; Energy: Department of Energy; Financial Services: Department of the Treasury; Food and Agriculture: U.S. Department of Agriculture and Department of Health and Human Services; Government Facilities: Department of Homeland Security and General Services Administration; Healthcare and Public Health: Department of Health and Human Services; Information Technology: Department of Homeland Security; Nuclear Reactors, Materials, and Waste: Department of Homeland Security; Transportation Systems: Department of Homeland Security and Department of Transportation; Water and Wastewater Systems: Environmental Protection Agency.

[129] In “ICYMI- Business Groups Urge White House to Rethink Cyber Security Order,” Internet Association, March 5, 2013, accessed August 1, 2016: “Obama's Feb. 12 order says the government can't designate ‘commercial information technology products’ or ‘consumer information technology services’ as critical U.S. infrastructure targeted for voluntary computer security standards ... ‘Obama's order isn't meant to get down to the level of products and services and dictate how those products and services behave,’ said David LeDuc, senior director of public policy for the Software & Information Industry Association, a Washington trade group that lobbied for the exclusions.”

[130] Eric Engleman, “Google Exception in Obama’s Cyber Order Questioned as Unwise Gap,” Bloomberg Technology, March 4, 2013, accessed August 1, 2016.

[131] Ryan Calo, “The Case for a Federal Robotics Commission,” Brookings Report, September 15, 2014, accessed August 1, 2016.

Cite This Report

Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller. "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016. Accessed September 6, 2016.

Report Authors

AI100 Standing Committee and Study Panel 


© 2016 by Stanford University. Artificial Intelligence and Life in 2030 is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International):