Defining AI

Curiously, the lack of a precise, universally accepted definition of AI probably has helped the field to grow, blossom, and advance at an ever-accelerating pace. Practitioners, researchers, and developers of AI are instead guided by a rough sense of direction and an imperative to “get on with it.” Still, a definition remains important, and Nils J. Nilsson has provided a useful one:

“Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”[3]

From this perspective, characterizing AI depends on the credit one is willing to give synthesized software and hardware for functioning “appropriately” and with “foresight.” A simple electronic calculator performs calculations much faster than the human brain, and almost never makes a mistake.[4] Is a calculator intelligent? Like Nilsson, the Study Panel takes a broad view that intelligence lies on a multi-dimensional spectrum. According to this view, the difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality. The same factors can be used to evaluate every other instance of intelligence—speech recognition software, animal brains, cruise-control systems in cars, Go-playing programs, thermostats—and to place them at some appropriate location in the spectrum.

Although our broad interpretation places the calculator within the intelligence spectrum, such simple devices bear little resemblance to today’s AI. The frontier of AI has moved far ahead, and the functions of a calculator are only a few among the millions that today’s smartphones can perform. AI developers now work on improving, generalizing, and scaling up the intelligence currently found on smartphones.

In fact, the field of AI is a continual endeavor to push forward the frontier of machine intelligence. Ironically, AI suffers the perennial fate of losing claim to its acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the “AI effect” or the “odd paradox”—AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.[5] The same pattern will continue in the future. AI does not “deliver” a life-changing product as a bolt from the blue. Rather, AI technologies continue to get better in a continual, incremental way.

[3] Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge, UK: Cambridge University Press, 2010).

[4] Wikimedia Images, accessed August 1, 2016, https://upload.wikimedia.org/wikipedia/commons/b/b6/SHARP_ELSIMATE_EL-W221.jpg.

[5] Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 2nd ed. (Natick, MA: A. K. Peters, Ltd., 2004; San Francisco: W. H. Freeman, 1979). Citations are to the Peters edition.
