SQ4. How much have we progressed in understanding the key mysteries of human intelligence?

AI, the study of how to build an intelligent machine, and cognitive science, the study of human intelligence, have been evolving in complementary ways. A view of human intelligence that has gained prominence over the last five years holds that it is collective—that individuals are just one cog in a larger intellectual machine. Their roles in that collective are likely to be different from the roles of machines, because their strengths are different. Humans are able to share intentionality with other humans—to pursue common goals as a team—and doing so may require having features that are uniquely human: a certain kind of biological hardware with its associated needs and a certain kind of cultural experience. In contrast, machines have vast storehouses of data, along with the ability to process it and communicate with other machines at remarkable speeds. In that sense, AI is developing in ways that improve its ability to collaborate with and support people, rather than in ways that mimic human intelligence. Still, there are remarkable parallels between the operation of individual human minds and that of deep learning machines. AI seems to be evolving in a way that adopts some core human features—specifically, those that relate to perception and memory.

In the early days of AI, a traditional view of human intelligence understood it as a distinct capacity of human cognition, related to a person’s ability to process information. Intelligence was a property that could be measured in any sufficiently complex cognitive system, and individuals differed in their mental horsepower. Today, this view is rare among cognitive scientists and others who study human intelligence. The key mysteries of human intelligence that have concerned researchers for more than a decade include questions not only about how people are able to interpret complex inputs, solve difficult problems, and make reasonable judgments and decisions quickly, but also about how they are able to negotiate emotionally nuanced relationships, use attitudes, emotions, and other bodily signals to guide their decision-making, and understand other people’s intentions.1 The study of intelligence has become the study of how people are able to adapt and succeed, not just how an impressive information-processing system works.

The modern study of human intelligence draws on a variety of forms of evidence. Cognitive psychology uses experimental studies of cognitive performance to look at the nature of human cognition and its capabilities. Collective intelligence is the study of how intelligence is designed for and emerges from group (rather than individual) activity. Psychometrics is the study of how people vary in what they can do, how their capabilities are determined, and how abilities relate to demographic variables. Cognitive neuroscience looks at how the brain’s hardware is involved in implementing psychological and social processes. In the context of cognitive science, artificial intelligence is concerned with how advances in automating skills associated with humans provide proofs-of-concept about how humans might go about doing the same things. 

Developments in human-intelligence research in the last five years have been inspired more by collective intelligence,2 cognitive neuroscience,3 and artificial intelligence4 than by cognitive psychology or psychometrics. Working memory, attention, and executive processing, once understood in cognitive psychology as the mental components supporting intelligence, have become central topics in the study of cognitive neuroscience.5 Psychometric work on intelligence itself has splintered, due to the recognition that a single “intelligence” dimension like IQ does not adequately characterize human problem-solving potential.6 Abilities like empathy, impulse control, and storytelling turn out to be just as important. Over the past half decade, major shifts in the understanding of human intelligence have favored the topics discussed below.

Collective Intelligence

Research from a variety of fields reinforces the view that intelligence is a property not only of individuals, but also of collectives.7 As we know from work on the wisdom of crowds,8 collectives can be surprisingly insightful, especially when many individuals with relevant knowledge make independent contributions, unaffected by pressures to conform to group norms. Deliberating groups can also exhibit greater intelligence than individuals, especially when they follow norms that encourage challenge and constructive criticism. 
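The independence requirement behind the wisdom of crowds can be illustrated with a small simulation. This is not from the report; the population size, noise level, and seed below are arbitrary illustrative choices. When each person's estimate is the true value plus independent noise, averaging cancels much of that noise, so the crowd's mean estimate lands far closer to the truth than the typical individual does.

```python
import random
import statistics

def crowd_vs_individuals(true_value=100.0, n_people=500, noise_sd=20.0, seed=0):
    """Compare the crowd's averaged estimate against typical individual error.

    Each person reports the true value plus independent Gaussian noise.
    Returns (error of the crowd mean, median individual error).
    """
    rng = random.Random(seed)
    estimates = [true_value + rng.gauss(0, noise_sd) for _ in range(n_people)]
    crowd_error = abs(statistics.mean(estimates) - true_value)
    median_individual_error = statistics.median(
        abs(e - true_value) for e in estimates
    )
    return crowd_error, median_individual_error
```

Note what the simulation leaves out: if the noise terms were correlated—say, because people conform to group norms—the averaging advantage would shrink, which is exactly why independence matters in the studies cited above.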

Intelligence is a group property in the sense that the quality of a group’s performance does not depend on the IQs of the individual members of the group.9 It is easier to predict group performance if you know how good the group is at turn-taking or how empathetic the members are than if you know the IQs of group members. Research on children shows they are sensitive to what others know when deciding whose advice to take.10

Studies of social networks have shown the role of collective intelligence in determining individual beliefs. Some of those studies help explain the distribution of beliefs across society, showing that patterns of message transmission in social networks can account for both broad acceptance of beliefs endorsed by science and simultaneous minority acceptance of conspiracy theories.11 Such studies also offer a window into political polarization by showing that even a collection of rational decision-makers can end up splitting into incompatible subgroups under the influence of information bubbles.12

Figure: AI research on cooperation lags behind research on competition. Recently, the community has begun to invest more attention in cooperative games like Hanabi, shown here. Researchers at Facebook AI Research have shown that a combination of deep reinforcement learning and a “theory of mind”-like search procedure could achieve state-of-the-art performance in this cooperative game. It remains to be seen whether AI strategies learned from AI-AI cooperation transfer to AI-human scenarios. From: https://ai.facebook.com/blog/building-ai-that-can-master-complex-cooper…

In the most general sense, the research community is starting to see the mind as a collective entity spread across members of a group. People obviously have skills that they engage in as individuals, but the majority of knowledge that allows them to operate day by day sits in the heads of other members of their community.13 Our sense of understanding is affected by what others know, and we rely on others for the arguments that constitute our explanations, often without knowing that we are doing so. For instance, we might believe we understand the motivation for a health policy (wear a mask in public!) but actually we rely on experts or the internet to spell it out. We suppose we understand how everyday objects like toilets work—and discover our ignorance when we try to explain their mechanism,14 or when they break. At a broader level, our communities determine our political and social beliefs and attitudes. Political partisanship influences many beliefs and actions,15 some that have nothing to do with politics,16 even some related to life and death.17

Cognitive Neuroscience

Work in cognitive neuroscience has started to productively examine a variety of higher-level skills associated with the more traditional view of intelligence. Over the last few years, some consensus has formed around three partially competing ideas.

First, a pillar of cognitive neuroscience is that properties of individuals such as working memory and executive control are central to domain-independent intelligence, that which governs performance on all cognitive tasks regardless of their mode or topic. A common view is that this sort of intelligence is governed by neural speed.18 But there is increasing recognition that what matters is not global neural speed per se, but the efficiency of higher-order processing. Efficiency is influenced not just by speed, but by how processing is organized.19

A second idea gaining support is that higher-ability individuals are characterized by more efficient patterns of brain connectivity. Both of these ideas are consistent with the dominant view that intelligence is associated with higher-level brain areas in the parieto-frontal cortex.

The third idea is more radical. It suggests that the neural correlates of intelligence are distributed throughout the brain.20 In this view, the paramount feature of human intelligence is flexibility, the ability to continually update prior knowledge and to generate predictions. Intelligence derives from the brain’s ability to dynamically generate inferences that anticipate sensory inputs. This flexibility is realized as brain plasticity—the ability to change—housed in neural connections that exhibit what network scientists call a “small-world” pattern, where the brain balances relatively focal, densely interconnected, functional centers with long-range connections that allow for more global integration of information.
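The “small-world” balance described above can be made concrete with a toy sketch in the spirit of Watts–Strogatz networks. This is an illustrative aside, not an analysis from the report, and all parameters are arbitrary: a ring lattice has high local clustering but long paths, and adding a few random long-range shortcuts sharply shortens average path length while clustering stays high—the analogue of focal, densely interconnected centers plus long-range integrative connections.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def add_shortcuts(adj, m, seed=0):
    """Add m random long-range edges (the 'small-world' shortcut step)."""
    rng = random.Random(seed)
    nodes = list(adj)
    added = 0
    while added < m:
        a, b = rng.sample(nodes, 2)
        if b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            added += 1

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs (BFS per node)."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def avg_clustering(adj):
    """Mean fraction of each node's neighbor pairs that are themselves linked."""
    coeffs = []
    for u in adj:
        nbrs = list(adj[u])
        d = len(nbrs)
        if d < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for i in range(d) for j in range(i + 1, d)
                    if nbrs[j] in adj[nbrs[i]])
        coeffs.append(2 * links / (d * (d - 1)))
    return sum(coeffs) / len(coeffs)
```

Running this on a 100-node lattice before and after adding a handful of shortcuts shows the signature pattern: path length drops noticeably while local clustering barely moves.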

Cognitive neuroscience has taken a step in the direction of collective cognition via a sub-discipline called social neuroscience. Its motivation is the recognition that one of the brain’s unique and most important capacities is its ability to grasp what others are thinking and feeling. The field has thus focused on issues that are old stalwarts of social psychology—fairness, punishment, and people’s tendency to cooperate versus compete—and on identifying hormones and brain networks that are involved in these activities. Unlike other branches of cognitive neuroscience, social neuroscience recognizes that human cognitive, emotional, and behavioral processes have been shaped by our social environments.

A corollary of developments in cognitive neuroscience is the growth of the related field of computational neuroscience, which brings a computational perspective to the investigation of brains and minds. This field has been aided tremendously by the machine-learning paradigm known as reinforcement learning, which is concerned with learning from evaluative feedback—reward and punishment.21 It has proven to be a goldmine of ideas for understanding learning in the brain, since each element of the computational theory can be linked to processes at the cellular level. For instance, there is now broad consensus about the central role of the dopamine system in learning, decision-making, motivation, prediction, motor control, habit, and addiction.22
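The reward-prediction-error idea at the heart of this account can be stated in a few lines. The sketch below is a generic TD(0) value update from the reinforcement-learning literature, not code from any cited work; the prediction-error term it computes is the quantity that phasic dopamine activity is widely thought to report.

```python
def td_value_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) update.

    The prediction error, reward + gamma * V(next) - V(current), measures
    how much better or worse the outcome was than expected; the value of
    the current state is nudged toward the better-informed target.
    """
    prediction_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * prediction_error
    return prediction_error

# A two-state example: a cue is followed by an outcome with payoff 1.0.
values = {"cue": 0.0, "outcome": 0.0}
error = td_value_update(values, "cue", "outcome", reward=1.0)
```

Each ingredient here has a proposed cellular counterpart—learned values in synaptic strengths, the error term in dopamine release—which is why the framework has been such a goldmine for linking computation to brain mechanism.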

Computational Modeling

For decades now, trends in computational modeling of cognition have followed a recurring pattern, cycling between a primary focus on logic (symbolic reasoning) and on pattern recognition (neural networks). In the past five to 10 years, neural net models have been in the spotlight—due in small part to the success of computational neuroscience and in large part to the success of deep learning in AI. The computational modeling field is now full of deep-learning-inspired models of visual recognition, language processing, and other cognitive activities. There remains a fair amount of excitement about Bayesian modeling—a type of logic infused with probabilities. But the clash with deep learning techniques has stirred a heated debate. Is it better to make highly accurate predictions without understanding exactly why, or better to make less accurate predictions but with a clear logic behind them?23 We expect this debate will be further explored in future AI100 reports.
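The contrast in that debate can be made concrete with the simplest possible Bayesian model—a conjugate Beta–binomial update for a coin's bias. This is a generic textbook example, not drawn from the report: every number in the answer has an explicit probabilistic interpretation, which is precisely the "clear logic" that the deep-learning side trades away for predictive accuracy.

```python
def beta_binomial_posterior(prior_a, prior_b, heads, tails):
    """Conjugate Bayesian update for a coin's bias.

    A Beta(a, b) prior combined with observed heads/tails yields a
    Beta(a + heads, b + tails) posterior. Returns the posterior
    parameters and the posterior mean, the model's interpretable
    best guess at the bias.
    """
    a = prior_a + heads
    b = prior_b + tails
    return a, b, a / (a + b)

# A uniform Beta(1, 1) prior updated with 7 heads and 3 tails.
a, b, mean = beta_binomial_posterior(1, 1, 7, 3)
```

Unlike a deep network's millions of opaque weights, the two posterior parameters here can be read directly as "prior pseudo-counts plus observed counts," so every prediction can be traced back to its evidence.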

Beyond efforts to build computational models, deep learning models have become central methodological weapons in the cognitive science arsenal. They are the state-of-the-art tools for classification, helping experimentalists to quickly construct large stimulus sets for experiments and analysis. Moreover, huge networks trained on enormous quantities of data, such as GPT-3 and Grover, have opened new territory for the study of language and discourse at multiple levels.

The State of the Art

The nature of consciousness remains an open question. Some see progress;24 others believe we are no further along in understanding how to build a conscious agent than we were 46 years ago, when the philosopher Thomas Nagel famously posed the question, “What is it like to be a bat?”25 It is not even clear that understanding consciousness is necessary for understanding human intelligence. The question has become less pressing for this purpose as we have begun to recognize the limits of conscious information processing in human cognition,26 and as our models become increasingly based on emergent processes instead of central design.

Cognitive models motivate an analysis of how people integrate information from multiple modalities, multiple senses, and multiple sources: our brains, our physical bodies, physical objects (pen, paper, computers), and social entities (other people, Wikipedia). Although there is now a lot of evidence that it is the ability to do this integration that supports humanity’s more remarkable achievements, how we do so remains largely mysterious. Relatedly, there is increased recognition of the importance of processes that support intentional action, shared intentionality, free will, and agency. But there has been little fundamental progress on building rigorous models of these processes.

The cognitive sciences continue to search for a paradigm for studying human intelligence that will endure. Still, the search is uncovering critical perspectives—like collective cognition—and methodologies that will shape future progress, like cognitive neuroscience and the latest trends in computational modeling. These insights seem essential in our quest for building machines that we would truly judge as “intelligent.”


[1] Robert J. Sternberg and Scott Barry Kaufman (Eds.), The Cambridge Handbook of Intelligence, Cambridge University Press, 2011. 

[2] Thomas W. Malone and Michael S. Bernstein (Eds.), Handbook of Collective Intelligence. MIT Press, 2015.

[3]  Aron K. Barbey, Sherif Karama, and Richard J. Haier (Eds.), The Cambridge Handbook of Intelligence and Cognitive Neuroscience, Cambridge University Press, 2021.

[4] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep learning,” Nature, Vol. 521, pages 436–444: May 28, 2015. https://www.nature.com/articles/nature14539

[5] Mark D'Esposito and Bradley R. Postle, “The Cognitive Neuroscience of Working Memory,” Annual Review of Psychology, Vol. 66, pages 115–142: 2015. https://www.annualreviews.org/doi/abs/10.1146/annurev-psych-010814-015031

[6] Robert J. Sternberg and Scott Barry Kaufman (Eds.), The Cambridge Handbook of Intelligence, Cambridge University Press, 2011. 

[7] Steven Sloman and Philip Fernbach, The Knowledge Illusion, Riverhead Books, 2018. 

[8] Albert E. Mannes, Richard. P. Larrick, and Jack B. Soll, “The social psychology of the wisdom of crowds,” in J. I. Krueger (Ed.), Social judgment and decision making, Psychology Press, 2012. https://psycnet.apa.org/record/2011-26824-013

[9] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, And Thomas W. Malone, “Evidence for a Collective Intelligence Factor in the Performance of Human Groups,” Science, Vol. 330, Issue 6004, pages 686–688: Oct. 29, 2010. https://science.sciencemag.org/content/330/6004/686 

[10] Cecilia Heyes, “Who Knows? Metacognitive Social Learning Strategies,” Trends in Cognitive Sciences, Volume 20, Issue 3, Pages 204-213: March 2016. https://www.sciencedirect.com/science/article/pii/S1364661315003125 

[11] Cailin O'Connor and James Owen Weatherall, The Misinformation Age, Yale University Press, 2019.

[12] Jens Koed Madsen, Richard M. Bailey, and Toby D. Pilditch, “Large networks of rational agents form persistent echo chambers,” Scientific Reports 8, Article Number 12391, 2018. https://www.nature.com/articles/s41598-018-25558-7 

[13] Steven Sloman and Philip Fernbach, The Knowledge Illusion, Riverhead Books, 2018.

[14] Leonid Rozenblit, and Frank Keil. “The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth,” Cognitive Science, Vol. 26 (5), Pages 521–562, 2002. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3062901/

[15] Phillip J. Ehret, Leaf Van Boven, and David K. Sherman, “Partisan Barriers to Bipartisanship: Understanding Climate Policy Polarization,” Social Psychological and Personality Science, Volume 9, Issue 3, Pages 308–318: April 2018. https://journals.sagepub.com/doi/full/10.1177/1948550618758709

[16] Joseph Marks, Eloise Copland, Eleanor Loh, Cass R. Sunstein, and Tali Sharot, “Epistemic spillovers: Learning others’ political views reduces the ability to assess and use their expertise in nonpolitical domains,” Cognition, Volume 188, Pages 74–84: July 2019. https://www.sciencedirect.com/science/article/pii/S0010027718302609 

[17] Mae K. Fullerton, Nathaniel Rabb,  Sahit Mamidipaka, Lyle Ungar, Steven A. Sloman, “Evidence against risk as a motivating driver of COVID-19 preventive behaviors in the United States,” Journal of Health Psychology, June 2021. https://journals.sagepub.com/doi/10.1177/13591053211024726 

[18] Anna-Lena Schubert, Dirk Hagemann, and Gidon T. Frischkorn, “Is General Intelligence Little More Than The Speed Of Higher-order Processing?” Journal of Experimental Psychology: General, 146 (10), Pages 1498–1512, 2017. https://psycnet.apa.org/record/2017-30267-001

[19] Ibid.

[20] Aron K. Barbey, “Network Neuroscience Theory of Human Intelligence,” Trends in Cognitive Sciences, 22(1), Pages 8–20, 2018.  https://psycnet.apa.org/record/2017-57554-004

[21] Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, 2nd Ed., MIT Press, 2018. http://incompleteideas.net/book/the-book.html 

[22] Maria K. Eckstein, Linda Wilbrecht, and Anne GE Collins, “What Do Reinforcement Learning Models Measure? Interpreting Model Parameters In Cognition And Neuroscience,” Current Opinion in Behavioral Sciences, Volume 41, Pages 128–137, October 2021. https://www.sciencedirect.com/science/article/pii/S2352154621001236 

[23] Brendan Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman, “Building Machines That Learn And Think Like People,” Behavioral and Brain Sciences, 40, E253, 2017. https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/building-machines-that-learn-and-think-like-people/A9535B1D745A0377E16C590E14B94993 

[24] Giulio Tononi, Melanie Boly, Marcello Massimini, and Christof Koch, “Integrated Information Theory: From Consciousness To Its Physical Substrate,” Nature Reviews Neuroscience 17, Pages 450–461, 2016. https://www.nature.com/articles/nrn-2016-44 

[25] Michael A. Cerullo, “The Problem with Phi: A Critique of Integrated Information Theory,” PLoS Computational Biology 11 (9): September 2015. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004286 

[26] Matthieu Raoelison, Esther Boissin, Grégoire Borst, and Wim De Neys, “From Slow To Fast Logic: The Development Of Logical Intuitions,” Thinking & Reasoning, 2021. doi.org/10.1080/13546783.2021.1885488
Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.