
Standing Questions and Responses


SQ1. What are some examples of pictures that reflect important progress in AI and its influences?

One picture appears in each of the sections that follow.

View the full Study Panel response to SQ1 

SQ2. What are the most important advances in AI?

Summary: People are using AI more today to dictate to their phone, get recommendations, enhance their backgrounds on conference calls, and much more. Machine-learning technologies have moved from the academic realm into the real world in a multitude of ways. Neural network language models learn about how words are used by identifying patterns in naturally occurring text, supporting applications such as machine translation, text classification, speech recognition, writing aids, and chatbots. Image-processing technology is now widespread, but applications such as creating photo-realistic pictures of people and recognizing faces are seeing a backlash worldwide. During 2020, robotics development was driven in part by the need to support social distancing during the COVID-19 pandemic. Predicted rapid progress in fully autonomous driving failed to materialize, but autonomous vehicles have begun operating in selected locales. AI tools now exist for identifying a variety of eye and skin disorders, detecting cancers, and supporting measurements needed for clinical diagnosis. For financial institutions, uses of AI are going beyond detecting fraud and enhancing cybersecurity to automating legal and compliance documentation and detecting money laundering. Recommender systems now have a dramatic influence on people’s consumption of products, services, and content, but they raise significant ethical concerns.
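
The summary above notes that neural language models learn word usage by identifying patterns in naturally occurring text; the core task behind that claim is next-word prediction. The sketch below is a minimal illustration, not material from the report: the toy corpus, the model size, and the choice of PyTorch are all assumptions made for demonstration.

```python
# Minimal sketch (illustrative, not from the report): a tiny neural language model
# that learns word-usage patterns by predicting which word follows which.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
ix = {w: i for i, w in enumerate(vocab)}

# Training pairs: each word is used to predict the word that follows it.
xs = torch.tensor([ix[w] for w in corpus[:-1]])
ys = torch.tensor([ix[w] for w in corpus[1:]])

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):               # a few hundred steps suffice on this toy corpus
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)  # how well does the model predict the next word?
    loss.backward()
    opt.step()

# After training, the model assigns high probability to words that followed "the".
probs = torch.softmax(model(torch.tensor([ix["the"]])), dim=-1)
print({w: round(probs[0, ix[w]].item(), 2) for w in vocab})
```

The same prediction objective, scaled up to web-sized corpora and much larger networks, underlies the translation, speech-recognition, and chatbot applications mentioned above.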

Read the full Study Panel response to SQ2

SQ3. What are the most inspiring open grand challenge problems?

Summary: Recent years have seen remarkable progress on some of the challenge problems that help drive AI research, such as answering questions based on reading a textbook, helping people drive so as to avoid accidents, and translating speech in real time. Others, like making independent mathematical discoveries, have remained open. A lesson learned from social science- and humanities-inspired research over the past five years is that AI research that is overly tuned to concrete benchmarks can take us further away from the goal of cooperative and well-aligned AI that serves humans’ needs, goals, and values. A number of broader challenges should be kept in mind: exhibiting greater generalizability, detecting and using causality, and noticing and exhibiting normativity are three particularly important ones. An overarching and inspiring challenge that brings many of these ideas together is to build machines that can cooperate and collaborate seamlessly with humans and can make decisions that are aligned with fluid and complex human values and preferences. 

Read the full Study Panel response to SQ3

SQ4. How much have we progressed in understanding the key mysteries of human intelligence?

Summary: A view of human intelligence that has gained prominence over the last five years holds that it is collective—that individuals are just one cog in a larger intellectual machine. AI is developing in ways that improve its ability to collaborate with and support people, rather than in ways that mimic human intelligence. The study of intelligence has become the study of how people are able to adapt and succeed, not just how an impressive information-processing system works. Over the past half decade, major shifts in the understanding of human intelligence have favored three topics: collective intelligence, the view that intelligence is a property not only of individuals, but also of collectives; cognitive neuroscience, studying how the brain’s hardware is involved in implementing psychological and social processes; and computational modeling, which is now full of machine-learning-inspired models of visual recognition, language processing, and other cognitive activities. The nature of consciousness and how people integrate information from multiple modalities, multiple senses, and multiple sources remain largely mysterious. Insights in these areas seem essential in our quest for building machines that we would truly judge as “intelligent”.

Read the full Study Panel response to SQ4

SQ5. What are the prospects for more general artificial intelligence?

Summary: The field is still far from producing fully general AI systems. However, in the last few years, important progress has been made in the form of three key capabilities. First is the ability for a system to learn in a self-supervised or self-motivated way. A self-supervised model architecture called the transformer has become the go-to approach for natural language processing and has been used in diverse applications, including machine translation and Google web search. Second is the ability for a single AI system to learn in a continual way to solve problems from many different domains without requiring extensive retraining for each. One influential approach is to train a deep neural network on a variety of tasks, where the objective is for the network to learn general-purpose, transferable representations, as opposed to representations tailored specifically to any particular task. Third is the ability for an AI system to generalize between tasks—that is, to adapt the knowledge and skills the system has acquired for one task to new situations. A promising direction is the use of intrinsic motivation, in which an agent is rewarded for exploring new areas of the problem space. AI systems will likely remain very far from human abilities, however, without being more tightly coupled to the physical world.
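
The intrinsic-motivation idea mentioned above can be made concrete with a count-based exploration bonus: the agent receives extra reward for visiting states it has rarely seen. The sketch below is illustrative and not taken from the report; the toy environment, the bonus formula, and the random stand-in policy are all assumptions.

```python
# Minimal sketch (illustrative, not from the report): count-based intrinsic motivation.
# The agent earns an extra "curiosity" bonus for visiting rarely seen states.
import random
from collections import defaultdict

visits = defaultdict(int)

def intrinsic_bonus(state):
    """Novelty reward: the less often a state has been visited, the larger the bonus."""
    return 1.0 / (1 + visits[state]) ** 0.5

def step(state, action):
    """Toy 1-D environment: move left or right; external reward only at the far end."""
    nxt = max(0, min(20, state + action))
    extrinsic = 1.0 if nxt == 20 else 0.0
    return nxt, extrinsic

state, total = 0, 0.0
for t in range(500):
    action = random.choice([-1, 1])              # stand-in for a learned policy
    state, extrinsic = step(state, action)
    visits[state] += 1
    total += extrinsic + intrinsic_bonus(state)  # combined signal the agent learns from

print(f"states visited: {len(visits)}, cumulative reward signal: {total:.1f}")
```

Because the bonus shrinks as visit counts grow, the agent is nudged toward unexplored parts of the problem space even when extrinsic reward is sparse, which is the behavior the summary points to as one route toward broader generalization.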

Read the full Study Panel response to SQ5

SQ6. How has public sentiment towards AI evolved, and how should we inform/educate the public?

Summary: Over the last few years, AI and related topics have gained traction in the zeitgeist. In the 2017–18 session of the US Congress, for instance, mentions of AI-related words were more than ten times higher than in previous sessions. Media coverage of AI often distorts and exaggerates AI’s potential at both the positive and negative extremes, but it has helped to raise public awareness of legitimate concerns about AI bias, lack of transparency and accountability, and the potential of AI-driven automation to contribute to rising inequality. Governments, universities, and nonprofits are attempting to broaden the reach of AI education, including investing in new AI-related curricula. Nuanced views of AI as a human responsibility are growing, including an increasing effort to engage with ethical considerations. Broad international movements in Europe, the US, China, and the UK have been pushing back against the indiscriminate use of facial-recognition systems on the general public. More public outreach from AI scientists would be beneficial as society grapples with the impacts of these technologies. It is important that the AI research community move beyond the goal of educating or talking to the public and toward more participatory engagement and conversation with the public. 

Read the full Study Panel response to SQ6

SQ7. How should governments act to ensure AI is developed and used responsibly?

Summary: Since the publication of the last AI100 report just five years ago, over 60 countries have engaged in national AI initiatives, and several significant new multilateral efforts are aimed at spurring effective international cooperation on related topics. To date, few countries have moved definitively to regulate AI specifically, outside of rules directly related to the use of data. As of 2020, 24 countries had opted for permissive laws to allow autonomous vehicles to operate in limited settings. As yet, only Belgium has enacted laws on the use of lethal autonomous weapons. The oversight of social media platforms has become a hotly debated issue worldwide. Cooperative efforts among countries have also emerged in the last several years. Appropriately addressing the risks of AI applications will inevitably involve adapting regulatory and policy systems to be more responsive to the rapidly advancing pace of technology development. Researchers, professional organizations, and governments have begun development of AI or algorithm impact assessments (akin to the use of environmental impact assessments before beginning new engineering projects).

Read the full Study Panel response to SQ7

SQ8. What should the roles of academia and industry be, respectively, in the development and deployment of AI technologies and the study of the impacts of AI?

Summary: As AI takes on added importance across most of society, there is potential for conflict between the private and public sectors regarding the development, deployment, and oversight of AI technologies. The commercial sector continues to lead in AI investment, and many researchers are opting out of academia for full-time roles in industry. The presence and influence of industry-led research at AI conferences has increased dramatically, raising concerns that published research is becoming more applied and that topics that might run counter to commercial interests will be underexplored. As student interest in computer science and AI continues to grow, more universities are developing standalone AI/machine-learning educational programs. Company-led courses are becoming increasingly common and can fill curricular gaps.  Studying and assessing the societal impacts of AI, such as concerns about the potential for AI and machine-learning algorithms to shape polarization by influencing content consumption and user interactions, is easiest when academic-industry collaborations facilitate access to data and platforms. 

Read the full Study Panel response to SQ8

SQ9. What are the most promising opportunities for AI?

Summary: AI approaches that augment human capabilities can be very valuable in situations where humans and AI have complementary strengths. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data. It is becoming increasingly clear that all stakeholders need to be involved in the design of AI assistants to produce a human-AI team that outperforms either alone. AI software can also function autonomously, which is helpful when large amounts of data need to be examined and acted upon. Summarization and interactive chat technologies have great potential. As AI becomes more applicable in lower-data regimes, predictions can increase the economic efficiency of everyday users by helping people and businesses find relevant opportunities, goods, and services, matching producers and consumers. We expect many mundane and potentially dangerous tasks to be taken over by AI systems in the near future. In most cases, the main factors holding back these applications are not in the algorithms themselves, but in the collection and organization of appropriate data and the effective integration of these algorithms into their broader sociotechnical systems.

Read the full Study Panel response to SQ9

SQ10. What are the most pressing dangers of AI?

Summary: As AI systems prove to be increasingly beneficial in real-world applications, they have broadened their reach, causing risks of misuse, overuse, and explicit abuse to proliferate. One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool. There is an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination. Without transparency concerning either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially impact their lives are being made. AI systems are being used in service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. Insufficient thought given to the human factors of AI integration has led to oscillation between mistrust of the system and over-reliance on the system. AI algorithms are playing a role in decisions concerning distributing organs, vaccines, and other elements of healthcare, meaning these approaches have literal life-and-death stakes. 

Read the full Study Panel response to SQ10

SQ11. How has AI impacted socioeconomic relationships?

Summary: Though characterized by some as a key to increasing material prosperity for human society, AI’s potential to replicate human labor at a lower cost has also raised concerns about its impact on the welfare of workers. To date, AI has not been responsible for large aggregate economic effects. But that may be because its impact is still relatively localized to narrow parts of the economy. In the grand scheme of rising inequality, AI has thus far played a very small role. The most important reason is that the bulk of the increase in economic inequality across many countries predates significant commercial use of AI. Since these technologies might be adopted by firms simply to redistribute gains to their owners, AI could have a big impact on inequality in the labor market and economy without registering any impact on productivity growth. No evidence of such a trend is yet apparent, but it may become so in the future and is worth watching closely. To date, the economic significance of AI has been comparatively small—particularly relative to expectations, among both optimists and pessimists. Other forces—globalization, the business cycle, and a pandemic—have had a far larger impact in recent decades. But if policymakers underreact to coming changes, innovations may simply result in a pie that is sliced ever more unequally.

Read the full Study Panel response to SQ11

SQ12. Does it appear “building in how we think” works as an engineering strategy in the long run?

Summary: AI has its own fundamental nature-versus-nurture-like question. Should we attack new challenges by applying general-purpose problem-solving methods, or is it better to write specialized algorithms, designed by experts, for each particular problem? Roughly, are specific AI solutions better engineered in advance by people (nature) or learned by the machine from data (nurture)? The pendulum has swung back and forth multiple times in the history of the field. In the 2010s, the addition of big data and faster processors allowed general-purpose methods like deep learning to outperform specialized hand-tuned methods. But now, in the 2020s, these general methods are running into limits—available computation, model size, sustainability, availability of data, brittleness, and a lack of semantics—that are starting to drive researchers back into designing specialized components of their systems to try to work around them. Indeed, even machine-learning systems benefit from designers using the right architecture for the right job. The recent dominance of deep learning may be coming to an end. To continue making progress, AI researchers will likely need to embrace both general- and special-purpose hand-coded methods, as well as ever faster processors and bigger data. 

Read the full Study Panel response to SQ12

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.