
Public Safety and Security



Cities have already begun to deploy AI technologies for public safety and security, and by 2030 the typical North American city will rely heavily upon them. These include surveillance cameras that can detect anomalies pointing to a possible crime, drones, and predictive policing applications. As with most applications of AI, there are benefits and risks, and gaining public trust is crucial. While there are legitimate concerns that policing that incorporates AI may become overbearing or pervasive in some contexts, the opposite is also possible: AI may enable policing to become more targeted and be used only when needed. And assuming careful deployment, AI may also help remove some of the bias inherent in human decision-making.

One of the more successful uses of AI analytics is in detecting white-collar crime, such as credit card fraud.[101] Cybersecurity (including spam) is a widely shared concern, and machine learning is making an impact. AI tools may also prove useful in helping police manage crime scenes or search-and-rescue operations by helping commanders prioritize tasks and allocate resources, though these tools are not yet ready to automate such activities. Improvements in machine learning in general, and in transfer learning in particular, which speeds up learning in new scenarios by exploiting similarities with past scenarios, may facilitate such systems.
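
To make the fraud-detection point concrete, the sketch below (synthetic data and hypothetical features, not the method behind RSA’s product or any deployed system) shows how an off-the-shelf anomaly detector can flag transactions that deviate from a customer’s normal pattern:

```python
# Hypothetical sketch only: flagging anomalous credit-card transactions with
# an off-the-shelf anomaly detector. Features and data are synthetic; deployed
# fraud systems use far richer features and supervised models as well.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: [amount in dollars, hour of day,
# distance from home in km]. Fraud often looks extreme on such axes.
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 1000),   # typical purchase amounts
    rng.normal(14, 4, 1000) % 24,    # mostly daytime activity
    rng.exponential(8, 1000),        # usually close to home
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score two new transactions: one ordinary, one suspicious.
candidates = np.array([
    [40.0, 13.0, 5.0],        # lunch purchased nearby
    [2500.0, 3.5, 4200.0],    # large 3 a.m. charge far from home
])
print(detector.predict(candidates))  # 1 = looks normal, -1 = flagged
```

Production systems typically combine such unsupervised signals with supervised models trained on labeled fraud cases, but the core idea of scoring deviation from normal behavior is the same.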

The cameras deployed almost everywhere in the world today tend to be more useful for solving crimes than for preventing them.[102][103] This is due to the low quality of event identification from video and the lack of personnel to monitor massive video streams. As AI for this domain improves, it will better assist crime prevention and prosecution through more accurate event classification and efficient automatic processing of video to detect anomalies, including, potentially, evidence of police malpractice. These improvements could lead to even more widespread surveillance. Some cities have already added drones for surveillance purposes, and police use of drones to maintain security of ports, airports, coastal areas, waterways, and industrial facilities is likely to increase, raising concerns about privacy, safety, and other issues.
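
As a toy illustration of automatic anomaly flagging in video (modern systems rely on learned event classifiers rather than this simple heuristic, and the frames here are synthetic), a baseline of typical inter-frame change can surface segments worth a human reviewer’s attention:

```python
# Toy sketch: learn a baseline of inter-frame change from "normal" footage,
# then flag frames whose motion score falls far outside it. Frames here are
# synthetic arrays; real systems use learned event classifiers, not this.
import numpy as np

def motion_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute pixel change between consecutive grayscale frames."""
    return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

rng = np.random.default_rng(1)
quiet = rng.integers(100, 110, size=(200, 64, 64))   # static synthetic scene
baseline = motion_scores(quiet)
threshold = baseline.mean() + 4 * baseline.std()     # "normal" motion ceiling

# A new clip with an abrupt change partway through (e.g., object enters).
clip = rng.integers(100, 110, size=(50, 64, 64))
clip[30:] += 80
flagged = np.flatnonzero(motion_scores(clip) > threshold)
print("frames to route to a human reviewer:", flagged)  # the transition frame
```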

The New York Police Department’s CompStat was the first tool pointing toward predictive policing,[104] and many police departments now use it.[105] Machine learning significantly enhances the ability to predict where and when crimes are more likely to happen, and who may commit them. As dramatized in the movie Minority Report, predictive policing tools raise the specter of innocent people being unjustifiably targeted. But well-deployed AI prediction tools have the potential to remove or reduce human bias rather than reinforce it, and research and resources should be directed toward ensuring this effect.
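
A drastically simplified sketch of place-based forecasting follows, using synthetic incidents on a hypothetical grid. It also makes the bias caveat tangible: the model sees only reported incidents, so a skewed reporting or enforcement history skews the predicted hotspots:

```python
# Drastically simplified sketch of place-based crime forecasting: grid the
# city, then rank cells by a recency-weighted count of reported incidents.
# All data are synthetic. Note the caveat: the model sees only *reported*
# incidents, so biased inputs yield biased hotspots.
import numpy as np

rng = np.random.default_rng(2)
GRID = 10  # hypothetical 10x10 grid of city cells

# Synthetic incident history: columns are (cell_x, cell_y, days_ago).
incidents = np.column_stack([
    rng.integers(0, GRID, 500),
    rng.integers(0, GRID, 500),
    rng.integers(0, 365, 500),
])

# Recent incidents contribute more, decaying with a 60-day half-life.
weights = 0.5 ** (incidents[:, 2] / 60.0)
scores = np.zeros((GRID, GRID))
np.add.at(scores, (incidents[:, 0], incidents[:, 1]), weights)

# Top three cells proposed for extra (targeted, not pervasive) attention.
top = [divmod(int(i), GRID) for i in np.argsort(scores, axis=None)[::-1][:3]]
print("highest-scoring cells (x, y):", top)
```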

AI techniques can be used to develop intelligent simulations for training law-enforcement personnel to collaborate. While international criminal organizations and terrorist groups collude across borders, police forces from different countries still face difficulty joining forces to fight them. Training international groups of law-enforcement personnel to work as teams is very challenging. The European Union, through its Horizon 2020 program, currently supports such attempts in projects such as LawTrain.[106] The next step will be to move from simulation to actual investigations by providing tools that support such collaborations.

Tools already exist for scanning Twitter and other feeds to look for certain types of events and to assess how they may affect security. For example, AI can support social network analysis aimed at preventing those at risk from being radicalized by ISIS or other violent groups. Law enforcement agencies are increasingly interested in trying to detect plans for disruptive events from social media, and in monitoring activity at large gatherings of people for signs of security problems. There is significant work on crowd simulations to determine how crowds can be controlled. At the same time, legitimate concerns have been raised about the potential for law enforcement agencies to overreach and use such tools to violate people’s privacy.
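
As a minimal, hypothetical illustration of event detection in feeds (the counts are synthetic; a real pipeline would ingest an actual stream and add language understanding), a burst detector can flag when mentions of a topic suddenly spike above their recent baseline:

```python
# Minimal burst-detection sketch over a keyword's hourly mention counts, a
# stand-in for the social-media scanning described above. Counts are
# synthetic; a real pipeline would ingest a live feed and apply NLP filters.
import numpy as np

def bursts(counts: np.ndarray, window: int = 24, z: float = 4.0) -> list:
    """Flag hours whose count exceeds the trailing-window mean by z sigma."""
    flagged = []
    for t in range(window, len(counts)):
        base = counts[t - window:t]
        if counts[t] > base.mean() + z * base.std():
            flagged.append(t)
    return flagged

rng = np.random.default_rng(3)
hourly = rng.poisson(20, 72).astype(float)  # three days of baseline chatter
hourly[60:64] += 120                        # sudden spike in mentions

print("hours flagged for analyst review:", bursts(hourly))  # onset of spike
```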

The US Transportation Security Administration (TSA), Coast Guard, and the many other security agencies that currently rely on AI will likely increase that reliance to achieve significant improvements in efficiency and efficacy.[107] AI techniques such as vision, speech analysis, and gait analysis can aid interviewers, interrogators, and security guards in detecting possible deception and criminal behavior. For example, the TSA currently has an ambitious project, called DARMS, to redo airport security nationwide.[108] The system is designed to improve the efficiency and efficacy of airport security by relying on personal information to tailor screening to a person’s risk categorization and the flights being taken. The future vision for this project is a tunnel that checks people’s security while they walk through it. Once again, developers of this technology should be careful to avoid building in bias (e.g., about a person’s risk category) through the use of datasets that reflect prior bias.[109]
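
That caveat can be made operational. The sketch below, with synthetic scores, groups, and a purely illustrative threshold (not drawn from DARMS), shows the kind of simple audit a developer might run before deployment: compare a risk model’s flag rates across demographic groups on held-out data:

```python
# Sketch of the bias audit suggested above: before deploying a risk-
# categorization model, compare its flag rates across groups on held-out
# data. Scores, groups, and the threshold here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(4)
group = rng.integers(0, 2, 10_000)  # two synthetic demographic groups

# A model trained on skewed historical data may score one group higher even
# when the underlying risk is identical; simulate that kind of learned skew.
risk_score = rng.normal(0.30 + 0.10 * group, 0.15)
flagged = risk_score > 0.55         # hypothetical extra-screening threshold

for g in (0, 1):
    print(f"group {g}: {flagged[group == g].mean():.1%} flagged for screening")

# A large gap between the two rates signals that the training data, features,
# or threshold may encode prior bias and needs auditing before deployment.
```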

[101] "RSA Adaptive Authentication," RSA, accessed August 1, 2016, https://www.rsa.com/en-us/products-services/fraud-prevention/adaptive-authentication.

[102] Takeshi Arikuma and Yasunori Mochizuki, "Intelligent multimedia surveillance system for safer cities," APSIPA Transactions on Signal and Information Processing 5 (2016): 1-8.

[103] "Big Op-Ed: Shifting Opinions On Surveillance Cameras,", Talk of the Nation, NPR, April 22, 2013, accessed August 1, 2016, http://www.npr.org/2013/04/22/178436355/big-op-ed-shifting-opinions-on-surveillance-cameras.

[104] Walter L. Perry, Brian McInnis, Carter C. Price, Susan Smith, and John S. Hollywood, “The Role of Crime Forecasting in Law Enforcement Operations,” RAND Corporation Report 233 (2013).

[105] "CompStat," Wikipedia, last modified July 28, 2016, accessed August 1, 2016, https://en.wikipedia.org/wiki/CompStat.

[106] LAW-TRAIN, accessed August 1, 2016, http://www.law-train.eu/.

[107] Milind Tambe, Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned (New York: Cambridge University Press, 2011).

[108] Peter Neffenger, “TSA’s 2017 Budget—A Commitment to Security (Part I),” Department of Homeland Security, March 1, 2016, accessed August 1, 2016, https://www.tsa.gov/news/testimony/2016/03/01/hearing-fy17-budget-request-transportation-security-administration.

[109] Crawford, “AI’s White Guy Problem.”
