Guidelines for the Future
Faced with the profound changes that AI technologies can produce, pressure for “more” and “tougher” regulation is inevitable. Misunderstanding about what AI is and is not, especially against a background of scare-mongering, could fuel opposition to technologies that could benefit everyone. This would be a tragic mistake. Regulation that stifles innovation, or relocates it to other jurisdictions, would be similarly counterproductive.
Fortunately, principles that guide successful regulation of current digital technologies can be instructive. A recent multi-year study comparing privacy regulation in four European countries and the United States, for example, yielded counter-intuitive results. Countries with strict and detailed regulations, such as Spain and France, bred a “compliance mentality” within corporations, which had the effect of discouraging both innovation and robust privacy protections. Rather than taking responsibility for privacy protection internally and developing a professional staff to foster it in business and manufacturing processes, or engaging with privacy advocates or academics outside their walls, these companies viewed privacy as a compliance activity. Their focus was on avoiding fines or punishments, rather than on proactively designing technology and adapting practices to protect privacy.
By contrast, the regulatory environments in the United States and Germany, which combined more ambiguous goals with tough transparency requirements and meaningful enforcement, were more successful in catalyzing companies to view privacy as their responsibility. Broad legal mandates encouraged companies to develop a professional staff and processes to enforce privacy controls, engage with outside stakeholders, and adapt their practices to technology advances. Requiring greater transparency enabled civil society groups and media to become credible enforcers both in court and in the court of public opinion, making privacy more salient to corporate boards and leading them to further invest in privacy protection.
In AI, too, regulators can strengthen a virtuous cycle of activity involving internal and external accountability, transparency, and professionalization, rather than narrow compliance. As AI is integrated into cities, it will continue to challenge existing protections for values such as privacy and accountability. Like other technologies, AI has the potential to be used for good or nefarious purposes. This report has tried to highlight the potential for both. A vigorous and informed debate about how to best steer AI in ways that enrich our lives and our society, while encouraging creativity in the field, is an urgent and vital need. Policies should be evaluated as to whether they democratically foster the development and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few. And since future AI technologies and their effects cannot be foreseen with perfect clarity, policies will need to be continually re-evaluated in the context of observed societal challenges and evidence from fielded systems.
As this report documents, significant AI-related advances have already had an impact on North American cities over the past fifteen years, and even more substantial developments will occur over the next fifteen. Recent advances are largely due to the growth and analysis of large data sets enabled by the Internet, advances in sensory technologies and, more recently, applications of “deep learning.” In the coming years, as the public encounters new AI applications in domains such as transportation and healthcare, they must be introduced in ways that build trust and understanding, and respect human and civil rights. While encouraging innovation, policies and processes should address ethical, privacy, and security implications, and should work to ensure that the benefits of AI technologies will be spread broadly and fairly. Doing so will be critical if Artificial Intelligence research and its applications are to exert a positive influence on North American urban life in 2030 and beyond.
Kenneth A. Bamberger and Deirdre K. Mulligan, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe (Cambridge, Massachusetts: MIT Press, 2015).
Cite This Report
Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller. "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016. Doc: http://ai100.stanford.edu/2016-report. Accessed: September 6, 2016.
AI100 Standing Committee and Study Panel
© 2016 by Stanford University. Artificial Intelligence and Life in 2030 is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.