
Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

5. About this canvassing of experts

This report is the second of two reports issued in 2021 that share results from the 12th “Future of the Internet” canvassing by the Pew Research Center and Elon University’s Imagining the Internet Center. The first report examined the “new normal” for digital life that could exist in 2025 in the wake of the outbreak of the global pandemic and other crises in 2020.

For this report, experts were asked to respond to several questions about the future of ethical artificial intelligence via a web-based instrument that was open to them from June 30-July 27, 2020. In all, 602 people responded after invitations were emailed to more than 10,000 experts and members of the interested public. The results published here come from a nonscientific, nonrandom, opt-in sample and are not projectable to any population other than the individuals expressing their points of view in this sample.

Respondent answers were solicited through the following prompts:

Application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues, including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts.

The question on the future of ethical AI: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?

  • YES, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030
  • NO, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030

Follow-up question on ethical AI, seeking a written elaboration on the previous question: Will AI mostly be used in ethical or questionable ways in the next decade? Why? What gives you the most hope? What worries you the most? How do you see AI applications making a difference in the lives of most people? As you look at the global competition over AI systems, what issues concern you or excite you?

Results for the quantitative question regarding how widely deployed ethical AI systems will be in 2030:

  • 32% said YES, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030
  • 68% said NO, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030

The respondents were also asked to consider the possible role that quantum computing might play in creating ethical AI systems. The prompting question was:

Quantum computing? How likely is it that quantum computing will evolve over the next decade to assist in creating ethical artificial intelligence systems?

  • Very likely
  • Somewhat likely
  • Somewhat unlikely
  • Very unlikely

In all, 551 respondents answered this question: 17% said “very likely”; 32% said “somewhat likely”; 27% said “somewhat unlikely”; and 24% said “very unlikely.”

The follow-up prompt to elicit their open-ended written answers was:

Follow-up on quantum computing (written elaboration). If you do not think it likely that quantum computing will evolve to assist in building ethical AI, why not? If you think that will be likely, why do you think so? How will that evolution unfold and when? Will humans still be in the loop as AI systems are created and implemented?

The web-based instrument was first sent directly to an international set of experts (primarily U.S.-based) identified and accumulated by Pew Research and Elon University during previous studies, as well as those identified in a 2003 study of people who made predictions about the likely future of the internet between 1990 and 1995. Additional experts with proven interest in digital health, artificial intelligence ethics and other aspects of these particular research topics were also added to the list. We invited a large number of professionals and policy people from government bodies and technology businesses, think tanks and interest networks (for instance, those that include professionals and academics in law, ethics, medicine, political science, economics, social and civic innovation, sociology, psychology and communications); globally located people working with communications technologies in government positions; technologists and innovators; top universities’ engineering/computer science, political science, sociology/anthropology and business/entrepreneurship faculty, graduate students and postgraduate researchers; plus some who are active in civil society organizations that focus on digital life; and those affiliated with newly emerging nonprofits and other research units examining the impacts of digital life.

Among those invited were researchers, developers and business leaders from leading global organizations, including Oxford, Cambridge, MIT, Stanford and Carnegie Mellon universities; Google, Microsoft, Akamai, IBM and Cloudflare; leaders active in the advancement of and innovation in global communications networks and technology policy, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR), and the Organization for Economic Cooperation and Development (OECD). Invitees were encouraged to share the survey link with others they believed would have an interest in participating; thus, there may have been something of a “snowball” effect as some invitees invited others to weigh in.

The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise. Some responses are lightly edited for style and readability.

A large number of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted, when available, in this report.

In this canvassing, 65% of respondents answered at least one of the demographic questions. Seventy percent of these 591 people identified as male and 30% as female. Some 77% identified themselves as being based in North America, while 23% were located in other parts of the world. When asked about their “primary area of interest,” 37% identified themselves as professor/teacher; 14% as research scientists; 13% as futurists or consultants; 9% as technology developers or administrators; 7% as advocates or activist users; 8% as entrepreneurs or business leaders; 3% as pioneers or originators; and 10% specified their primary area of interest as “other.”

Following is a list noting a selection of key respondents who took credit for their responses on at least one of the overall topics in this canvassing. Workplaces are included to show expertise; they reflect the respondents’ job titles and locations at the time of this canvassing.

Sam Adams, 24-year veteran of IBM now senior research scientist in artificial intelligence for RTI International; Micah Altman, a social and information scientist at MIT; Robert D. Atkinson, president of the Information Technology and Innovation Foundation; David Barnhizer, professor of law emeritus and co-author of “The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation of Work, Wealth and the Social Order?”; Marjory S. Blumenthal, director of the science, technology and policy program at RAND Corporation; Gary A. Bolles, chair for the future of work at Singularity University; danah boyd, principal researcher, Microsoft Research, and founder of Data and Society; Stowe Boyd, consulting futurist expert in technological evolution and the future of work; Henry E. Brady, dean of the Goldman School of Public Policy at the University of California, Berkeley; Tim Bray, technology leader who has worked for Amazon, Google and Sun Microsystems; David Brin, physicist, futures thinker and author of the science fiction novels “Earth” and “Existence”; Nigel Cameron, president emeritus, Center for Policy on Emerging Technologies; Kathleen M. Carley, director, Center for Computational Analysis of Social and Organizational Systems, Carnegie Mellon University; Jamais Cascio, distinguished fellow at the Institute for the Future; Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google; Barry Chudakov, founder and principal at Sertain Research; Adam Clayton Powell III, senior fellow, USC Annenberg Center on Communication Leadership and Policy; Christina J. Colclough, an expert on the future of work and the politics of technology and ethics in AI; Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for science, technology and innovation policy; Kenneth Cukier, senior editor at The Economist and coauthor of “Big Data”; Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that oversaw the UK’s initial networking developments; Rosalie Day, policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust; Abigail De Kosnik, director of the Center for New Media, University of California, Berkeley; Amali De Silva-Mitchell, futurist and consultant participating in global internet governance processes; Jeanne Dietsch, New Hampshire senator and former CEO of MobileRobots Inc.; Stephen Downes, senior research officer for digital technologies, National Research Council of Canada; Bill Dutton, professor of media and information policy at Michigan State University, former director of the Oxford Internet Institute; Esther Dyson, internet pioneer, journalist, entrepreneur and executive founder of Way to Wellville; Glenn Edens, professor at Thunderbird School of Global Management, Arizona State University, previously a vice president at PARC; June Anne English-Lueck, professor of anthropology at San Jose State University and a distinguished fellow at the Institute for the Future; Susan Etlinger, industry analyst for Altimeter Group; Daniel Farber, author, historian and professor of law at the University of California, Berkeley; Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University; Seth Finkelstein, consulting programmer and Electronic Frontier Foundation Pioneer Award winner; Rob Frieden, professor of telecommunications law at Penn 
State, previously worked with Motorola and held senior U.S. policy positions at the FCC and National Telecommunications and Information Administration; Edward A. Friedman, professor emeritus of technology management at Stevens Institute of Technology; Jerome C. Glenn, co-founder and CEO of the futures-research organization The Millennium Project; Mike Godwin, former general counsel for the Wikimedia Foundation and author of Godwin’s Law; Kenneth Grady, futurist, founding author of The Algorithmic Society blog; Erhardt Graeff, researcher expert in the design and use of technology for civic and political engagement, Olin College of Engineering; Benjamin Grosof, chief scientist at Kyndi, a Silicon Valley AI startup; Glenn Grossman, a consultant of banking analytics at FICO; Wendy M. Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic; Jonathan Grudin, principal researcher, Microsoft; John Harlow, smart-city research specialist at the Engagement Lab at Emerson College; Brian Harvey, emeritus professor of computer science at the University of California, Berkeley; Su Sonia Herring, a Turkish-American internet policy researcher with Global Internet Policy Digital Watch; Mireille Hildebrandt, expert in cultural anthropology and the law and editor of “Law, Human Agency and Autonomic Computing”; Gus Hosein, executive director of Privacy International; Stephan G. Humer, professor and director, Internet Sociology Department at Fresenius University of Applied Sciences in Berlin; Alan Inouye, senior director for public policy and government, American Library Association; Shel Israel, Forbes columnist and author of many books on disruptive technologies; Maggie Jackson, former Boston Globe columnist and author of “Distracted: Reclaiming Our Focus in a World of Lost Attention”; Jeff Jarvis, director, Tow-Knight Center, City University of New York; Jeff Johnson, professor of computer science, University of San Francisco, previously worked at Xerox, HP Labs and Sun Microsystems; Paul Jones, professor emeritus of information science at the University of North Carolina, Chapel Hill; Anthony Judge, editor of the “Encyclopedia of World Problems and Human Potential”; David Karger, professor at MIT’s Computer Science and Artificial Intelligence Laboratory; Frank Kaufmann, president of the Twelve Gates Foundation; Eric Knorr, pioneering technology journalist and editor in chief of IDG; Jonathan Kolber, a member of the TechCast Global panel of forecasters and author of a book about the threats of automation; Gary L. Kreps, director of the Center for Health and Risk Communication at George Mason University; David Krieger, director of the Institute for Communication and Leadership, based in Switzerland; Benjamin Kuipers, professor of computer science and engineering at the University of Michigan; Patrick Larvie, global lead for the workplace user-experience team at one of the world’s largest technology companies; Jon Lebkowsky, CEO, founder and digital strategist, Polycot Associates; Sam Lehman-Wilzig, professor and former chair of communication at Bar-Ilan University, Israel; Mark Lemley, director of Stanford University’s Program in Law, Science and Technology; Peter Levine, professor of citizenship and public affairs at Tufts University; Rich Ling, professor at Nanyang Technological University, Singapore; J. 
Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant; Nathalie Maréchal, senior research analyst at Ranking Digital Rights; Alice E. Marwick, assistant professor of communication at the University of North Carolina, Chapel Hill, and adviser for the Media Manipulation project at the Data & Society Research Institute; Katie McAuliffe, executive director for Digital Liberty; Pamela McCorduck, writer, consultant and author of several books, including “Machines Who Think”; Melissa Michelson, professor of political science, Menlo College; Steven Miller, vice provost and professor of information systems, Singapore Management University; James Morris, professor of computer science at Carnegie Mellon; David Mussington, senior fellow at CIGI and director at the Center for Public Policy and Private Enterprise at the University of Maryland; Alan Mutter, consultant and former Silicon Valley CEO; Beth Noveck, director, New York University Governance Lab; Concepcion Olavarrieta, foresight and economic consultant and president of the Mexico node of The Millennium Project; Fabrice Popineau, an expert on AI, computer intelligence and knowledge engineering based in France; Oksana Prykhodko, director of the European Media Platform, an international NGO; Calton Pu, professor and chair in the School of Computer Science at Georgia Tech; Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics; Glynn Rogers, retired, previously senior principal engineer and a founding member at the CSIRO Centre for Complex Systems Science; Douglas Rushkoff, writer, documentarian and professor of media, City University of New York; Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster; Greg Sherwin, vice president for engineering and information technology at Singularity University; Henning Schulzrinne, Internet Hall of Fame member, co-chair of the Internet Technical Committee of the IEEE and professor at Columbia University; Ben Shneiderman, distinguished professor of computer science and founder of Human Computer Interaction Lab, University of Maryland; John Smart, foresight educator, scholar, author, consultant and speaker; Jim Spohrer, director of cognitive open technologies and the AI developer ecosystem at IBM; Sharon Sputz, executive director, strategic programs, Columbia University Data Science Institute; Jon Stine, executive director of the Open Voice Network, setting standards for AI-enabled vocal assistance; Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy”; Brad Templeton, internet pioneer, futurist and activist, a former president of the Electronic Frontier Foundation; Ed Terpening, consultant and industry analyst with the Altimeter Group; Ian Thomson, a pioneer developer of the Pacific Knowledge Hub; Joseph Turow, professor of communication, University of Pennsylvania; Dan S. 
Wallach, a professor in the systems group at Rice University’s Department of Computer Science; Wendell Wallach, ethicist and scholar at Yale University’s Interdisciplinary Center for Bioethics; Amy Webb, founder, Future Today Institute, and professor of strategic foresight, New York University; Jim Witte, director of the Center for Social Science Research at George Mason University; Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool and the research lead for the UK government’s Digital Culture team; Warren Yoder, longtime director at Public Policy Center of Mississippi, now an executive coach; Jillian York, director of international freedom of expression for the Electronic Frontier Foundation; and Ethan Zuckerman, director, MIT’s Center for Civic Media, and co-founder, Global Voices.

A selection of institutions at which some of the respondents work or have affiliations:

AAI Foresight; AI Now Research Institute of New York University; AI Impact Alliance; Access Now; Akamai Technologies; Altimeter Group; American Enterprise Institute; American Institute for Behavioral Research and Technology; American Library Association; American University; American University of Afghanistan; Anticipatory Futures Group; APNIC; Arizona State University; Aspen Institute; AT&T; Atlantic Council; Australian National University; Bar-Ilan University; Benton Institute; Bloomberg Businessweek; Brookings Institution; BT Group; Canada Without Poverty; Carleton University; Carnegie Endowment for International Peace; Carnegie Mellon University; Center for a New American Security; Center for Data Innovation; Center for Global Enterprise; Center for Health and Risk Communication at George Mason University; Center for Strategic and International Studies; Centre for International Governance Innovation; Centre National de la Recherche Scientifique, France; Chinese University of Hong Kong; Cisco Systems; Citizens and Technology Lab; City University of New York; Cloudflare; Columbia University; Constellation Research; Convo Research and Strategy; Cornell University; Council of Europe; Data Across Sectors for Health at the Illinois Public Health Institute; Data & Society Research Institute; Data Science Institute at Columbia; Davis Wright Tremaine LLP; Dell EMC; Deloitte; Digital Grassroots; Digital Value Institute; Disney; DotConnectAfrica; The Economist; Electronic Frontier Foundation; Electronic Privacy Information Center; Enterprise Roundtable Accelerator; Emerson College; Fight for the Future; European Broadcasting Union; Foresight Alliance; Future Today Institute; Futuremade; Futurous; FuturePath; Futureproof Strategies; General Electric; Georgetown University; Georgia Tech; Global Business Network; Global Internet Policy Digital Watch; Global Voices; Google; Hague Centre for Strategic Studies, Harvard University; Hochschule Fresenius University of Applied Sciences; Hokkaido University; IBM; Indiana University; Internet Corporation for Assigned Names and Numbers (ICANN); IDG; Ignite Social Media; Information Technology and Innovation Foundation; Institute for the Future; Instituto Superior Técnico, Portugal; Institute for Ethics and Emerging Technologies; Institute for Prediction Technology; International Centre for Free and Open Source Software; International Telecommunication Union; Internet Engineering Task Force (IETF); Internet Society; Internet Systems Consortium; Johns Hopkins University; Institute of Electrical and Electronics Engineers (IEEE); Ithaka; Juniper Networks; Kyndi; Le Havre University; Leading Futurists; Lifeboat Foundation; MacArthur Research Network on Open Governance; Macquarie University, Sydney, Australia; Massachusetts Institute of Technology; Menlo College; Mercator XXI; Michigan State University; Microsoft Research; Millennium Project; Mimecast; Missions Publiques; Moses & Singer LLC; Nanyang Technological University, Singapore; Nautilus Magazine; New York University; Namibia University of Science and Technology; National Distance University of Spain; National Research Council of Canada; Nonprofit Technology Network; Northeastern University; North Carolina State University; Olin College of Engineering; Pinterest; Policy Horizons Canada; Predictable Network Solutions; R Street Institute; RAND; Ranking Digital Rights; Rice University; Rose-Hulman Institute of Technology; RTI International; San Jose State University; Santa Clara University; Sharism Lab; 
Singularity University; Singapore Management University; Södertörn University, Sweden; Social Science Research Council; Sorbonne University; South China University of Technology; Spacetel Consultancy LLC; Stanford University; Stevens Institute of Technology; Syracuse University; Tallinn University of Technology; TechCast Global; Tech Policy Tank; Telecommunities Canada; Tufts University; The Representation Project; Twelve Gates Foundation; United Nations; University of California, Berkeley; University of California, Los Angeles; University of California, San Diego; University College London; University of Hawaii, Manoa; University of Texas, Austin; the Universities of Alabama, Arizona, Dallas, Delaware, Florida, Maryland, Massachusetts, Miami, Michigan, Minnesota, Oklahoma, Pennsylvania, Rochester, San Francisco and Southern California; the Universities of Amsterdam, British Columbia, Cambridge, Cyprus, Edinburgh, Groningen, Liverpool, Naples, Oslo, Otago, Queensland, Toronto, West Indies; UNESCO; U.S. Geological Survey; U.S. National Science Foundation; U.S. Naval Postgraduate School; Venture Philanthropy Partners; Verizon; Virginia Tech; Vision2Lead; Volta Networks; World Wide Web Foundation; Wellville; Whitehouse Writers Group; Wikimedia Foundation; Witness; Work Futures; World Economic Forum; XponentialEQ; and Yale University Center for Bioethics.

Complete sets of credited and anonymous responses can be found here:

Credited Responses: The Future of Ethical AI Design

Anonymous Responses: The Future of Ethical AI Design
