Making AI Good for Humanity by William Powers

The Olympic Games aren't just the world's preeminent athletic competition. Thanks to the explosion of electronic media over the last century -- first radio, then television, now the Internet -- they have evolved into extraordinary gatherings of humanity writ large.

Six of the seven most watched television broadcasts of all time are the Summer Olympics from 1996 through 2016. Each of those Games drew a TV audience representing half or more of the world's total population at the time. That doesn't include the many others who followed the action on computers and, more recently, mobile devices. It's easy to forget that the excitement and pleasure of watching a swim race or gymnastics performance live on one's own smartphone or tablet have only been possible since the London Olympics in 2012.

Mobile devices powered by the latest advances in artificial intelligence (AI) will accompany the athletes to the Tokyo 2020 Games. And they will enable billions of others around the globe to follow these Games start to finish. In short, these technologies are the connective tissue that will bring us all together for the 2020 Summer Games for an extraordinary public meeting of the citizens of Planet Earth.

The purpose and main draw of the Games are, of course, the athletic contests. But because it's a truly worldwide event in which most nations participate and anyone with a device can watch, it's also a rare chance for us to collectively step back and reflect on important global questions and challenges.

This has happened before: wars, economic crises, and various political and social issues have drawn new attention and scrutiny during past Olympics, and sometimes in the Games themselves. One famous example is the moment from the 1968 Summer Games when American track-and-field medalists Tommie Smith and John Carlos raised their black-gloved fists to show solidarity with oppressed Black people around the world.

In the run-up to the Tokyo 2020 Summer Olympics, the pandemic has understandably dominated the world's attention for the last year and a half. But all the while, other important global questions have been simmering away in the background. The Tokyo Games could be an opportunity to turn our attention back to some of these challenges and restart the conversation.

One is the debate about the growing power of the very technologies that are drawing us together for this event: Do AI and the global industry driving its growth pose a long-term threat to human values and well-being?

AI is transforming how the world works in manifold ways. It will unquestionably play a huge role in defining the future of every society on earth. Unlike earlier tech revolutions, this one permeates human existence on every conceivable level, from how governments and other large organizations operate to the fabric of our individual lives and relationships. In other words, though the future of AI is often framed as an abstract policy question, it's anything but.

These astonishing technologies help us in countless ways. If we use them wisely, they can seriously enrich our existence, now and going forward. But they also have downsides. Some, such as digital addiction and attention issues in children, are already well known. Others are still emerging and yet to be fully understood or brought to wide public consciousness.

Among them is the rise of what Shoshana Zuboff, a Harvard Business School professor emerita, has called Surveillance Capitalism. This refers to the business model that turns personal data gathered from smartphones and other AI-based devices into a commodity that's sold for profit.

In Zuboff's telling, the mining of personal data by tech companies in Silicon Valley and elsewhere is the modern equivalent of the European colonization of the Americas and other parts of the world beginning in the 15th century. What's being colonized today aren't geographic territories long inhabited by indigenous peoples. It's our lives, relationships, thoughts and experiences.

Your smartphone isn't just a communications device; it's a tool for harvesting information about you and your life, which is then sold to advertisers and other commercial entities that use it to sell their own products and services. The same goes for the "smart speakers" on our kitchen counters, the games our children play online, and myriad other parts of our tech-mediated lives. The more data the companies can gather, the better for their bottom lines. Increasingly, they also seek to shape our behavior to serve their business objectives.

"The goal is to automate us," Zuboff said in a 2019 interview. Through her book, "The Age of Surveillance Capitalism," she has ignited a debate about these questions, urging the public to seek government regulation and oversight of AI. Some observers shrug their shoulders and say that will never happen because the global tech establishment has all the power. They're calling the shots and will define the future.

Others disagree, arguing that now is the moment for governments around the world to take steps to ensure we're moving towards an AI future that is safe, humane and constructive. My colleague Iyad Rahwan and I sketched a vision for this future in an op-ed we wrote last year for The Boston Globe, arguing that we can make the most of the upsides of AI -- which are massive and still emerging -- while reining in the downsides.

Policymakers in many democratic societies around the world are beginning to take action on this challenge. In terms of AI research, development and business volume, the global tech industry is currently dominated by two countries, the United States and China. But other nations are leading the quest for human-positive artificial intelligence.

One major center for this new thinking is the European Union. In 2019, the European Commission created a high-level expert group on AI that published a document entitled, "Ethical Guidelines for Trustworthy Artificial Intelligence." Rooted in fundamental human rights and ethical principles, it defines seven requirements that AI systems must meet in order to be considered trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability.

That same year, the Japanese government published a report laying out "Social Principles for Human-Centric AI". It states that "the utilization of AI must not infringe upon the fundamental human rights guaranteed by the Constitution and international standards. AI should be developed, utilized, and implemented in society to expand the abilities of people and allow diverse people to pursue their own well-being."

Specifically addressing data and surveillance, the Japanese report says: "We should make sure that any AI using personal data and any service solutions that use AI, including use by the government, do not infringe on a person's individual freedom, dignity or equality."

It's fitting that the European Union and Japan are among those leading the way, given that both have strong track records as technology pioneers.

The printing press took off in Europe beginning in the late 15th century, and Europe was a leader in the industrial revolution beginning in the 19th century. Thanks to the groundbreaking work of computer scientist Alan Turing and his colleagues in the middle decades of the 20th century, Britain helped usher in the age of artificial intelligence.

Later in the 20th century, Japan's technology industry shaped the early stages of the digital revolution. Today Japanese computer scientists and tech companies are doing world-leading work in robotics and other aspects of AI.

In other words, these two reform-minded societies are not coming at the AI challenge as tech novices or outsiders. Because of their deep roots in both science and innovation, they speak with authority on the ethical and moral questions now coming to the fore.

Will they win the day? Can the AI future be shaped to benefit not just industry, but human growth and flourishing?

We live in a very competitive world and technology is one of the most competitive sectors. In addition to being an engine of economic growth, it can be a source of national pride. But competition can also bring out the best in people, both individually and collectively.

Nations large and small are gathering in Tokyo this summer for a competitive ritual rooted in the best traditions of collective human striving for excellence. In the years to come, these same nations could come together in new ways to build a human-centric AI future like the one envisioned by the Japanese and European government bodies.

They can do this by coalescing around a set of principles that will take the digital age to an ethical, healthy, constructive future, and then ensuring that the tech industry follows them. This is not an easy task and it won't happen overnight. But as every Olympian knows, all great achievements require hard work, dedication and focus.

If we succeed in making AI good for humanity, future generations will thank us for it.

William Powers

A former Washington Post staff writer and MIT research scientist, he is a visiting scholar at the Center for Humans and Machines at the Max Planck Institute in Berlin and CEO of Because Humanity. He is the author of the New York Times bestseller "Hamlet's BlackBerry: Building a Good Life in the Digital Age." Widely praised for its early insights on the negative impacts of the digital revolution, the book grew out of research he did as a fellow at Harvard. His work is now centered on identifying ways to ensure that artificial intelligence reflects human values and supports social progress. He lives in Massachusetts, USA.