This is an expanded version of an essay originally written for the National Intelligence Council. Computers were invented to augment human performance. They are powerful tools, but even as processing speeds increase and algorithms grow more sophisticated, these machines still cannot “think.” Eventually this will change. A group of leading scientists and public figures signed an open letter warning of the dangers of this moment. One famous scientist warned that “The development of full artificial intelligence could spell the end of the human race.” This kind of fear is not new. The first drama in which evil robots take over the world appeared in 1920. A group of distinguished scientists warned President Johnson in 1964 that “cybernation” (the mix of computing and automation) would destroy jobs and create widespread poverty. Yet robots do not rule the world, and fifty years of computer automation have not made Americans poorer.
In 1957, John von Neumann, a pioneering computer scientist and one of the inventors of nuclear weapons, wrote about the Singularity. Von Neumann defined the Singularity as the point at which “technological progress will become incomprehensibly rapid and complicated.” Some would say that this point has already been reached, but the Singularity has come to mean the moment when computers attain greater intelligence than humans. We’ve all seen “Terminator.” In that series, Skynet is the fictional villain, a computer network designed for strategic defense that “wakes up” and becomes an intelligent entity, using its autonomous control over weapons to overpower humans.
Skynet is a caricature of the “Singularity,” and it is indicative, perhaps, of larger changes in American culture that its plotline has gone from enjoyable kitsch to a matter for serious debate. Our culture enjoys dystopic fables, but they are misleading. The story we frighten ourselves with for AI is that autonomous systems will compete with, overtake, and replace humans. One focal point for apocalyptic fears is the “killer robot,” a lethal, autonomous device that can be used for combat. Frankly, if the choice is between sending people or robots into combat, the robot is the better pick. The best outcome would be no combat at all, a noble but regrettably improbable hope.
Autonomous devices are something of an illusion. A speaker at the Fourth World Internet Conference in China said, “we do not want the algorithms to take over the world.” The speaker was not Chinese—the Chinese retain an optimism about the likelihood of progress through technology not always found in the United States—and in any case, he was wrong. Algorithms are written by humans, and it is humans who will decide what they do and how they will be used.
The oft-cited fear that AI and autonomy will eliminate a quarter of the jobs in the U.S. workforce as drivers are replaced is a good example of this fundamental misunderstanding. The effect of AI and autonomous decisionmaking by vehicles will be to reduce the number of accidents and increase fuel efficiency by reducing driver error. Full autonomy is far in the future, and its introduction will be gradual. The average age of trucks and cars on U.S. roads is 12 years. Even if fully autonomous vehicles were available tomorrow, it could be a decade before they made up the majority of cars on the road.
Industrial automation began centuries ago, and every phase has brought greater wealth and more jobs. Will the AI phase of automation be any different? Fears that machines destroy jobs have greeted new technologies since the start of the industrial revolution, and every time they have proven to be wrong. New technologies eliminate some jobs, but they create others. For example, big companies once had “computer” offices, where dozens of people sat in front of adding machines crunching numbers by hand. These were good white-collar jobs until mainframe computers replaced them. If the “technology-kills-jobs” story were true, Manhattan would be clogged with forlorn people in worn business suits holding signs saying, “Will Compute for Food.” Nor are the streets clogged with unemployed elevator operators. The overall effect is to make societies richer—and technological change is the only source of income growth for developed economies like the United States. These stories underestimate human ingenuity, the ingenuity not only to create autonomous devices but to control them.
AI is a tool. AI-driven automation makes people more productive and societies richer (how societies choose to share that increased wealth is another matter, and more of a problem for a stingy U.S. than for many other developed countries). Automation is the best way out of the productivity trap that has caught the American economy. Innovation, with AI at the forefront, is the best source of future growth. What happens when every child carries Einstein in their pocket and has an AI assistant to help them learn? The stress on industrial-age education will be tremendous, but this is the start of a new era of human knowledge. AI means fewer mistakes, lower prices, and better products. AI will create a world that is faster, more efficient, and smarter.
But the past, as they say, doesn’t always predict the future. Even if the precedent of past automation doesn’t hold this time, those who argue that AI is different concede that their fears won’t be realized for decades. The immediate problem isn’t killer robots or unemployed drivers, but how to educate people to be productive in an artificial intelligence economy, how to invest in research for AI leadership, and how to allocate the additional wealth AI creates.
Fears about AI are more likely a proxy for the unavoidable social disruption created by digital technologies. When Gutenberg created moveable type, it changed how people thought and acted, eroding certainty in institutions and authority. This first “knowledge revolution” led to centuries of turmoil. Digital technologies have the same effect, but at a faster pace and with wider-ranging consequences. They are transforming society, business, and politics in ways that can appear, to use von Neumann’s phrase, “incomprehensibly rapid and complicated.” This is the paradox of the fears over AI: with the right design, AI will give us the tools to cut through the fog and to manage and benefit from this new disruption.
James Andrew Lewis is a senior vice president at the Center for Strategic and International Studies in Washington, D.C.