Artificial intelligence is not a switch that will be flipped from
“off” to “on.” It is a variable slider, which is constantly being
pushed forward from “less” to “more.” And as we move that
slider, our society will have to adapt, thinking through large-scale, complex implications and determining how we need to
approach the changes. The process will not be easy, and it will
not be fast—which is precisely why we need to begin forecasting
and preparing now, before the pace of innovation exceeds our
ability to catch up … if it hasn’t already.
This piece will examine artificial intelligence from several
vantages. First, we’ll look at the technology perspective: What
is happening today, and how do we expect the technology to
evolve in the future? Then we’ll look at the societal side of AI:
What might this technology mean for larger social structures?
Finally, we’ll examine artificial intelligence from a policy perspective: What kinds of questions do we need to be asking now
to prepare ourselves for a future where machines might have
more control over our lives than we do?
We hope to provide a broad survey of information that
might serve as a foundation for the types of discussions we need
to be having right now. But that all begins with one important
What, exactly, is artificial intelligence?
The term “artificial intelligence” is easy to misinterpret. Part
of the problem is that artificial intelligence seems self-explanatory: thinking and processing (“intelligence”) that is designed
and constructed (“artificial”). In other words, AI is just a smart
machine.
Not quite. And here we find another part of the problem: We
tend to think of technology in terms of hardware. But “intelligence”
is not synonymous with “brain,” so it would be a mistake
to think of AI as a particularly clever computer or device. Rather,
at least for the purposes of this discussion, we’re talking about
systems and programming—the ability to think and, ultimately, to learn.
The phrase “artificial intelligence” doesn’t refer to a smart
machine; it refers to that machine’s ability to perceive, reason,
and learn, ideally using those processes to improve its problem-solving abilities.
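That perceive-reason-learn cycle can be made concrete with a toy sketch. Everything here (the `Agent` class and its method names) is illustrative, invented for this example rather than taken from any real AI library:

```python
class Agent:
    """Toy agent that improves a numeric estimate from feedback,
    illustrating the perceive -> reason -> learn loop."""

    def __init__(self):
        self.estimate = 0.0  # the agent's current belief

    def perceive(self, observation):
        # Take in raw input from the environment.
        return float(observation)

    def reason(self, signal):
        # Compare the observation against the current belief.
        return signal - self.estimate

    def learn(self, error, rate=0.5):
        # Adjust the belief to reduce future error —
        # this is where problem-solving ability improves.
        self.estimate += rate * error


agent = Agent()
for obs in [10, 10, 10, 10, 10]:
    error = agent.reason(agent.perceive(obs))
    agent.learn(error)

# After several cycles the estimate converges toward the observed value.
print(round(agent.estimate, 2))
```

The point of the sketch is the structure, not the arithmetic: intelligence, in the sense used here, lives in the loop, not in any particular piece of hardware running it.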
The applications of AI are virtually endless, but we should not
confuse intelligent systems with the devices they power—say, a car
whose cameras distinguish potentially hazardous situations.
Artificial intelligence—or at least some prototypical version of
it—exists in the systems that allow the car to “see” the road and
distinguish the car driving in front of you from a pedestrian
rushing out into the road. The car itself is just another piece
of technology that takes advantage of artificial intelligence. So
when we talk about AI, we’re talking about the programming in
the computer, rather than any one car, or even a line of cars that are
beginning to make decisions.
Pop culture can also offer us tools and frameworks for
thinking about artificial intelligence. Perhaps one of the most
well-known examples of AI in pop culture comes from the
Terminator movie franchise. Its vision of AI is relatively
straightforward—and quite grim. An artificially intelligent
program named Skynet is activated by the military in an attempt to secure peace. However, Skynet decides that humans
are an inherent threat to peace, and thus that the most efficient way of
safeguarding the world is to destroy all humans. (The U.K.,
apparently, saw no irony when it named its fleet of military communications satellites Skynet.[2])
My point isn’t to reinforce fears that artificial intelligence will
be the downfall of humanity (AI experts are doing plenty of that
themselves—just ask Stephen Hawking, who issued an alarming opinion on AI just two years ago: “The development of full
artificial intelligence could spell the end of the human race”[3]).
Rather, it’s to illustrate how Terminator’s Skynet was an artificially intelligent system, not just a single machine. It was a program
that was given a task (vaguely, “maintain peace”), presumably
ran through large amounts of data and possible scenarios, and
came to a conclusion: Humans are the biggest obstacle to peace.
Today, we’re attempting to build AI systems that perform
many of those same tasks, just on much smaller scales.
The biggest trend in artificial intelligence right now is deep
learning, a process by which a system examines information
through several different layers to determine whether that information fits into a particular schema. Deep learning systems,
sometimes called neural networks, allow an app like Facebook to
recognize individual people in photos. The lowest levels of these
networks look at the simplest pieces of information. Usually,
these are basic shapes and fragments of edges—a small selection
of the overall picture. Each higher level looks at information of
increasing complexity, until you reach the highest levels, which
take all of that data in aggregate and use it to determine whether
this is a picture of you or your best friend.
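The layered idea described above can be sketched in a few lines of code. The weights below are random stand-ins: a real deep network learns its edge detectors, shape detectors, and object scores from data, but the flow of information from simple fragments up to an aggregate judgment is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # One network level: a weighted combination of the inputs,
    # followed by a nonlinearity (ReLU), yielding higher-level features.
    return np.maximum(0, weights @ x)

image = rng.random(64)                     # a flattened 8x8 "photo"
w_edges = rng.standard_normal((32, 64))    # lowest level: edge-like fragments
w_shapes = rng.standard_normal((16, 32))   # middle level: simple shapes
w_object = rng.standard_normal((1, 16))    # top level: whole-object evidence

edges = layer(image, w_edges)    # 64 pixels  -> 32 edge features
shapes = layer(edges, w_shapes)  # 32 edges   -> 16 shape features
score = layer(shapes, w_object)  # 16 shapes  -> 1 aggregate score

print(edges.shape, shapes.shape, score.shape)
```

Each level consumes the output of the one below it, so by the top the system is no longer looking at pixels at all—only at evidence assembled by the levels underneath.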
But here’s the catch about deep learning: These systems aren’t
programmed to break things up that particular way and use that
process to come to their conclusions. The systems are teaching
themselves to do it. As Fortune magazine described the process,
these AI programs are, in essence, figuring things out on their
own. It’s like giving someone a radio and watching them take it
apart to figure out how the radio works. Except, with computer
systems, we can give them huge numbers of metaphorical radios