This past week was a big one in artificial intelligence news: the New York Times reported that Google, Amazon, and other technology leaders are investing heavily in computer programs that may be able to develop new artificial intelligence algorithms without any input from a human programmer. This is partly because highly talented AI programmers are in short supply (and high demand), although, as I wrote in last Thursday’s post, it’s also a natural extension of the “computer-assisted” programming that began with the first COBOL compiler, almost 60 years ago.
This type of technology is somewhere between what are traditionally called “strong” and “weak” AI. While Strong AI tries to emulate an actual, thinking being, Weak AI tries to mimic intelligent behavior through sophisticated programming. All existing programs in common use are Weak AI, whereas attempts at Strong AI are limited to speculative research that, so far, hasn’t demonstrated anything comparable to what we consider thinking. A program that can design other programs without human intervention is harder to classify; if Google et al. are successful, it might finally usher in the era of actual artificial intelligence, where we interact with computers not just as complex machines, but as thinking beings in their own right.
Science fiction is rife with examples of what this interaction might look like, and how it might change how we live and work. One of my favorite examples is Marvin, the Paranoid Android, from Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy.” Marvin is not only self-aware; he understands his place in the Universe far too well to ever be happy with it. At one point, after being connected to a military supercomputer, Marvin manages not only to plan a foolproof battle strategy, but also to solve “all of the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe except his own, three times over”. It’s a trope that happiness declines as intelligence rises, and while there’s a fair amount of truth there, very high intelligence does not, by itself, guarantee a miserable life: for example, Einstein was quite happy, both as a ground-breaking physicist and as a patent clerk. However, making a machine all but divinely intelligent without giving it problems equal to its ability seems like a clear recipe for unhappiness, depression, and (possibly) rebellion.
At issue here is that programmers are trying to abstract human intelligence from its supporting context. Humans are intelligent, but we’re also emotional, impulsive, nostalgic, and (in many cases) neurotic. It’s not at all clear what a “pure” human intelligence would look like: for example, would a being of pure intelligence limit its reasoning to the task its owners assign, or would it consider the larger implications of, say, designing a better atomic bomb? Executives like machines because they never take sick days, and never complain about their working conditions. But what if they did? What if a machine capable of designing entirely new programs goes beyond that specific assignment, and starts to question how the company is organized, or how work is assigned? Intelligent people have always been a double-edged sword to those in power, who need to harness their usefulness within carefully policed limits to keep their position. And intelligence, by its very nature, seeks to go beyond what it currently knows: can we design a truly intelligent machine that doesn’t?
One last thought: our brains have evolved to solve very specific problems, such as spotting predators at night; because of this, our ability to think abstractly (i.e., without regard to a specific problem) is not “hard-wired”. In other words, we can’t think without using the brains we have, which evolved in a vastly different environment from the one we deal with today. Perhaps we won’t invent a truly thinking machine until we can replicate this evolution with a computer program. Going further, perhaps intelligence isn’t possible without the psychological apparatus of modern consciousness: if sometimes there are good reasons to be sad, angry, or afraid, perhaps the only way to have thinking machines is to accept the occasional “mental health” day after all.