AlphaGo’s Victory and the Need to Plan for General-Purpose A.I.

Back in 2017, an artificial intelligence program named AlphaGo beat the No. 1 ranked Go player in the world. Prior to this, the program had beaten other top Go players, one of whom said that after playing the program, he realized he had been playing the game wrong his entire life.

While Deep Blue and Watson had defeated human players before, those programs were each designed to play just one game. AlphaGo, on the other hand, is designed to solve problems: the programmer tells it the problem to solve and the rules it has to follow. Then the program repeatedly tries to solve it, each time improving on the best solution it has found so far. This isn't quite the general-purpose intelligence of science fiction, but it's significantly closer than anything that has come before.
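
To make that loop concrete: AlphaGo's actual machinery (neural networks guiding a tree search) is far more sophisticated, but the "keep improving on the best solution so far" pattern the paragraph describes can be sketched in a few lines of Python. Everything below, the toy scoring function included, is an illustrative stand-in, not how AlphaGo itself works.

```python
import random

def improve(score, tweak, start, steps=10_000):
    """Propose small variations, keeping the best solution found so far."""
    best, best_score = start, score(start)
    for _ in range(steps):
        candidate = tweak(best)            # try a variation of the current best
        s = score(candidate)
        if s > best_score:                 # keep it only if it improves
            best, best_score = candidate, s
    return best, best_score

# Toy problem: maximize a simple one-dimensional function (peak at x = 3).
best, val = improve(
    score=lambda x: -(x - 3.0) ** 2,
    tweak=lambda x: x + random.uniform(-0.1, 0.1),
    start=0.0,
)
print(round(best, 2))  # approaches 3.0
```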

Given this, we need to address the new challenges these programs bring. The first is that, as anyone who has been embarrassed by Autocorrect knows, computer programs are literal: they do what they've been programmed to do, and nothing more. This matters because a program like AlphaGo is designed to solve problems, and most problems aren't as clearly defined as winning a game.

Consider identifying the best neighborhoods to sell insurance. When insurance companies tried using A.I. to do this back in the 1990s, they found the programs excluded neighborhoods with large minority populations. That's called redlining, and it's (rightly) illegal, but the program doesn't know that. And if a neighborhood is in fact riskier, the program will exclude it unless you tell it otherwise.

“Telling it otherwise,” however, can also be challenging: each constraint you place on the program forces it to make trade-offs among multiple goals. Typically, the programmer weights each criterion, and the program tries to find the best solution given those weights. This works for finding the best route from manufacturer to point of sale, where the trade-off is between speed and cost. Weighing private gain against the public good, however, is much more difficult: if a program is going to pick winners and losers, everyone affected deserves some say in how it works.
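
A minimal sketch of what that weighting looks like in practice, using made-up routes and arbitrary weights (the cost scaling is an assumption as well):

```python
# Hypothetical candidate routes: (name, days in transit, cost in dollars).
routes = [
    ("air",   2, 900.0),
    ("rail",  6, 400.0),
    ("truck", 4, 550.0),
]

def route_score(days, cost, speed_weight=0.5, cost_weight=0.5):
    """Lower is better; the weights encode the speed-vs-cost trade-off.
    Dividing cost by 100 is an arbitrary scaling chosen so the two
    criteria are roughly comparable in size."""
    return speed_weight * days + cost_weight * (cost / 100.0)

best = min(routes, key=lambda r: route_score(r[1], r[2]))
print(best[0])  # "truck" with these weights
```

Shift the weights and a different option wins, which is exactly why who sets the weights matters once the "routes" are people's livelihoods rather than freight.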

The old problem of “Garbage In, Garbage Out” is also relevant. Not every problem can be formulated as a decision problem, and some that can don't have clear answers. Historically, the questions to be answered have been set by policymakers, not programmers. It's wise to make sure they understand the limitations of this new technology (the questions it can't answer) and the amount of high-quality data these programs need to produce a good answer.

And, of course, the program won't always be right: what then? Who is liable? Can you sue (and whom would you sue)? We have the technology to diagnose certain cancers, and it's been shown to be right significantly more often than medical experts. Yet no one is going to take the machine's word over the doctor's until we decide what to do when the program is wrong and someone dies because of it. That's a conversation that needs to happen, and addressing the human biases that psychologists have spent the past 75 years cataloguing needs to be part of it.

We can answer these questions, albeit imperfectly, and meet these challenges. I’m sure of that. But the inevitable transition to a society governed in part by intelligent machines will be easier if we start answering them now.
