Friday, February 13, 2015

let's just hope they run out of gas

A lot of people who know things about artificial intelligence are concerned about where the whole thing is going. Will future Dick Cheneys get brain modifications so as to keep the baby koala soul cupboard better stocked? Will AIs treat humans the way humans treat cockroaches (kill them when you see them), lab rats (experiment on them), birds (mostly ignore them), pets (feel condescending affection for them), or some other way? Nobody knows what's coming (and they don't seem to have given much thought to where the energy will come from, post fossil fuels, either). Here's a summary of AI fears by Kevin Maney that covers important ground, makes some great points, and entertains a critical stance only so he can dismiss it.
It’s time to have a serious conversation about artificial intelligence. AI has crossed a threshold similar to the earliest triumphs in genetic engineering and the unleashing of nuclear fission. We nudged those discoveries toward the common good and away from disaster. We need to make sure the same happens with AI.
"We nudged those discoveries...away from disaster"? Disaster here must mean something like "human extinction," as opposed to, say, Nagasaki. It would be easy to breeze right past that sentence, processing it as "yeah, fission could have gone really badly (but it didn't)." It would be understandable if someone failed to notice how misleading this dichotomy is. We're to choose between disaster and the common good where "disaster" doesn't include Hiroshima and Nagasaki, the risks of future nuclear weapon use, including the possibility of human extinction; nuclear meltdowns that have already happened (Chernobyl, Fukushima, etc.), the risks of future nuclear meltdowns, the fact that nuclear waste has to be dealt with and is vulnerable to infrastructure failures perhaps unavoidable over the lengthy time frame involved; the broader implications of what more power for humans means generally in terms of "the common good." Maney's argument is, roughly, this -- the Cuban Missile crisis didn't end in human extinction (or death for important people -- more on that later) and power is good, therefore nuclear fission serves the common good.

The benefits of nuclear fission must be pretty spectacular to outweigh all those downsides. Rather than explore the "pro" arguments in detail (Maney doesn't mention them), I'll just use a handy shortcut and assure you that all those arguments presuppose what they'd need to prove. Arguments like "nuclear power is cleaner and more sustainable (on some theoretical level) than fossil fuels" -- "A is better than B" arguments. Better at what? At C. But we're talking about D. C is power; D is the common good. They're entirely different things. Talking about C sends D to the background, by design. This is an article that pretends to talk about the common good while making the case for power. Politicians, scientists, and mainstream journalists work for power.

Admittedly, while "common bad" is relatively simple, "common good" is somewhere between complex and impossible. Maybe nuclear fission, or AI, can work for the common good. But Maney isn't making that case. He's taking the status quo, these increasingly infotech-dependent societies, as his definition of the common good. He thinks nuclear fission has worked for the common good, and his (implied) evidence is "not dead yet." Power justifies itself.
Yet at the same time, we can’t not develop AI. The modern world is already completely dependent on it. AI lands jetliners, manages the electric grid and improves Google searches. Shutting down AI would be like shutting off water to Las Vegas—we just can’t, even if we’d like to. And the technology is pretty much our only hope for managing the challenges we’ve created on this planet, from congested cities to deadly flu outbreaks to unstable financial markets.
Maney insightfully describes the risks brought about by increasing socio-economic complexification by way of infotech, then advocates using the same technology that got "us" into this risky situation to get "us" out of it. Sounds more like an antihero story than a progressive redemption story. Think Walter White compounding previous errors, doubling down on bad bets, and bringing himself closer and closer to death.

Instead of making "The Case Against Artificial Intelligence" (the article's title), Maney makes the case for it by taking the strongest and most obvious set of solutions (anything that prioritizes cutting back) off the table. Imagine someone writing an article titled "The Case Against Smoking," quoting a couple of medical professionals who say it's somewhat risky, then ruling out the possibility of quitting as unthinkable. "Given that we have to smoke, we may as well be smart about it..."

"We can't not develop AI." Predictively, many humans this century will try to develop AI (or brain upgrades) to the point where it's no longer dependent on, or even influenceable by, human decisions. Whether they get there depends on peak oil, climate change, and a whole bunch of other complex variables, including humans themselves. But in this phrase, Maney isn't making predictions. He uses the term "we." Pardon the cheese, but where there's a we there's a way. He's suggesting agency and control over the situation, then, for all intents and purposes, denying that there is any. If humans, collectively, stopped building it, they could "not develop AI." If the humans in the science labs stopped building it, they could "not develop AI." If he'd like to stop it, but just can't, he could say "we have to stop it, but it's probably unrealistic to think we can." But if he thinks AI is something we (hint: he's not talking about people living below the poverty line) should go along with because getting off this techno ride would be unbearably painful (for him!) and he'd rather risk death and godknowswhat than give up his smartphone and his position in line, he could say that. But that wouldn't sound very good.
So we have time. But Musk, in particular, is saying that we shouldn’t waste it. There’s no question powerful AI is coming. Technologies are never inherently good or bad—it’s what we do with them. Musk wants us to start talking about what we do with AI. To that end, he’s donated $10 million to the Future of Life Institute to study ways to make sure AI is beneficial to humanity. Google, too, has set up an ethics board to keep an eye on its AI work. Futurist Ray Kurzweil writes that “we have a moral imperative to realize [AI’s] promise while controlling the peril.”
It's what we do with them? The fear is that "we" -- he means the very few in the vicinity of the steering wheel -- won't be able to drive the car at some point. He's already said it's too late to turn back. If you imagine that history is working out pretty well so far thanks to good people in positions of power acting heroically -- the progressive faith -- I guess you can hope for more of the same. But whether history is working out well so far depends on whether you're talking about the ghost of a Nagasaki victim or the CEO of a heavily subsidized nuclear power company, the vast majority of humans who live in poverty or the small minority with vacation homes.

It's possible the powerful could act somewhat more cautiously with AI than they have with most new power sources. The usual pattern is to push costs, up to and including death, off onto the rest of humanity. For the common good, of course. But in this case, powerful humans may be more cautious for the same reason they haven't nuked the planet yet -- self-preservation. To the extent they can, they'll still try to externalize costs and set up a barrier to protect themselves. What extent that might be, they don't know.
It’s worth getting out ahead of these things, setting some standards, agreeing on some global rules for scientists. Imagine if, when cars were first invented in the early 1900s, someone had told us that if we continued down this path, these things would kill a million people a year and heat up the planet. We might’ve done a few things differently.
Nah, there was a Kevin Maney around back then saying everything would work out alright, somehow, raising the possibility that it wouldn't only to dismiss it. There were others who made forceful objections, only to be ignored or punished. And again with the "we." We the good guys. We the non-members of the set "humans living in poverty." We who are so committed to our carbon-fueled lives that we won't even consider giving them up. Here, again, we can see what Maney means by disaster, where Nagasaki doesn't count. He's talking about disaster for people like him. That hasn't happened yet.

So, to summarize:

infotechpower (ITP): We're nuttier than a Scientologist on meth and we're going for a joy ride. Woohoo!!

Kevin Maney: Whoahh, that sounds dangerous. I'm hesitant but...wait up! Let me come too. I'll be in charge of making sure everyone wears seatbelts.

ITP: Sure, whatever. *car speeds off* Woohoo, human heads!

KM: No disaster there. Guys, put on your seatbelts!

ITP: *laughing*

KM: I'm serious!...Please?
