Posts Tagged ‘Existential Risk’

This is Not A Military Endeavor

Sunday, November 7th, 2010

Recently on his blog, Ben Goertzel mentioned that he has had FAI-minded people tell him that if he got too close to AGI he might have to be killed, and then discuss how this might occur. Clearly, these were not advanced rationalists. I know this blog has a small audience and such ideas are probably very rare, but given the style of my material, I feel like making a statement. Given my background in gaming, science fiction, and fantasy, I find martial imagery and allegory inspiring. As someone concerned about existential risk, I find the idea of actual martial action irresponsibly foolish.

If the world gets to a point where the release of unfriendly AI seems imminent, it's incredibly unlikely that only one group will be ready to release one. It is also likely there will be many more AGI teams than there are today. Trying to stop them through attrition would be ridiculous. Even in such a situation, more widespread action would be possible through the (non-lethal!) actions of governments convinced to step in, or research coalitions convinced to step back, by the efforts of organizations like SIAI and FHI. That may be unlikely, and the effect small, but it is far more likely and more helpful than any meaningful effect achieved through attempted assassination. And if AGI researchers were killed, significant blame would inevitably land on groups like SIAI. To say that this would hurt their credibility is an understatement.

This is likewise true of merely mentioning the idea of assassination to AGI researchers. The only way such threats could ever deter anyone would be if they were actually thought to be credible, and that expectation can only exist in a world in which groups like SIAI have had their intellectual and academic reputations destroyed. And that's not to say that destroying such groups' reputations would even make those threats credible; death threats do far more to tear apart reputations than they do to cause fear.

There is too much at stake here to be stupid about this. It is not too hard for our evolved intuitions to suggest that we solve problems by eliminating “opponents”. But then, it’s not too hard for our evolved intuitions to screw us over generally. A second Unabomber would get to feel like a hero and a soldier, and everyone else would get to pay for it.

As I said in my first post on this blog, our enemies are human error and human hatred, not human beings.

Princess Mononoke – The World of the Dead

April 11th, 2010

Multiplication

April 6th, 2010
There are currently about 6,800,000,000 people in the world. There were up to 1,800,000 people at Barack Obama's inauguration. That's a pretty big number; the picture below probably contains over half of them.
Take a look. (You can click on the picture for a larger version.) Even assuming it contained everyone who attended the event, and assuming population growth utterly and suddenly halted, you'd be looking at about 0.026% of the people who would be affected by an existential disaster.
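For anyone who wants to check the arithmetic: 1,800,000 ÷ 6,800,000,000 ≈ 0.00026, or about 0.026%. Put another way, the world's population is nearly four thousand times the size of that crowd.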

(At Least) It’s Not The End Of The World

April 2nd, 2010
From the Super Furry Animals:

“We can live it large,
cause we’re only old once.
Let’s make a difference.

Turn all the hate in the world,
into a Mockingbird.
Make it fly away…”

Metropolis

January 26th, 2010

This is a really great film, one of my favorites. It's based loosely on the German classic, using a comic by the creator of Astro Boy (adapted after he died and couldn't stop them), with a script written by the creator of Akira. The basic plotline follows two detectives, foreigners to the city, investigating a rogue scientist. They meet up with his creation as political turmoil erupts in the city. It's fun to watch throughout, and the ending is fantastic. Of particular relevance to this blog, it displays both the folly of anthropomorphizing AIs and the existential disaster they can cause. It's not as if a LessWronger wrote it, but in comparison to most such films it's excellent.

Some additional quick reasons to reduce existential risk

December 30th, 2009

Steven Kaas at Black Belt Bayesian posted a great list of “sound-bite” reasons to reduce existential risk. Check it out; they're both amusing and relevant.

2010 Singularity Research Challenge

December 29th, 2009
For those readers I don't already share with him, check out this short piece by Michael Anissimov on the Singularity Institute, their work, and a recap of why they're a great place to donate.

You Said You Wanted Hugs

November 29th, 2009

Why Friendly AI research is critical: even if you give an AI nice-sounding directives, it can be hard to know how far such an alien mind will take them. We take for granted all the other beliefs going around in our heads, such as the belief that a hug shouldn't be that strong, partly because we aren't powerful enough to take things that far. The oft-discussed scenario is simply directing an AI to make people happy. What counts as a person? What counts as happy? What are acceptable ways to make people happy? You don't want the AI to disassemble you into a hundred smaller “humans” and make them happy, or worse yet into a bunch of microscopic pictures of happy people. You also don't want it to put everyone into a drugged stupor. Designing a superintelligence is analogous to having a wish-granting genie, but one of those annoyingly literal types for which almost every wish turns out to be a very bad thing.

“Accelerando” Review

November 21st, 2009
(This review contains only negligible spoilers.)

I just finished reading Accelerando by Charles Stross. It’s a masterpiece of hard science fiction, and I highly recommend it.
Vernor Vinge coined the term Singularity when he observed that no author could write realistically about smarter-than-human intelligences. Literature allows you to realize characters who are stronger or more outgoing than yourself, but the intelligence of their plans is limited by your own ability to think of them. Superintelligences must be kept off screen or in some infant stage. Considering the likelihood of superhuman intelligence in our relatively near future, this makes writing really hard sci-fi a difficult endeavor. In his blurb on the back cover, Vinge writes, “There is an intrinsic unknowability about the technological singularity. Most writers leave it safely offstage or invent reasons why it doesn't happen,” which applies to his own work as well. In comparison, Stross's Accelerando dives in headfirst, with mainstream (post)human civilization becoming essentially incomprehensible by the middle of the book.
Of course, any real superintelligences still have to be kept off screen, and so the story follows characters who, for one reason or another, have been left behind by the tsunami of increasing intelligence. This creates an interesting effect: despite inhabiting lives stranger (and more probable) than those in the vast majority of sci-fi, the characters' situations manage to feel very backwater. It's as if you were following an Amish family through the industrial and information revolutions, except more significant.
At a few points I grew disappointed, as it seemed Stross's literary ambitions may have overcome him, with superintelligences mysteriously operating on our level, but by the end it all makes fair sense through one route or another. I would have enjoyed those sections more had I known that to begin with. There were also a small handful of differences between my own best estimates and the world of Accelerando. The Singularity is a tad on the slow side, and there always remain a number of independent, comparably powerful entities. This is probably my largest contention (it seems more likely for a single superintelligence to achieve dominance at some point), but there might not be much of a story if it were otherwise. There's also no significant mention of emotional engineering, à la David Pearce's Hedonistic Imperative or Nick Bostrom's Letter from Utopia. On a point more sociological than technological, people are pretty nonchalant about creating and deleting copies of themselves. I care more about expectation of experience than identity, but as a preference utilitarian I can get along just fine with those who think otherwise, as long as they don't force that choice on others.
When I first heard about this book, the take-away message was its great density of concepts. The book is packed with advanced technological proposals, internet subculture references, unusual vocabulary, economics, neuroscience, history…it goes on. However, Accelerando is much more readable than this would suggest; most of the references are tangential, and perfect understanding is not required. The few times a concept is critical, Stross takes a moment to explain it, and those interested can hunt down references on the net (try doing that 10 or 15 years ago). I'm admittedly not bleeding-edge (cutting, maybe?) on speculative technology, but to the limits of my knowledge all the ideas are presented in a sober, best-guess fashion. Loading a book with so many ideas takes quite an intellect or a great deal of work, and most likely both. Stross took five years to write this and has an impressive background, with degrees in pharmacy and computer science; those who've known (in the biblical sense) WoW might be interested to learn he came up with death knights back in the day.
The best and my favorite thing I can say about this book is that it is mind-boggling. A common criticism of sci-fi is that it takes one new idea and places it in an otherwise unchanged world. Accelerando is just the opposite: it includes just about every feasible proposal and then mixes in additional ideas about their interactions. This allows for novel and interesting turns of plot, and the book tends to put the reader in a constant state of future shock, sustained for 400 pages, even from within the relative backwater I mentioned above. The density of information and references adds nicely to this effect. The human mind suffers from the conjunction fallacy: we're more likely to put belief in speculation that is more specific, such as the setting of a book. Despite this, I think Accelerando is excellent for improving our sense of the future, by reminding us that the future isn't going to be one or a few new ideas; it's going to be a great number of them, all interacting and creating ever newer ones. There are three meanings of the Singularity, one of which is that without intelligence enhancement, really understanding the world is something you'd have to entrust to others. It's one thing to read about that kind of future but another to catch a glimpse of it, and that glimpse is what Accelerando provides.
Vinge also calls Accelerando “the most sustained and unflinching look into radical optimism I've seen”. While our own future could be much better, this really is a pretty optimistic book, which I like, as I'm generally an optimist myself. It also presents some very possible dangers and threats. The future could be better than we're physically capable of imagining, but there are thousands of ways it could go badly, and it'd be worth it (understatement) to prevent those outcomes. The future may be incomprehensible, but for now we're still in control, and still the most intelligent life on this planet. Let's make the most of that, because it's not going to last.

Aubrey de Grey, Eliezer Yudkowsky, and Peter Thiel on Changing the World

November 3rd, 2009
While I'm confident that most of my readership also follows Michael Anissimov's Accelerating Future blog, the video he posted of a panel from the Singularity Summit 2009 conference is so relevant to the topics of Normal Human Heroes that it would be criminal not to include it here. It's a really great discussion from some of the de facto leaders of the most critical and underappreciated fields.

Changing the World Panel — Singularity Summit 2009 — Peter Thiel, Eliezer Yudkowsky, Aubrey de Grey from Singularity Institute on Vimeo.