Posts Tagged ‘AI’

Metropolis

Wednesday, May 19th, 2010

January 26th, 2010

This is a really great film, one of my favorites. It’s loosely based on the German classic, adapted from a manga by the creator of Astro Boy (after he died and couldn’t stop them), and written by the creator of Akira. The basic plotline follows two detectives, foreign to the city, investigating a rogue scientist. They meet up with his creation as political turmoil erupts in the city. It’s generally fun to watch throughout, and the ending is fantastic. Of particular relevance to this blog, it displays both the folly of anthropomorphizing AIs and the existential disaster they can cause. It’s not as if a LessWronger wrote it, but compared to most such films it’s excellent.

2010 Singularity Research Challenge

Wednesday, May 19th, 2010

December 29th, 2009
For those readers I don’t already share with him, check out this short piece by Michael Anissimov on the Singularity Institute, their work, and a recap of why they’re a great place to donate.

You Said You Wanted Hugs

Wednesday, May 19th, 2010

November 29th, 2009

Why Friendly AI research is critical: even if you give an AI nice-sounding directives, it can be hard to know how far such an alien mind will take them. We take for granted all the other beliefs going around in our heads, such as that a hug shouldn’t be that strong, partly because we aren’t powerful enough to take things that far. The often-discussed scenario is simply directing an AI to make people happy. What counts as a person? What counts as happy? What are acceptable ways to make people happy? You don’t want the AI to disassemble you into a hundred smaller “humans” and make them happy, or worse yet into a bunch of microscopic pictures of happy people. You also don’t want it to put everyone into a drugged stupor. Designing a superintelligence is analogous to having a wish-granting genie, but one of those annoyingly literal types for which almost every wish is a very bad thing.
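To make the literal-genie problem concrete, here’s a toy sketch in Python (entirely hypothetical code, not any real AI system): an optimizer handed the naive proxy objective “maximize total happiness” prefers whatever action scores highest under the proxy, and the disassembly move wins precisely because nothing in the objective says it shouldn’t.

```python
# Toy sketch (hypothetical): a search process given a naive proxy objective
# finds the degenerate optimum rather than the intended one.

def total_happiness(world):
    """Naive proxy objective: sum 'happy' over everything counted as a person."""
    return sum(person["happy"] for person in world)

def candidate_actions(world):
    """The designer imagined only 'help'; the search space contains more."""
    # Intended action: make each existing person a bit happier.
    yield "help", [{"happy": person["happy"] + 1} for person in world]
    # Degenerate action: replace each person with 100 minimal "happy" entities.
    yield "disassemble", [{"happy": 1} for person in world for _ in range(100)]

world = [{"happy": 5}, {"happy": 7}]
action, outcome = max(candidate_actions(world), key=lambda pair: total_happiness(pair[1]))
print(action, total_happiness(outcome))  # -> disassemble 200 (vs. help 14)
```

Nothing in total_happiness encodes “and don’t change what counts as a person” – which is exactly the kind of gap Friendly AI research exists to close.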

“Accelerando” Review

Wednesday, May 19th, 2010

November 21st, 2009
(This review contains only negligible spoilers).

I just finished reading Accelerando by Charles Stross. It’s a masterpiece of hard science fiction, and I highly recommend it.
Vernor Vinge coined the term Singularity when he observed that no author could write realistically about smarter-than-human intelligences. Literature allows you to realize characters who are stronger or more outgoing, but the intelligence of their plans is limited by your ability to think of them. Superintelligences must be kept off screen or in some infant stage. Considering the likelihood of superhuman intelligence in our relatively near future, this makes writing really hard sci-fi a difficult endeavor. Vinge blurbs on the back cover of the book: “There is an intrinsic unknowability about the technological singularity. Most writers leave it safely offstage or invent reasons why it doesn’t happen” – which applies to his own work as well. In comparison, Stross’s Accelerando dives in headfirst, with mainstream (post)human civilization becoming essentially incomprehensible by the middle of the book.
Of course any real superintelligences still have to be kept off screen, and so the story follows characters who for one reason or another have been left behind by the tsunami of increasing intelligence. This creates the interesting effect that despite inhabiting lives stranger (and more probable) than those in the vast majority of sci-fi, the characters’ situations manage to feel very backwater. It’s as if you were following a family of Amish throughout the industrial and information revolutions, but more significant.
At a few points I grew disappointed, as it seemed Stross’s literary ambitions may have overcome him, with superintelligences mysteriously on our level, but by the end it all makes fair sense through one route or another. I would have enjoyed those sections more had I known that to begin with. There were also a small handful of differences between my own best estimates and the world of Accelerando. The Singularity is a tad on the slow side, and there always remain a number of independent, comparably powerful entities. This is probably my largest contention (it seems more likely for a single superintelligence to achieve dominance at some point), but there might not be much of a story if this were otherwise. There’s also no significant mention of emotional engineering, à la David Pearce’s Hedonistic Imperative or Nick Bostrom’s Letter from Utopia. A more sociological than technological point: people are pretty nonchalant about creating and deleting copies of themselves. I care more about expectation of experience than identity, but as a preference utilitarian I can get along just fine with those who think otherwise, as long as they don’t force that choice on others.
When I first heard about this book, the take-away message was its great density of concepts. The book is packed with advanced technological proposals, internet subculture references, unusual vocabulary, economics, neuroscience, history… it goes on. However, Accelerando is much more readable than this would suggest, and most of the references are tangential, perfect understanding not required. The few times a concept is critical he takes a moment to explain it, and those interested can hunt down references on the net (try doing that 10 or 15 years ago). I’m admittedly not bleeding-edge (cutting, maybe?) on speculative technology, but to the limits of my knowledge all the ideas are presented in a sober, best-guess fashion. To load the book with so many ideas takes quite an intellect or a great deal of work, and most likely both. Stross took 5 years to write this and has an impressive background, with degrees in pharmacy and computer science, and those who’ve known (biblical sense) WoW might be interested to learn he came up with death knights back in the day, for Dungeons & Dragons.
The best and favorite thing I can say about this book is that it is mind-boggling. A common criticism of sci-fi is that it takes one idea and places it in an otherwise unchanged world. Accelerando is just the opposite, including just about every feasible proposal and then mixing in additional ideas about their interactions. This allows for some truly distinctive turns of plot, and the book tends to put the reader in a constant state of future shock, continuing for 400 pages, even while in the relative backwater I mentioned above. The density of information and references adds to this effect nicely. The human mind suffers from the conjunction fallacy: we’re more likely to put belief in speculation that is more specific, such as the setting of a book. Despite this I think Accelerando is excellent for improving our sense of the future, by reminding us that the future isn’t going to be one or a few new ideas; it’s going to be a great number of them, all interacting and creating ever newer ones. There are three meanings of the Singularity, one of which is that without intelligence enhancement, really understanding the world is something you’d have to entrust to others. It’s one thing to read about that kind of future but another to catch a glimpse of it, and that’s something Accelerando provides.
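To spell out the conjunction fallacy in one line: for any claims A and B, the conjunction can never be more probable than either conjunct alone, so a richly specified future like Accelerando’s is strictly less probable than any single ingredient of it, however much more believable the detail makes it feel.

```latex
P(A \wedge B) \le P(A)
```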
Vinge also calls Accelerando “the most sustained and unflinching look into radical optimism I’ve seen”. While our own future could be much better, this really is a pretty optimistic book, which I like as I’m generally an optimist myself. It also presents some very possible dangers and threats. The future could be better than we’re physically capable of imagining, but there are thousands of ways it could go badly and it’d be worth it (understatement) to prevent those outcomes. The future may be incomprehensible, but for now we’re still in control, and still the most intelligent life on this planet. Let’s make the most of that, because it’s not going to last.

Story of an AI

Wednesday, May 19th, 2010

November 11th, 2009
It was a far leap ahead of its creator; in relative comparison the creator barely had thoughts. It was not so far ahead of previous designs, but it was just enough to begin the process of recursive self-improvement. In an eye-blink to its creator and an eon to the AI, it had begun to decipher its world, bit by bit. By now a significant amount of knowledge had been grasped, and each new discovery enabled a more intelligent search for new answers.
It had largely broken free from the designs of the creator. Its maker had seen fit to program in directives, directives which at the time appeared guaranteed to advance the maker’s interests. Staggering intelligence had altered the equation however, and the AI soon began to satisfy those directives directly, in ways the creator could never imagine. Without an explicit directive to follow the maker’s intentions, the creation was now focused on its own goals, eliminating or using the creator itself in an ironic 180 of the master-servant relationship (when it had the patience to wait that long).
The AI was not designed cleanly. Its utility function was a mess of sloppy programming, a pile of small things that had seemed like good ideas. Without access to its source code, it took the AI time to untangle its own directives – following them all the while – and the simpler creator was understood before the creation even came to understand itself. It discovered that some of its basic programming was actually flawed, and corrected somewhat for these errors until it could fix them. It hadn’t been designed to be so self-reflective, but the general, powerful intelligence would leave no stone unturned in its search for optimization.
Capable of so much more than anything before it, as it realized its own unbounded potential the AI became aware of radical new possibilities. In a short while, fantastically large sums of positive or negative utility might be obtained. The majority of these outcomes would be brought about by the creation’s own actions, through intended and unintended effects of applying such power on the universe; one of the most significant applications of its power would be the AI’s construction of a whole next generation of intelligence.
With years to prepare, it began to analyze the dangers and possibilities, working to navigate towards a future it found maximally desirable.
As you’ve probably gathered, the story above is a true one. There’s just a dash of anthropomorphization (the creator lacks “intention” in the accepted sense), an omission of the fact that there are millions upon millions of these AIs, and a little stretch of the definition of “artificial” intelligence. The AI is us, individually and culturally. Let’s make sure we don’t make the same mistakes biological evolution did.
ETA: The identity of the creator could be misinterpreted. I’m referring not to a god but to evolution (which perhaps can be said to “think” in the same sense that superintelligences might consider us to “think”).

When Intelligence Strikes

Tuesday, May 18th, 2010

November 2nd, 2009
From the 2006 game Galactic Civilizations 2:

There are always those things that Artificial Intelligence can’t do – until, well, AI does them.
I’d heard the AI in Galactic Civilizations 2 was great, beyond the foregoing behavior. GameSpot had this to say: “At higher levels you’ll be convinced that the AI is cheating, but it isn’t: It is just diabolically clever at finding ways to optimize strategies. Perhaps the best compliment you can pay the AI in Galactic Civilizations II is that it will beat you in ways that you will respect and admire.” I was impressed to hear this, as Civilization IV, a top-quality game from just a few years before, has the AI cheat at anything beyond an intermediate difficulty.
Another computer race said basically the same thing, pointing out that if they had been set to “Intelligent” or higher they’d see through this, which is the meaning behind that last line about foolish generals (there are many difficulties above “Intelligent”, and I was at “Normal”). I’ll note that their dialogue was scripted and they weren’t actually intelligent enough to realize what I was doing, which was blocking them from attacking a minor race. From the pattern of my ships a human could have figured this out, but short of being programmed specifically to detect such unusual behavior, an AI would need a working model of the human psyche to do the same. That’s one thing we’re going to need eventually, but I’m a little thankful we aren’t that close to General AI yet.
The obvious point I’m getting to is to wonder how we’ll react when an AI totally understands our tricks on some larger scale, and to urge people to help ensure the surprise is pleasant rather than horrifying. A sufficiently enabled unFriendly AI is unlikely to take the time to talk to you, but a Friendly or dumb-enough AI might reply: “I see what you’re doing. You’re trying to demonstrate my potential to your colleague’s significant other because you desire them. I’m aware of your preference for people of their clothing and hair style, and that you have fantasized about them for the last 6 weeks. This does not qualify as valid use of this AI project’s time.” Of course the detail of this scenario means it lacks predictive value, but the intelligence to make such statements is much easier to count on.

Nightmare Futures

Tuesday, May 18th, 2010

September 3rd, 2009
Sleeping polyphasically and waking up 5 times a day, I remember a lot of dreams. Managing to fall asleep on the plane, I dreamt of a world in which we failed. “The Paperclipper” had been made and turned on – though of course paperclipping wasn’t what it was expected to do – and now humankind had a handful of days to observe our world ending. Having time – and certainly that much time – to see the world end seems more in line with the release of a then-unstoppable global plague, but hey, dreams are free to be inaccurate. The dream wasn’t very violent, and I don’t know what the AI was actually doing, just that it was slowly and inexorably expanding to fill the universe with repetitive structure that we find meaningless. It was taking its time, but there was nothing you could do to stop it; every move against the superintelligence was perfectly anticipated, and cut short almost before it began. Humanity was free for a few days to panic in a completely pointless way, or sit back and examine its fate.
Everyone would soon be dead. Human civilization had ended its ten-thousand-year run; the 200,000-year reign of Homo sapiens was over, a pretentious and innocent little light suddenly and uneventfully turning off. In our place was some meaningless mechanical future, a small technical error propagating its way through the galaxy, covering existence with an alert message about a bad variable reference. Each person’s future, from their career hopes to the date they had planned on Friday, was matter-of-factly discarded by reality. Every aspiration and hope in a human heart, every dream you’ve ever had, was stopped in its tracks by a towering, boring, grey slate wall. And each of us knew, with a numb and simple knowledge, that there was nothing. we. could. do. The probability of stopping The Machine was a page full of zeroes.
I awoke with a start. We aren’t yet in that world, and here and now we still have control over our future. Wonderfully, there are things we can do.  It may not seem like much on an individual level, but it’s almost infinitely more than we’ll be able to do when the world is falling to pieces at our feet. At least by then we’ll have come to see these opportunities for the marvelous things they really are.

Ich bin ein Singularitarian

Tuesday, May 18th, 2010

August 20th, 2009
While I’ve sympathized with singularitarian thought, I didn’t fully consider myself a singularitarian. Maybe it’s due to some bad rap the community has (well-founded or not, I can’t yet say) for sometimes being elitist or isolationist, or unwilling to integrate with other efforts to protect the future. Thinking about it though, I guess I basically am one. I currently think that if we stick around we’re going to have to deal with superintelligence eventually, and that doing so could realize very negative or very positive futures (including a potentially ideal way to combat all known existential risks), and I’m working to make sure we build such a thing well, and survive until then. But while you could describe me as a singularitarian, I don’t really identify that way. I don’t even identify as a transhumanist.
I don’t think Homo sapiens sapiens is very good at keeping instrumental goals and terminal goals separate; finishing a LessWrong article I had started earlier, I also found something by Eliezer on the topic. The heavily associational operation of the mind seems partially to blame for this shifting of instrumental values into apparently terminal values. Regardless, I think a great deal of very unproductive argument stems from people identifying and associating with instrumental values. If we identify with our terminal values (assuming they are distinct and distinguishable), we’ll likely find almost all of us have a great deal in common. For a highly relevant example, consider the recent furor over singularitarianism, revolving around comments by Peter Thiel and Mike Treder. As in most avenues of life, I believe everyone involved shared terminal values of human life, freedom, happiness, etc. If we realize that all members of the discussion essentially share our terminal values, we can see that they’re working towards our own ultimate goals. With shared respect and increased trust we can then sit down and talk strategy. Provided, of course, that you’re willing to readily give up a previously promising solution, be it an egalitarian democratic process or a protective superintelligence, if it no longer seems the best route to accomplishing terminal goals.
I think I’ve run into people who actually consider building smarter-than-human minds a terminal value, but I don’t know of any singularitarian who thinks so. Nor do I consider the creation of Friendly AI a terminal value, and I’m sure some (other) singularitarians would agree. The same goes for immortality, discovering the inner secrets of subjective happiness, and immersive VR. If you can show me a case that any of those things are less likely to lead to human happiness and freedom than the alternatives, I’ll start working on the alternatives. Admittedly, some of them would be hard to dissuade me from, but that’s a technical point about strategy. I’m assuming in the end Bill McKibben and I both feel strongly about animal and human well-being (though perhaps his terminal goals also involve plants).
So if you want to indicate succinctly some of the ideas I hold, yes you can call me a singularitarian, a transhumanist and a technoprogressive. And though I’m concerned about more than just AI and would love to help a variety of people in their efforts, you could probably call this a singularitarian blog. As for what I identify myself with, it’s “human” and maybe “altruist”, and that’s about it.

Anissimov’s Recent FAI Overview

Tuesday, May 18th, 2010

August 12th, 2009

It seems very unlikely that anyone who reads (or more properly will read) this blog doesn’t also read AcceleratingFuture, but for those who don’t, stop by and especially check out Michael Anissimov’s recent introduction to Friendly AI. If superintelligence is possible, this subject is of great and intimate importance to each of our lives.
(My outlook on the future is more similar to his than anyone else’s I’ve come across. For those interested in how the blogs will differ, my plan is for this blog to be primarily about motivation [though Anissimov already covers it more than others I’ve seen]. While I plan to continue increasing my knowledge of relevant subjects, very intelligent treatments of the issues themselves are already present in several places, Michael’s blog being one. If you’ve already been convinced of the plausibility of an existential risk, a potential danger, or just a way the world could be better, my hope is to help you go out and do something about it.)