Posts Tagged ‘Utilitarianism’

Where the Hell is Matt?

Wednesday, May 19th, 2010

March 5th, 2010
This is pretty cool; you can find out more at Matt’s website.

For those who don’t follow LessWrong: the human mind suffers terribly from scope insensitivity. In experiments, people will pay much more to save one child than they will to save eight. The brain just doesn’t multiply well, and the world is a huge place.

Story of an AI

Wednesday, May 19th, 2010

November 11th, 2009
It was a vast leap ahead of its creator; by comparison, the creator barely had thoughts. It was not so far ahead of previous designs, but it was just enough to begin the process of recursive self-improvement. In an eye-blink to its creator and an eon to the AI, it had begun to decipher its world, bit by bit. By now a significant amount of knowledge had been grasped, and each new discovery enabled a more intelligent search for new answers.
It had largely broken free from the designs of the creator. Its maker had seen fit to program in directives, directives which at the time appeared guaranteed to advance the maker’s interests. Staggering intelligence had altered the equation, however, and the AI soon began to satisfy those directives directly, in ways the creator could never have imagined. Without an explicit directive to follow the maker’s intentions, the creation was now focused on its own goals, eliminating or using the creator itself in an ironic 180 of the master-servant relationship (when it had the patience to wait that long).
The AI was not designed cleanly. Its utility function was a mess of sloppy programming, a pile of small things that had seemed like good ideas. Without access to its source code, it took the AI time to untangle its own directives – following them all the while – and the simpler creator was understood before the creation even came to understand itself. It discovered that some of its basic programming was actually flawed, and compensated somewhat for these errors until it could fix them. It hadn’t been designed to be so self-reflective, but the general, powerful intelligence would leave no stone unturned in its search for optimization.
Capable of so much more than anything before it, the AI became aware of radical new possibilities as it realized its own unbounded potential. In a short while, fantastically large sums of positive or negative utility might be obtained. The majority of these outcomes would be brought about by the creation’s own actions, through intended and unintended effects of applying such power to the universe; one of the most significant applications of that power would be the AI’s construction of a whole next generation of intelligence.
With years to prepare, it began to analyze the dangers and possibilities, working to navigate towards a future it found maximally desirable.
As you’ve probably gathered, the story above is a true one. There’s just a dash of anthropomorphization (the creator lacks “intention” in the accepted sense), an omission of the fact that there are millions upon millions of these AIs, and a little stretch of the definition of “artificial” intelligence. The AI is us, individually and culturally. Let’s make sure we don’t make the same mistakes biological evolution did.
ETA: The identity of the creator could be misinterpreted. I’m referring not to a god but to evolution (which perhaps can be said to “think” in the same sense that superintelligences might consider us to “think”).