This Is Not a Military Endeavor

Recently on his blog, Ben Goertzel mentioned that he has had the experience of FAI-minded people telling him that if he got too close to AGI he might have to be killed, and then discussing how that might occur. Clearly, these were not advanced rationalists. I know this blog has a small audience and such ideas are probably very rare, but given the style of my material, I feel like making a statement. Given my background in gaming, science fiction, and fantasy, I find martial imagery and allegory inspiring. As someone concerned about existential risk, I find the idea of actual martial action irresponsible and foolish.

If the world gets to a point where it’s likely that an unfriendly AI is about to be released, it’s incredibly unlikely that only one group will be in a position to release it. It is also likely that there will be many more AGI teams than there are today. Trying to stop them all through attrition would be ridiculous. Even in such a situation, more widespread action would be possible through the (non-lethal!) intervention of governments convinced to step in, or research coalitions convinced to step back, by the efforts of organizations like SIAI, FHI, etc. That may be unlikely, and its effect small, but it is far more likely to do some good than attempted assassination ever could. And if AGI researchers were killed, significant blame would inevitably land on groups like SIAI. To say that this would hurt their credibility is an understatement.

The same goes for merely mentioning the idea of assassination to AGI researchers. The only way such threats could ever deter anyone is if they were actually believed to be credible, and that belief could only exist in a world in which groups like SIAI had already had their intellectual and academic reputations destroyed. That is not to say that destroying such groups’ reputations would in turn make the threats credible; death threats do far more to tear apart reputations than they do to cause fear.

There is too much at stake here to be stupid about this. It is not too hard for our evolved intuitions to suggest that we solve problems by eliminating “opponents”. But then, it’s not too hard for our evolved intuitions to screw us over generally. A second Unabomber would get to feel like a hero and a soldier, and everyone else would get to pay for it.

As I said in my first post on this blog, our enemies are human error and human hatred, not human beings.
