Quora archive · Mar 05, 2012, 04:39 PM PST

Question

If self-aware, intelligent, sentient AI turned out to be sociopathic, might it eventually determine humans are a danger to the existence of the planet and of the AI itself?

Answer

This does not require self-awareness, intelligence, or sentience. All it needs is self-regulating feedback loops with sufficient power. That is the premise behind the Gaia hypothesis and M. Night Shyamalan's The Happening. Both are overwrought, over-interpreted versions, but the basic idea is sound. You may want to look up the concept of autopoiesis.
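To make "self-regulation loops with sufficient power" concrete, here is a minimal sketch of my own (a single proportional feedback loop in Python; the setpoint, gain, and "temperature" framing are illustrative assumptions, not anything from the answer). The loop damps out a perturbation with no model, no goal, and no awareness of what caused the deviation:

```python
SETPOINT = 20.0  # the value the loop happens to stabilise around
GAIN = 0.3       # feedback strength ("sufficient power")

temperature = SETPOINT
trace = []
for step in range(100):
    if step == 10:
        temperature += 8.0  # a sudden external perturbation
    # Dumb proportional feedback: push back toward the setpoint,
    # with no representation of what caused the deviation.
    temperature += GAIN * (SETPOINT - temperature)
    trace.append(temperature)

print(round(trace[10], 2), round(trace[-1], 2))  # perturbed, then back near 20.0
```

Nothing in the loop "knows" a perturbation occurred; the counteraction falls out of the arithmetic. That is the whole point: suppression of a disturbance requires power in the feedback path, not cognition.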

http://en.wikipedia.org/wiki/Aut...

You are assuming that a cognitive loop is necessary (developing situational awareness, building a mental model, adopting intentions, computing beliefs and desires to conclude that humans are bad, deliberately planning to get rid of them...). This is in fact the same fallacy as the one behind the "clock implies clockmaker" argument for "intelligent design." Just as dumb, blind, one-step-ahead evolution can produce complex life, equally dumb processes can "decide" to kill off a species.
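The "dumb, blind, one-step-ahead" point can be sketched in code, in the spirit of Dawkins's weasel program (my illustration; the answer does not cite it): pure random mutation plus keep-the-best-right-now selection, with zero foresight or planning, still reliably reaches a complex target:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Count characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character independently has a small chance of changing.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c for c in s
    )

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    # Strictly one step ahead: keep whichever of 100 mutants
    # (or the parent itself) scores best right now.
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=score)
    generation += 1

print(generation, parent)
```

No individual step involves a model, an intention, or a plan, yet the process converges; the same logic runs in reverse when a blind selection pressure drives a species out.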

In one sense this has already happened in the form of declining birth rates.

This seems related to, or a follow-on from, an earlier, more general question. The whole direction of questioning is conceptually weak, though it may make for entertaining sci-fi storytelling.