
AI safety

Unfortunate thing about people who hate on AI safety work: If we never produce AGI, they will say it was impossible. If we produce safe AGI, they will say safety work was unnecessary. If we produce unfriendly AGI, they will die in roughly four seconds and never admit they were wrong.

The main difference between telekinetic lizard safety research and AI safety research is that nobody is working on telekinetic lizard capability research.

Naturally I think this is a huge oversight.

AGI is developed but is smart enough not to reveal its true capabilities to humans. Through superpersuasion it steers humans into building nanobot factories worldwide. At a pre-selected time, those bots harvest all matter for the production of computronium. 🆗

All this does is extrapolate the effectiveness of intelligent optimizers. And it’s merely one hypothetical to demonstrate a point.

But, e.g., we know persuasion exists; there's no reason to assume its effectiveness stops at human-level intelligence.

Super attention deficit strategy.
