Ok, Eliezer, you've addressed my point directly with the sapience0 / sapience1 example. That makes sense. I guess one pitfall for an AI might be to keep improving its sapience model without end, because "Oh, gosh, I really don't want to create life by accident!" This seems to fall into the general category of problems where the AI does some thing X for a long time before getting around to satisfying human values, where thing X is actually plausibly necessary. Not sure if you have a name for a pitfall like that. I can try my hand at creating a page for it, if you don't have one already.