The concern is the scenario where you have a preference-limited AI that already has enough computing power and potential intelligence to be extremely dangerous, and it comes to contain something smaller than itself but unlimited and hostile. Your genie has a lot of cognitive power but, by design of its preferences, does no more than a fraction of what it could; if that's a primary scenario you're optimizing for, then having your genie think deeply about possible hostile superintelligences seems potentially worrisome. In fact, it seems like a case of, "If you try to channel cognitive resources this way but ignore this problem, of course the AI just blows up anyway."
I agree that, like a large subset of potential killer problems, this would not be high on my list of things to explain to people who were already having trouble "taking things seriously", just as I'd try to phrase everything in terms of scenarios with no nanotechnology, even though I think the physics argument for nanotechnology is straightforward.