Paul, I'm having trouble isolating a background proposition on which we could more sharply disagree. Maybe it's something like, "Will relevant advanced agents be consequentialists or take the maximum of anything over a rich space?" where I think "Yes" and you think "No, because approval agents aren't like that" and I reply "I bet approval agents will totally do that at some point if we cash out the architecture more." Does that sound right?
I'll edit the article to flag that Nearest Neighbor emerges from consequentialism and/or bounded maximization over a rich domain in which values cannot be precisely and accurately hardcoded.