Re: "poking holes in things," what is an example of a proposal you would ask people to poke holes in? Do you think any of MIRI's agenda is in a state where holes can be poked in it? What about MIRI's results? Existing results seem only tangentially related to safety, and telling someone who cares about AI control to critique a result on logical uncertainty seems like a bad move. You can critique IRL on various grounds, but again, the current state is just that there are a lot of open questions, and I don't know what you would say here beyond listing the obvious limitations and remaining difficulties.
Comments
Eliezer Yudkowsky
Just so as not to leave you completely dangling here: how about utility indifference?