At this point, I think the evidence points away from there being any deeply useful form of optimality for agents in modal combat. It turns out that what you can do with provability logic is more restrictive than what you can do with logical inductors, and for that reason most of my current thinking about decision theory happens in the logical inductor setting instead.
Accordingly, I now think of modal combat as a way of studying and illustrating a few basic concepts in decision theory for advanced agents, but not as an essential part of MIRI's current research direction.
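For readers who haven't seen modal combat, the basic concept it illustrates best is Löbian cooperation. The standard example from that literature is FairBot, the agent that cooperates in the Prisoner's Dilemma exactly when it can prove its opponent cooperates with it. The sketch below (with p as my shorthand for the relevant cooperation sentence) shows why FairBot cooperates with itself.

```latex
% FairBot: cooperate iff you can prove the opponent cooperates with you.
\[
  FB(X) = C \;\leftrightarrow\; \Box\bigl( X(FB) = C \bigr)
\]
% Playing FairBot against itself, write p for the sentence "FB(FB) = C".
% The definition above becomes the modal fixed point
\[
  p \;\leftrightarrow\; \Box p ,
\]
% so in particular \Box p \to p is provable. Lob's theorem,
\[
  \Box( \Box p \to p ) \to \Box p ,
\]
% then gives \Box p, and hence p: FairBot provably cooperates with itself,
% while still defecting against agents that defect against it.
```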