If, in Newcomb's Problem, Omega read the agent's source code and decided to reward only agents whose algorithm output 'one box' by picking the first choice in alphabetical order, punishing all agents that behaved in exactly the same way due to a different internal computation, then this would indeed be a rigged contest. But in Newcomb's Problem, Omega cares only about the behavior, not about the kind of algorithm that produced it; and an agent can take on whatever behavior it likes; so, according to LDT, there's no point in saying that Omega is being unfair. You can make the logical output of your currently running algorithm be whatever you want, so there's no point in picking a logical output that leaves you to die in the desert.
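To make the distinction concrete, here is a minimal hypothetical sketch of the two kinds of predictor described above. All names (`behavior_omega`, `rigged_omega`, the two agents, the reward amounts) are invented for illustration and are not part of the original problem statement; the point is only that the fair Omega conditions on the agent's output, while the rigged Omega also conditions on how that output was computed.

```python
import inspect

def behavior_omega(agent):
    """Fair Omega: rewards based only on the agent's predicted choice."""
    predicted_choice = agent()  # cares only about the output
    return 1_000_000 if predicted_choice == "one box" else 0

def rigged_omega(agent, agent_source: str):
    """Rigged Omega: rewards only a particular internal computation."""
    predicted_choice = agent()
    picks_alphabetically = "sorted(" in agent_source  # inspects *how* the choice was made
    return 1_000_000 if predicted_choice == "one box" and picks_alphabetically else 0

def alphabetical_agent():
    # Chooses "one box" because it comes first alphabetically.
    return sorted(["one box", "two box"])[0]

def deliberate_agent():
    # Chooses "one box" by a different internal computation.
    return "one box"

# The fair Omega rewards both agents, since their behavior is identical.
print(behavior_omega(alphabetical_agent), behavior_omega(deliberate_agent))

# The rigged Omega rewards only the alphabetical agent, despite identical behavior.
print(rigged_omega(alphabetical_agent, inspect.getsource(alphabetical_agent)),
      rigged_omega(deliberate_agent, inspect.getsource(deliberate_agent)))
```

On this hypothetical framing, the LDT claim is that only the first kind of predictor appears in Newcomb's Problem, and against that predictor the agent's behavior, which it fully controls, is all that matters.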