Marcus Hutter's AIXI is the perfect rolling sphere of advanced agent theory - it's not realistic, but you can't understand more complicated scenarios if you can't envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of using infinite computing power to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with prior probabilities weighted by their algorithmic simplicity, and updating their probabilities based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. If AIXI actually existed, it wouldn't be a god; it'd be something that could tear apart a god like tinfoil.
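To make the shape of the procedure concrete, here is a minimal Python sketch under loudly labeled assumptions: the `Hypothesis` class, the two hand-picked toy hypotheses, and every function name below are invented for illustration. Real Solomonoff induction mixes over *all* programs for a universal Turing machine (and is uncomputable), and real AIXI plans out to a time horizon rather than one step ahead.

```python
"""Toy AIXI-flavored agent over a tiny, hand-enumerated hypothesis class.

A sketch, not Hutter's construction: a finite list of hypotheses stands in
for the universal mixture, and a made-up `length` field stands in for
program length under a universal Turing machine.
"""
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Action = int
Percept = Tuple[int, float]                      # (observation, reward)
History = Tuple[Tuple[Action, Percept], ...]

@dataclass
class Hypothesis:
    name: str
    length: int                                  # stand-in for program length
    # Maps (history, action) to a distribution over percepts.
    predict: Callable[[History, Action], Dict[Percept, float]]

def prior(h: Hypothesis) -> float:
    # Simplicity-weighted prior: shorter "programs" get more mass (2^-length).
    return 2.0 ** -h.length

def posterior(hyps: List[Hypothesis], history: History) -> Dict[str, float]:
    # Bayesian update: weight each hypothesis by how well it predicted the
    # percepts actually observed after the actions actually taken.
    weights: Dict[str, float] = {}
    for h in hyps:
        w = prior(h)
        past: History = ()
        for action, percept in history:
            w *= h.predict(past, action).get(percept, 0.0)
            past += ((action, percept),)
        weights[h.name] = w
    z = sum(weights.values()) or 1.0
    return {name: w / z for name, w in weights.items()}

def expected_reward(hyps, post, history: History, action: Action) -> float:
    # Mixture expected reward of one action (one-step horizon only).
    total = 0.0
    for h in hyps:
        for (obs, reward), p in h.predict(history, action).items():
            total += post[h.name] * p * reward
    return total

def best_action(hyps, history: History, actions: List[Action]) -> Action:
    post = posterior(hyps, history)
    return max(actions, key=lambda a: expected_reward(hyps, post, history, a))

# Two toy "computable universes": reward iff action 1, or iff action 0.
always_a1 = Hypothesis("reward iff action 1", length=3,
    predict=lambda hist, a: {(0, 1.0): 1.0} if a == 1 else {(0, 0.0): 1.0})
always_a0 = Hypothesis("reward iff action 0", length=5,
    predict=lambda hist, a: {(0, 1.0): 1.0} if a == 0 else {(0, 0.0): 1.0})

history: History = ((1, (0, 1.0)),)              # took action 1, got reward 1
print(best_action([always_a1, always_a0], history, actions=[0, 1]))  # -> 1
```

Running this prints `1`: after one observation, all posterior mass sits on the only hypothesis consistent with the history, and the agent repeats the rewarded action. Everything hard about AIXI lives in what this sketch fakes: the hypothesis class of all programs and the unbounded lookahead.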
[summary(Brief): AIXI is the perfect rolling sphere of advanced agent theory, an ideal intelligent agent that uses infinite computing power to consider all computable hypotheses that relate its actions and sensory data to its rewards, then maximizes expected reward.]
[summary(Technical): Marcus Hutter's AIXI combines Solomonoff induction, expected utility maximization, and the Cartesian agent-environment-reward formalism to yield a completely specified superintelligent agent that can be written out as a single equation but would require a halting oracle to run. The formalism requires that percepts, actions, and rewards can all be encoded as integer sequences. AIXI considers all computable hypotheses, with prior probabilities weighted by algorithmic simplicity, that describe the relation of actions and percepts to rewards. AIXI updates on its observations so far, then maximizes its next action's expected reward, under the assumption that its future selves up to some finite time horizon will similarly update and maximize. The AIXI$tl$ variant requires (vast but) bounded computing power, and only considers hypotheses under a bounded length $l$ that can be computed within time $t$. AIXI is a central example throughout value alignment theory; it illustrates the Cartesian boundary problem, the methodology of unbounded analysis, the Orthogonality Thesis, and seizing control of a reward signal.]
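The "single equation" of the technical summary is usually rendered roughly as follows (following Hutter's formulation; exact notation varies across presentations):

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \; \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $a_i$, $o_i$, and $r_i$ are the actions, observations, and rewards at each step, $m$ is the time horizon, $U$ is a universal Turing machine, and $\ell(q)$ is the length of program $q$. The inner sum over programs is the Solomonoff-style simplicity-weighted mixture; the alternating $\max$ and $\sum$ operators are the expectimax search over AIXI's own future actions and the environment's responses.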