A simplified hypothetical form of the [known algorithm nonrecursive] path within the Value achievement dilemma. Suppose there were an effective world government with effective monitoring of all computers, or that for whatever other imaginary reason rogue AI development projects were simply not a problem. What would the ideal research trajectory for that world look like?
Usefulness:
- Highlight where safety shortcuts are being taken because we live in the non-ideal case.
- Let us think through what a maximally safe development pathway would look like, and why, without stopping every 30 seconds to worry about how we won't have time. This may uncover valuable research paths that could, on second glance, be done more quickly.
- Think through a simpler case of a research-program-generator that has fewer desiderata and hence fewer cognitive distractions.