(Should I be replacing 'approval-directed' with 'act-based' in my future writing?)
The intended meaning is that the AI isn't trying to do long-range forecasting out to a million years later; that part is up to the humans. My understanding of your model of act-based agents is that they would carry out this long-range forecast internally, as part of predicting which short-term strategies humans would approve. A Genie doesn't model its programmers linking long-term outcomes to short-term strategies and then output the corresponding short-term strategies; a Genie implements the short-term goals selected by the programmers (which will be a good thing if the programmers have successfully linked short-term goals to long-term outcomes).