[summary: A 'preference framework' is a way of deciding which outcomes an agent terminally prefers. 'Preference framework' is a broader term than 'utility function', since it also includes structurally complicated meta-utility functions, such as those appearing in some proposals for Utility indifference or Moral uncertainty.]
A 'preference framework' refers to a fixed algorithm, possibly one that updates or changes in other ways over time, that determines which terminal outcomes the agent [prefers]. 'Preference framework' is a more general term than 'utility function', since it also covers structurally complicated generalizations of utility functions.
As a central example, the utility indifference proposal has the agent switching between utility functions $~$U_X$~$ and $~$U_Y$~$ depending on whether a switch is pressed. We can call this meta-system a 'preference framework' to avoid presuming in advance that it embodies a VNM-coherent utility function.
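To make the shape of such a meta-system concrete, here is a minimal sketch in Python. All names here (`utility_x`, `utility_y`, `SwitchingPreferenceFramework`, the string outcomes) are hypothetical illustrations, not part of any actual proposal; in particular, real utility indifference proposals add correction terms so the agent is indifferent to whether the switch is pressed, which this sketch omits.

```python
from typing import Callable

# An 'outcome' is left abstract; strings are used here purely for illustration.
Outcome = str
UtilityFunction = Callable[[Outcome], float]


def utility_x(outcome: Outcome) -> float:
    """Hypothetical 'normal operation' utility function U_X."""
    return 1.0 if outcome == "task_completed" else 0.0


def utility_y(outcome: Outcome) -> float:
    """Hypothetical 'suspend operation' utility function U_Y."""
    return 1.0 if outcome == "safely_suspended" else 0.0


class SwitchingPreferenceFramework:
    """A meta-level rule fixing which utility function is currently in force.

    The framework itself is a fixed algorithm, but the object-level
    preferences it yields change when the switch is pressed. Nothing here
    guarantees that the composite system behaves like a single
    VNM-coherent utility function, which is why the broader term
    'preference framework' is useful.
    """

    def __init__(self, u_x: UtilityFunction, u_y: UtilityFunction):
        self.u_x = u_x
        self.u_y = u_y
        self.switch_pressed = False

    def press_switch(self) -> None:
        self.switch_pressed = True

    def utility(self, outcome: Outcome) -> float:
        # Evaluate outcomes under U_Y after the switch, under U_X before.
        return self.u_y(outcome) if self.switch_pressed else self.u_x(outcome)


framework = SwitchingPreferenceFramework(utility_x, utility_y)
assert framework.utility("task_completed") == 1.0
framework.press_switch()
assert framework.utility("task_completed") == 0.0
assert framework.utility("safely_suspended") == 1.0
```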
An even more general term would be [decision_algorithm], which doesn't presume that the agent operates by preferring outcomes at all.