Questions like these seem to me to have obvious unbounded formulations. If we're talking about a modern policy-reinforcement neural network, then yes, the notion of a separable goal is more ephemeral. Does this seem to agree with your own state of mind, or would you disagree that we understand the notion of 'goal concepts' in unbounded formulations, or…?
A concept is something that discretely or fuzzily classifies states of the world, or states of a slice through the world, into positive or negative instances. A "goal concept", for a satisficing agent, then describes the set of worlds that it's trying to steer the world into. The more general version of this is a utility function.
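For concreteness, here is a minimal sketch (not from the conversation itself) of that distinction in Python, where `WorldState`, `Concept`, and `Utility` are illustrative names I'm introducing, not terms from the dialogue: a goal concept is a classifier over world states, and it can be viewed as the special two-valued case of a utility function.

```python
from typing import Callable

# Illustrative stand-in for a fully specified world state (or a "slice" of one).
WorldState = dict

# A crisp concept: classifies world states into positive or negative instances.
Concept = Callable[[WorldState], bool]

# A fuzzy concept would instead assign each state a degree of membership in [0, 1].
FuzzyConcept = Callable[[WorldState], float]

# The more general object: a utility function over world states.
Utility = Callable[[WorldState], float]


def is_goal_state(goal_concept: Concept, world: WorldState) -> bool:
    """A satisficer only asks whether a world falls inside its goal concept."""
    return goal_concept(world)


def utility_from_goal_concept(goal_concept: Concept) -> Utility:
    """A goal concept as the special case of a utility function that takes
    only two values: 1 on goal worlds, 0 everywhere else."""
    return lambda world: 1.0 if goal_concept(world) else 0.0


# Toy, hypothetical goal concept: "the room temperature is in a comfortable band".
comfortable: Concept = lambda world: 18.0 <= world.get("temp_c", 0.0) <= 24.0

print(is_goal_state(comfortable, {"temp_c": 21.0}))                 # True
print(utility_from_goal_concept(comfortable)({"temp_c": 30.0}))     # 0.0
```

The point of the sketch is only the typing: a satisficer needs a yes/no (or fuzzy) membership test over worlds, while the generalization to a real-valued utility function lets the agent rank all worlds rather than merely accept or reject them.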