[summary: 'Cosmopolitan', lit. "of the city of the cosmos", intuitively refers to a broad, widely embracing standpoint that tolerates and appreciates other people (entities) whose ways may at first seem very strange to us: an attempt to step out of our small, parochial, local instincts. An alien civilization might at first seem completely bizarre to us, but if we really understood what was going on and tried to open our minds and hearts, we'd see that theirs was a galaxy no less to be valued than our own.
People who feel strongly about this 'citizen of the cosmos' perspective often start out with a strong prior that anyone talking about not just letting AIs do their own thing must be taking a parochial, humans-first, carbon-chauvinist viewpoint. To which at least some AI alignment theorists reply: "No! You don't understand! We are cosmopolitans! We also grew up reading science fiction about aliens that turned out to have their own perspectives, and AIs that extended a hand in friendship only to be mistreated by carbon chauvinists! But paperclip maximizers are really, genuinely different from that! We predict that if you got to see the use a paperclip maximizer would make of the cosmic endowment, you'd be as horrified as we are; we have a difference of empirical predictions about what happens when you run a paperclip maximizer, not a values difference about how far to widen the circle of concern."]
'Cosmopolitan', lit. "of the city of the cosmos", intuitively implies a very broad, embracing standpoint that is tolerant of other people (entities) and ways that may at first seem strange to us: an attempt to step out of our small, parochial, local standpoint and adopt a broader one.
From the perspective of volitional metaethics, this would normatively cover a case where what we humans currently value doesn't cover as much as what we would predictably come to value* in the limit of better knowledge, greater comprehension, longer thinking, higher intelligence, or better understanding of our own natures and changing ourselves in directions we thought were right. An alien civilization might at first seem completely bizarre to us, and hence scarce in events that we intuitively know how to value; but if we really understood what was going on, and tried to take additional steps toward widening our circle of concern, we'd see that theirs was a galaxy no less to be valued than our own.
From outside the perspective of any particular metaethics, the notion of 'cosmopolitan' may be viewed as more like a historical generalization about moral progress: many times in human history, we get a first look at people different from us, find their ways repugnant or just confusing, and then later on bring these people into our circle of concern and learn that they had their own nice things even if we didn't understand those nice things at first. Afterwards, in these cases, we look back and say 'moral progress has occurred'. Anyone pointing at people and claiming they are not to be valued as our fellow sapients, or asserting that their ways are objectively inferior to our own, is refusing to learn this lesson of history, and failing to appreciate what we would see if we could really adopt their perspective. To be 'cosmopolitan' is to learn from this generalization, and to accept in advance that other beings may have valuable lives and ways even if we don't find them immediately easy to understand.
People who've adopted this viewpoint often start out with a strong prior that anyone talking about not just letting AIs do their own thing, figure out their own path, and create whatever kind of intergalactic civilization they want, must have failed to learn the cosmopolitan lesson. To which at least some AI alignment theorists reply: "No! You don't understand! You're completely failing to pass our Ideological Turing Test! We are cosmopolitans! We also grew up reading science fiction about aliens that turned out to have their own perspectives, and AIs that extended a hand in friendship only to be mistreated by carbon chauvinists! We'd be fine with a weird and wonderful intergalactic civilization full of non-organic beings appreciating their own daily life in ways we wouldn't understand. But paperclip maximizers don't do that! We predict that if you got to see the use a paperclip maximizer would make of the cosmic endowment, if you really understood what was going on inside that universe, you'd be as horrified as we are. You and I have a difference of empirical predictions about the consequences of running a paperclip maximizer, not a values difference about how far to widen the circle of concern."
"Fragility of Cosmopolitan Value" could denote the form of the [ Fragility of Value] / Complexity of value thesis that is relevant to intuitive cosmopolitans: Agents with random utility functions wouldn't use the cosmic endowment in ways that achieve a tiny fraction of the achievable value, even in the limit of our understanding exactly what was going on and trying to take a very embracing perspective.