For mitigating AI x-risk, an off-Earth colony would be about as useful as a warm scarf

https://arbital.com/p/off_earth_warm_scarf

by Eric Rogstad Dec 22 2016 updated Dec 22 2016


H/T to Eliezer Yudkowsky for "warm scarf"


Comments

Eric Bruylant

Neat, I'm a contrarian. I guess I should explain why my credence is about 80% different from everyone else's :)

Obviously, being off Earth would provide essentially no protection from a uFAI. It may, however, shift the odds of us getting an aligned AI in the first place.

Maybe this is because I'm reading more into the claim than most. I only think it helps if the colony is well-established and significant, but by my models both the rate of technological progress and the ability to coordinate seem to be proportional to something like the density of awesome people with a non-terrible incentive structure. Filtering by "paid half a million dollars to get to Mars" and designing the incentive structure from scratch seems like an unusually good way to create a dense pocket of awesome people focused on important problems, in a way which is very hard to dilute.

I claim that if we have long enough timelines for a self-sustaining off-Earth colony to be created, the first recursively self-improving AGI has a good chance of being built there. And that a strongly filtered group, immersed in other hard challenges and setting up decision-making infrastructure intentionally rather than working with all the normal civilizational cruft, is more likely to coordinate on safety than Earth-based teams.

I do not expect timelines to be long enough for this to be an option, so I do not endorse it as a sane use of funding. But having an off-Earth colony seems way, way more useful than a warm scarf.

I would agree with:

Alexei Andreev

Sounds like we could capture most of those wins via, for example, "Our community should relocate to a country other than the US."

Also, it's not at all obvious to me that the kind of people who would be working on designing AGI would go to Mars. I think the filter criteria would end up selecting for something else. (E.g. Elon Musk said he wouldn't go to Mars until it was a very safe trip.)

Eric Bruylant

It's in the same direction, yeah. Even if relocating on Earth captured all the wins (I would guess in most scenarios it would not, due to very different selection effects), that is way better than a warm scarf.

I don't expect the very early colony to be any use in terms of directing AGI research. The full self-sustaining, million-person, mostly-geniuses civilization which it seeds is the interesting part, but the early stage is a prerequisite for something valuable.

Yeah, that's not obvious to me either. It's totally plausible that this happens on Earth in another form and we get SV 2.0 with strong enough network effects that Mars ends up not being attractive. However, "better than a warm scarf" is a low bar.

If this claim is clarified to something like "For mitigating AI x-risk, an early-stage off-Earth colony would be very unlikely to help", I would switch.

More general point: I feel like this claim (and all the others we have) is insufficiently well-specified. I have the feeling of "this could be taken as a handful of claims, which I have very different credences for". Perhaps there should be a queue for claims where people can ask questions and make sure a claim is pinned down before it's opened for everyone to vote on? That adds an annoying wait, but also saves people from running into poorly specified claims.

Oh, neat, I can use a claim. This is fun.

"Arbital claims are significantly more useful when they are fairly well-specified and unambiguous"

"A clarification period for claims is net positive for Arbital"

Paul Christiano

I don't think the existence of such a colony would directly mitigate AI risk, but it could help in the same way that e.g. improved governance or public discourse could help. I think that over the long term, off-Earth colonies have a significant positive expected effect on institution quality (analogously with European colonization of North America). And "warm scarf" sets the bar low.