CFAR should explicitly focus on AI safety

https://arbital.com/p/6wx

by Stephanie Zolayvar Dec 16 2016


The Center for Applied Rationality (CFAR) has historically had a "cause-neutral" mission, but it recently revised that mission to focus in part on AI safety efforts.


Comments

Anna Salamon

I want a wrong question button!! :/

Anna Salamon

CFAR should be about "Rationality for its own sake, for the sake of existential risk". Which is totally different. I just, um, haven't figured out how to say the actual thing clearly. Help very welcome.

Eric Rogstad

In other words, promoting this claim as worded is misleading?

Anna Salamon

Uh, well, it's hard to reply to, or something? Like, it wants to jam the conversation into questions about whether the claim is "true" or "false", instead of questions about what is meant by it or what third alternatives might be available or something?

Eric Rogstad

I'd be interested to know if you find yourself having that feeling a lot, while interacting with claims.

If it's a small minority of the time, I think the solution is a "wrong question" button. If it happens a lot, we might need another object type: something like a prompt-for-discussion rather than a claim-to-be-agreed-with.

Timothy Chu

Addressing the post directly: a focus on AI risk feels like something worth experimenting with.

My lame model suggests that the main downside is the risk it poses to the brand. If so, experimenting with AI risk in the CFAR context seems like a potentially high-value avenue of exploration, and brand damage can be mitigated.

For example, if it turned out to be toxic for the CFAR brand, the same group of people could spin off a new program called something else, and people may not remember or care that it was the old CFAR folks.

Connor Flexman

Along with "Growing EA is net-positive", anything with a large search space + value judgment seems like it's going to have this issue.