Arbital claims are significantly more useful* when they are fairly well-specified and unambiguous**

https://arbital.com/p/73s

by Eric Bruylant Dec 23 2016 updated Dec 23 2016



* At least 30% more valuable to people sharing models.

** Not Lojban level, but with some thought put into possible interpretations and clarifying wording.


Comments

Alexei Andreev

Sometimes ambiguous claims can be good too, just to get a quick sense of where people are at. And for some claims, it might be really hard to operationalize them. Like this one: "At least 30% more valuable to people sharing models" doesn't make much sense to me.

Eric Bruylant

Yep, there's at least high variability. Especially if the things it could be taken to mean are things people generally have similar credence in.

And, nods, this was partly a test of trying to disambiguate a claim, and I found it harder than expected / I think I did not do very well. Maybe just words would have been better rather than numbers, and more of them. Or maybe it's easier to post a simple version and have other people point out where it's ambiguous, rather than trying to clarify in a vacuum?

Satvik Beri

I think a good litmus test is "could two people both strongly agree (or strongly disagree) while actually holding opposing views?"

I also think it makes sense to err on the side of overly unambiguous claims, at least initially: the more restrictive you are, the easier it is to create good discussion norms.

Andrea Gallagher

If claims are primitives, then all the interesting conversations will be at a parent level, which will need to stitch claims together to make an argument and form a perspective. I think many of the claims I'm seeing now are not actually primitives, and really need discussion around them to hash out the meaning.

I would love to see some mechanism to break a claim into both its definitions of terms and its supporting arguments (cruxes, if we want to use that term).

Timothy Chu

This doesn't seem like that controversial a claim ("be specific rather than vague" is one of the most timeless heuristics out there), but it does seem worth highlighting.

I would like to add that my favorite claim so far ("Effective Altruism's current message discourages creativity") was not particularly well-specified ("creativity" and "EA's current message" are not very specific, imo).

Ted Sanders

(My first comment on Arbital. Hopefully it contributes.)

As someone who has traded on prediction markets for years, I agree with the sentiment.

Unfortunately, this claim itself seems really ambiguous. I voted neutral because I'm having a difficult time evaluating what the claim means. I appreciate the attempted clarification of 'at least 30% more valuable to people sharing models', but it leaves me confused. How is value measured? How would I be able to distinguish 20% more valuable from 40% more valuable? And who are these people sharing models? When and where are they doing their sharing?

I think we all agree that language will always have some wiggle room for uncertainty and interpretation. But in this particular case, I have no idea how to distinguish worlds where this statement is true from worlds where this statement is false. That's why I voted neutral.

I wish I could give a more constructive suggestion of how this claim could be reworded. I've spent a few minutes thinking about it but I don't have anything great. If anything, I'd remove the first asterisk.