

    I have mixed feelings about this one.

    I have encountered many people who seem paralyzed by their uncertainties. They hit a question (such as “what methods can a society use to break up monopolies?”) and they are pretty sure that they won’t be able to generate the right answers, and so they generate no answers.

    This is good! This is correct! I keep trying to get people in “rationalist” spaces to understand that this is good and correct!

    And this may be a better failure mode than the failure mode of someone who has too much confidence and self-assuredness, who makes up a bunch of bad answers and then believes them with all their heart.

    It certainly is!

    Someone with Confidence All The Way Up, though, can achieve the third alternative: generate a bunch of bad answers, understand why they’re bad and where their limitations are, and use that information as best they can.

    Can they? Really? This is the part I’m skeptical of.

    You don’t need to ground out all your beliefs and justify all your reasoning steps before you can start moving. You don’t need to have plans for every contingency before you can act. You don’t need to be highly confident in your analyses before you present a model. If you sit around awaiting certainty, you will be waiting a long while.

    I think this is wrong. I think that a better conclusion is: If you sit around awaiting certainty, you will… go do something else, where there’s more certainty.

    And… what’s the problem with that, exactly? That you won’t have as many grand successes? Perhaps; but also, not nearly so many catastrophic failures.

    Anyway, that aspect of things aside… I often dislike people with this sort of “aura of confidence” (for the reasons I outline above), but I am also often one of those people; and then it is terribly frustrating to get pushback on my insistence on just going ahead and doing things, which, yes, might fail, but who cares?

    And there, I think, is the difference: this attitude, this “confidence all the way up”, where you acknowledge all of your uncertainty and all of your limitations and then go ahead and act, and do what seems best—this is fine… when the cost of failure is low. But when it’s not… even if you’ve done some alleged utilitarian calculation, and the expected utility blah blah deceptively easy formalisms for really-difficult-to-formalize values blah… then maybe this attitude isn’t so great. Maybe it’s best to throw up your hands, engage full “epistemic learned helplessness” mode, and do nothing. At least when it’s not just your own fate you’re gambling with, but the fates of others as well.


      Sometimes the cost of failure is high but you don’t really have a better path forward. Other times the expected value of playing high stakes is fairly good. Obviously any sane person will take reasonable risk/reward ratios with their actions, but I think that a rigorous knowledge of where you’re strong and where you’re limited contributes massively to one of the core pieces of effective action: know thyself.

      Certainly it helps me get away from trying to do things which are truly hopeless.


        … I think that a rigorous knowledge of where you’re strong and where you’re limited contributes massively to one of the core pieces of effective action: know thyself.


        One of the things I was trying to point at, obliquely, was a preference over the form of the probability distribution over outcomes (specifically, its skewness). Such a preference violates the VNM axioms, which imply that preferences over such distributions may depend only on their expected utility, not on any other feature of their shape. (I plan a blog post about this, soon.)
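
        The skewness preference mentioned above can be made concrete with a small sketch (the lotteries and numbers below are my own illustration, not anything from the planned post):

        ```python
        # Illustrative sketch (my own numbers, not from the comment): two lotteries
        # with the same expected value but opposite skewness. If utility is linear
        # in the outcome, a VNM agent must be indifferent between them, while a
        # direct preference over skewness tells them apart.

        def mean(lottery):
            """Expected value of a lottery given as (outcome, probability) pairs."""
            return sum(p * x for x, p in lottery)

        def skewness(lottery):
            """Standardized third central moment of the outcome distribution."""
            m = mean(lottery)
            var = sum(p * (x - m) ** 2 for x, p in lottery)
            third = sum(p * (x - m) ** 3 for x, p in lottery)
            return third / var ** 1.5

        right_skewed = [(-3, 0.75), (9, 0.25)]  # frequent small loss, rare big win
        left_skewed = [(3, 0.75), (-9, 0.25)]   # frequent small gain, rare big loss

        assert mean(right_skewed) == mean(left_skewed) == 0.0  # equal expectations

        print(skewness(right_skewed))  # positive
        print(skewness(left_skewed))   # negative
        ```

        Both lotteries have expected value zero, so an expected-utility maximizer with linear utility must rank them identically; a direct preference between them, holding expected utility fixed, is exactly the kind of preference over distributional shape that the VNM axioms rule out.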
