If I find myself being convinced by the argument, does that mean I should adopt epistemic learned helplessness in response, or not adopt epistemic learned helplessness in response?
I'm partly being facetious, but it would be interesting to try to use a non-argumentative approach to persuade someone to adopt one; I'm just not exactly sure what that would look like.
As a kid I remember being told "brush your teeth in circles, it's better" and thinking "I'm sure something else will be recommended in ten years, so I'm just going to go back and forth horizontally like I want to." Sure enough, circles clean more plaque but push your gums up, so downward flicks were recommended instead. Maybe a dentist can weigh in on current tooth-brushing practice... That said, was I better off with my inferior method? That's kind of the crux of it. If we're blown about by every plausible theory, is that better than being blown about by nothing? It seems like this is a nested Bayesian decision problem that needs to incorporate switching costs, which I'd guess are trivial for some things and quite large for others.
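To put rough numbers on that (every parameter below is invented purely for illustration), the always-switch strategy only beats stubbornness when the expected gain per recommendation exceeds the cost of switching:

    # Toy model: is it worth adopting each new recommendation as it arrives?
    # All parameters are made-up placeholders, not data.

    def net_value(n_recs, p_improvement, benefit, switch_cost):
        """Expected net value of switching to all n_recs new recommendations."""
        return n_recs * (p_improvement * benefit - switch_cost)

    # Cheap switches (a new brushing motion): following the theory du jour pays off.
    print(net_value(n_recs=10, p_improvement=0.6, benefit=1.0, switch_cost=0.2))  # 4.0

    # Expensive switches (changing careers on a plausible argument): it doesn't.
    print(net_value(n_recs=10, p_improvement=0.6, benefit=1.0, switch_cost=0.8))  # -2.0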
If you try to rapidly see and comprehend an ultimate truth head-on, you pay the price of only being able to think about and express such truths in forms that look insane to outsiders (terrorism, fascism, conspiracy theory, etc.), with correspondingly drastic consequences for your well-being and that of others.
But if you want to live a healthy, ordinary life, you shade yourself from some of those truths, knowingly or not. You express them cryptically and meditate on them in a deliberately obfuscated way, or dismiss them for the moment. You do not allow that knowledge to throw you around. That doesn't mean you don't act on the knowledge at all (which is how epistemic learned helplessness is presented here), so much as that you find indirect paths to acting on it that don't trip your insanity alarms.
Culture itself demonstrates this property. It changes rapidly enough, but it does so gradually, most of the time. Reality in the year 1985 meant something recognizable yet quite a bit different - in food, fashion, and music; in cultural attitudes, politics, and what you could and couldn't say; and in everyday uses of time and activity. Some of the things that were taken seriously then are dismissed now, and some of the things that were laughable then are taken very seriously now. And over the thirty years that transpired, many of the same people are still around, but nearly all have changed their status and outlook in some way. Many have led reasonably content lives while doing so. Isn't that amazing?
I've thought down these paths a lot, and I came to a similar but different conclusion. It's not simply that I don't trust argument - I weight arguments by the evidence behind them.
Take the simulation argument, for instance: if you're contemplating the probability that we are living in a lifelike simulation, you must factor in the number of lifelike simulations you have seen.
>> you must factor in the number of lifelike simulations you have seen
Right. I take the position that there are problems that are unsolvable by technology; simulating the universe seems likely to be in that category. That position is fundamentally unprovable, but call it a hunch. Also, Occam's Razor argues against it, FWIW.
Yes, the full simulation probability has to be something like: the probability that simulations are possible, times the probability that simulations have been discovered, times the probability that simulations are numerous, times the probability that simulations are 'unaware', times the (n-1)-in-n probability that you are in one of the simulations rather than base reality. The size of the (n-1)-in-n factor is pretty irrelevant compared to the other factors.
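As a rough sketch (every probability here is a placeholder, not a claim), multiplying the chain out shows why the (n-1)-in-n term barely matters:

    # Toy version of the chained simulation probability.
    # All of the probabilities below are invented placeholders.

    p_possible   = 0.5         # simulations of this fidelity can exist at all
    p_discovered = 0.5         # some civilization actually builds them
    p_numerous   = 0.5         # enough are run that n is large
    p_unaware    = 0.5         # inhabitants can't tell from the inside

    n = 1000                   # simulated worlds per base reality
    p_indexical = (n - 1) / n  # chance you're in a simulation, given all the above

    p_simulated = p_possible * p_discovered * p_unaware * p_numerous * p_indexical
    print(p_simulated)  # ~0.0624: the upstream factors dominate; (n-1)/n is nearly 1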
It's hard to make a compelling argument for why a particular problem will never be solved. However, I think it's a reasonable supposition that such unsolvable problems exist. Resurrecting the dead, for instance. In order to faithfully simulate the universe, you'd apparently need complete understanding of the laws of physics. And, you'd probably need a tremendously large (and presumably very power-hungry) computer to run the simulation. There's no particular reason to think that physics will ever be completely understood, and the computer might need more power than all the stars in the universe can supply.
Again, it's extremely difficult to argue compellingly for why something can't be solved: even if it seems to be ruled out by known laws of physics (like traveling faster than light), you can always argue that new laws will be found someday that will be more favorable than the current ones. That argument never terminates, because you don't know what you don't know.
Interesting that you still accepted without question that brushing at all was important. You didn't simply say "well then, who can know?" and quit brushing altogether.
I've been thinking lately that "do the best that your intuition can" is the optimal way to solve these sorts of mini tasks. That way, when your intuition is right about 10,000 things in your life, you have a bit of leeway with the other 10,000 it's wrong about.
I'd imagine that this really only works as long as your intuition is kept in constant check, though.
I like this heuristic, since for things for which it is effective, intuition basically _is_ rationality, on a species-level timescale. The problem is the things for which it is not effective. The other problem is knowing which kind of thing you're dealing with, since determining that requires intuition.