> However, most of real life is not as clear-cut. Deriving the truth of a statement may depend on multiple potentially faulty pieces of evidence that must be taken into account together. For this, one needs to assign probabilities.
This is what the fuzzy people want you to believe. The logicians have a better answer: for this you need more context. E.g. in programming you would add types and pre- and post-conditions, not declare that a statement is 85% true, as the current AI hype pretends.
I'm all for using types and pre- and post-conditions where applicable, but I don't see how they would be a useful replacement for probabilities in the situations where probabilities apply. Could you elaborate?
To give an example of where I think probabilities would be used: consider a recognition AI that should figure out who someone is. You have a phone, on which there are some photos of its owner, some voice recordings, and some text messages. For each of those, the AI can assign a probability that e.g. my voice matches the recordings, my face matches the photos, and my writing style matches the texts. Then it could combine these into an aggregate probability estimate that the phone belongs to me.
How would you use types and pre- and post-conditions to solve this problem?
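To make the "combine" step concrete, here is a rough sketch of the kind of thing I have in mind: a naive odds-based combination, assuming the signals are independent, with made-up likelihood ratios and a 50/50 prior (all the numbers are purely illustrative):

```python
# Sketch: combine independent per-signal match evidence into one aggregate
# probability that the phone belongs to me. All numbers are invented.

def combine(prior, likelihood_ratios):
    """Update prior odds with one likelihood ratio per piece of evidence."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical likelihood ratios: P(signal | my phone) / P(signal | not my phone)
voice_lr = 0.90 / 0.10   # voice matches the recordings
face_lr  = 0.80 / 0.20   # face matches the photos
text_lr  = 0.60 / 0.40   # writing style matches the texts

print(combine(0.5, [voice_lr, face_lr, text_lr]))  # ~0.98
```

(The independence assumption is itself a simplification, but the point is that the output is a degree of belief, not a type check.)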
Oh, okay... are you referring to fuzzy logic where statements have a partial truth value?
I'm (mostly) referring to the case where the truth value is either true or false, but where you aren't sure, so you can say "80% probability this is your phone".
There are also cases where truth values aren't as clear-cut, which I also mention, such as the question of whether or not something IS a chair. (Is a chair taped to the ceiling still a chair? Is a log I sit on out in the middle of the forest a chair? Etc.)