
A trust graph is essentially an ad hominem though.

What if we instead could provide the tools to establish a soundness graph. Evaluate arguments on their own merits rather than their source.

Even arguments based on false premises can be sound. Just as arguments can be based on true premises and still be unsound. Helping people identify which is which should at least raise the quality of disagreements.




Even arguments based on false premises can be sound. Just as arguments can be based on true premises and still be unsound. Helping people identify which is which should at least raise the quality of disagreements.

When its premises are false, an argument is always unsound. But it can still be valid. A sound argument is one that is both valid and has true premises.
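To make that distinction concrete, here's a minimal sketch (my own illustration, not from this thread): it brute-forces truth assignments to check whether an argument form is valid, then checks soundness against the premises' actual truth values.

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """An argument form is valid iff no truth assignment makes
    all premises true while making the conclusion false."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# Modus ponens over two propositions P and Q:
#   premise 1: P -> Q,  premise 2: P,  conclusion: Q
premises = [lambda p, q: (not p) or q, lambda p, q: p]
conclusion = lambda p, q: q

print(is_valid(premises, conclusion, 2))  # True: the form is valid

# Soundness additionally requires the premises to actually be true.
# "If the moon is cheese, it's edible; the moon is cheese; so it's
# edible" has a valid form but is unsound: premise 2 is false in fact.
actually_true = [True, False]  # real-world truth of each premise
sound = is_valid(premises, conclusion, 2) and all(actually_true)
print(sound)  # False: valid but unsound
```

So a machine can verify validity purely from the argument's structure; soundness is the harder part, because it drags in facts about the world.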

About building a soundness/validity graph: I've dabbled with building a webapp for that (though the graph is more implied than visual). It's still very basic, but if someone has ideas about where exactly it should go, or how it could be engaging to a community of critical thinkers, please contact me.

https://arguably.herokuapp.com/

http://github.com/gregoor/arguably


Thanks for the correction. Will certainly have a look at the app.

My own thinking is that it has to be less of an app and more of a protocol/federated thing augmenting existing channels. Think something like a GitHub bot doing automated reviews, backed by an SO-like community of metadata authors annotating news articles and such.
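One possible shape for a single piece of that annotation metadata (a sketch of the idea above; every field name here is invented for illustration, not an existing protocol):

```python
# Hypothetical annotation record a metadata author might publish
# against a news article (all field names are made up):
annotation = {
    "target_url": "https://example.com/news-article",
    "quote": "X causes Y",
    "premises": ["X correlates with Y", "correlation implies causation"],
    "conclusion": "X causes Y",
    "valid": True,                   # the conclusion follows from the premises
    "premises_true": [True, False],  # the second premise is disputed
    "author": "metadata-author-42",
}

# A review bot could then flag the argument as valid but unsound:
sound = annotation["valid"] and all(annotation["premises_true"])
print(sound)  # False
```

The point is that validity can be annotated once per argument form, while premise truth stays a separately contestable field, which is where the community part comes in.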


That "soundness graph" is the gold standard of reasoning that we're consistently not achieving. Probably because it's too computationally expensive in general for our brains. So instead we use shortcuts.

As a shortcut, a trust graph is actually pretty good. Consider this example monologue:

"This guy says really interesting things about cars, but my good friend Sally the Car Mechanic says he's talking nonsense; she's an expert in the domain, so I'll approach the new guy with a huge dose of scepticism."

Note how the trust graph implicitly takes care of known unknowns and unknown unknowns - I know little about cars, Sally knows a lot, so she's able to evaluate the situation better than me. Note also how it handles intent - Sally is my good friend, I trust her to have my best interest in mind, so I know what she's saying is her real opinion, and not e.g. an attempt to keep me as a customer of her workshop.

Intent is hard to judge, but it's unfortunately very important when dealing with information that's not directly and independently testable (which is most of it, especially conclusions drawn from testable facts). Trust graphs, or Evidence-based Ad Hominems™, are a very powerful shortcut for evaluating information.


In human to human connections it's probably a good proxy. I was thinking more of what would be possible with machine augmented reasoning.

Something like code reviews / peer review, but more broadly applied


Ad hominem is wrong for Boolean logic, fine for Bayesian reasoning.





