DewDrop – A Formal Language for Social Networks (github.com/neyer)
51 points by fiatjaf on July 7, 2015 | 24 comments



I have no idea how this will turn out, but i have some intuition. Suppose you shorted bitcoins in 2010, selling 10,000 bitcoins you didn't own for $0.10 apiece. You'd earn yourself a profit of $1,000 - and if you didn't pay them back, you'd be $3,000,000 in debt today. I really think this kind of network could change the world. If you defect against people trying to make the world better, by trolling us - you are standing in the way of human progress. We are building a record book that essentially blocks trolls from the public discourse - is trolling that group of people really a good idea?

man what


I can't stop laughing at this paragraph. Sometimes hackers will build something that is just so detached from what real people are willing to do that it's amazing -- but this document takes it to a whole new level.


I have a huge problem with the word troll. Call me oldschool, but to me a troll is someone who claims to believe something when they really don't, and that just can never be proven either way. So why even go there? Trolls, just like anyone else, make statements; those statements are either off-topic or can be dealt with. If they're very aggressive, or just wrong in their claims, there is no need for the additional charge "troll". "Don't feed distractions", "don't feed negativity", "don't waste time on this, it's a red herring because [insert solid argument here]", having rules for accepted behaviour - those and more all work perfectly fine, and they have the added benefit of not making someone an unperson with the word "troll".

The author(s) really just could put two things they said in separate places together:

> Mob mentalities seize hold of "the villain of the moment", and no person has any incentive to stick up for someone who is the 'bad guy du jour' of the crowd, regardless of whether or not all claims are merited.

and

> With dewDrop, those making the baseless charges can just be denied, with the subject issuing a DISTRUST statement. Anyone who trusts the subject can now immediately know to distrust those issuing the baseless claims - as well as anyone who trusted them.

... surely you can see how the combination of those two things could be a problem, right? I mean, what is a "baseless claim", who gets to decide that, and how?
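
Just to make the mechanic concrete, here is roughly the cascade that second quote describes - my guess at the shape, not dewDrop's actual data model:

    # Roughly the cascade the quoted passage describes: one DISTRUST
    # statement flags the accuser and, transitively, everyone who
    # trusted them. Hypothetical graph shape, not dewDrop's real model.
    trusts = {
        "dave": {"accuser"},   # dave trusts the accuser
        "erin": {"dave"},      # erin trusts dave
        "me":   {"subject"},   # I trust the accused subject
    }

    def distrusted_after(accuser, trusts):
        """Everyone flagged once the subject issues DISTRUST(accuser)."""
        flagged, frontier = set(), {accuser}
        while frontier:
            flagged |= frontier
            frontier = {u for u, out in trusts.items() if out & frontier} - flagged
        return flagged

    print(sorted(distrusted_after("accuser", trusts)))
    # ['accuser', 'dave', 'erin'] - dave and erin are flagged wholesale,
    # which is exactly the mob dynamic from the first quote, just with
    # the polarity flipped.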

I've been baselessly called a troll before simply because my claims were NOT baseless, and I asked others to back up their claims about me. And the best bit is, the FB page admin let that thread exist as long as someone else calling me a troll had the last word; but when I replied, insisting that just calling someone names does not constitute addressing what they said, and that my points stand, the whole thread got deleted like 30 mins later, and I had to fish the whole comment thread out of a browser process dump with a hex editor because I was so dumbfounded.

In my eyes then and now, I wasn't trolling, I was throwing pearls to pigs; I posted a "wall of text" which I had indeed thought about quite a bit, in a forum where 99% of all responses to anything ever posted are just "you're doing a great job, keep it up, it would be so great if everybody was aware that you're just so great". Which is not even something I disagree with, I just wanted to add to that :/ But oh well, I was banned from posting too, though I only realized that weeks later when I pondered whether to translate something for that page, because they asked for volunteers.

So... sure, I can flag a whole page with tens of thousands of likes as "they all suck at reading comprehension, admin may even be dishonest", but where would the fact that I can back my claim up, and they can't - that I'm the one fishing that stuff out of memory which they swept under the rug - fit in there? What if I don't want to flag a whole page? What if I think someone is absolutely wrong about topic A, or for some reason unable to learn, but has no stake in topic B, or is intellectually honest about it even though they do?

Sorry for rambling, but the anecdote I described actually did hurt me a bit when it happened. It's the discussion equivalent of getting killed by a drone without trial, simply not cool. Now imagine one "trusted admin" being able to simply delete the voice of people from this vague group of everybody representing "human progress"... ugh!

It's tricky over the web, but in person, the idea is generally, the more healthy and sane you are yourself, the easier you notice the bad "vibrations" others send out. It's just we usually are all over the place or unhappy ourselves, but the less you are, the easier you notice the patterns and disguises around you. And while we don't have pheromones and body language and a lot of other things over the web, I do feel something more limited but similar applies there too, it just can't be formalized easily and tattooed to people's foreheads or user ID, just like we can't do that with character, sanity, intelligence, knowledge, and all the other things that go into the one word "trust".

"I trust this person" is just one simple word for something you could not fully describe in a book. I trust various people to various degrees, each relation is completely unique, and I cannot easily or at all transfer my experiences with a person in one area to another area. I would also never say "Oh, you need a (dis)trust relationship with person X you never met? Here, just copy mine". It's like I wouldn't make a copy of my toothbrush for them, I'd rather help them get their own. Of course that's not entirely true, we gossip and criticize or laud others in their absence all the time, and we take that into account; but I would rarely let that keep me from getting to know a person, at least from even getting a first impression of my own. I know people do that, but I consider it harmful.

If anything, the very idea that someone would rather talk about the person who said something, than what they said, would count against them in my books. I'd say "the" (any) mob itself is standing in the way of its own progress, the appeal to authority or current consensus, sometimes expressed in just numbers, and always coming with labels. To put a point on it, the problem is that too many people can't or don't want to think for themselves or examine evidence, not how to do it for them (kinda like IMHO the problem is also that people don't configure the software they use or read manuals, not just what the best default setup or tutorial would be). Tools can help people reach maturity, but they cannot replace maturity, which to me implies such things as critical thinking and climbing that "hierarchy of disagreement", instead of getting stuck in one of the lower floors.

Last but not least, either you require each physical human to have just one root identity, or you already have lost to anyone willing to make more than one account, if not hundreds. If it can be done it will be done, if it can lead to influencing what people vote or buy, doubly so. This already happens, sure, it would just be more of that, but the more it was trusted the worse it could be abused.

If you read this far, thanks, and apologies :)


Neyer/dewdrop can, I think, be combined with neyer/respect, which I think solves the problem of sockpuppets fairly well by normalizing one's respect vector, such that making many sockpuppets does not grant someone extra influence.
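
For what it's worth, the normalization can be sketched in a few lines - this is just my reading of the idea, not the actual neyer/respect code, and all names are made up:

    # Sockpuppet resistance via normalized respect vectors (illustrative
    # only; not the actual neyer/respect implementation).
    def normalize(outgoing_respect):
        """Scale one user's outgoing respect so it always sums to 1."""
        total = sum(outgoing_respect.values())
        return {target: amount / total for target, amount in outgoing_respect.items()}

    # A real user and a spammer each control exactly one unit of influence:
    alice = normalize({"bob": 3.0, "carol": 1.0})               # bob 0.75, carol 0.25
    spammer = normalize({f"sock{i}": 1.0 for i in range(100)})  # 0.01 each

    # Splitting respect across 100 sockpuppets mints no new influence;
    # the puppets' combined weight is still the one unit the spammer had.
    assert abs(sum(spammer.values()) - 1.0) < 1e-9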

In addition: in many cases, the source of a line of reasoning is irrelevant - it would be just as useful if it were generated by a random number generator. (These lines of reasoning are, I think, those regarding things which can be known a priori.)

However, there are some claims for which it is either impossible or impractical to express/verify such an argument.

For example, I could not present to you a logical argument that shows the current temperature where I am.

If I had a thermometer with me, I could tell you, or even send you a picture of the thermometer.

But I could not prove it.

If a random data stream produced such a picture, you would have no reason to believe that the picture accurately represented the current temperature of my surroundings.

For things such as this (I think these are the things which can only be known a posteriori, and which one has not personally observed), one has to take into account the source of the claims, and so one generally has to consider how much one trusts those who are making the claim (or rather, how much you trust them in the given context - for which a single number would perhaps not be entirely sufficient, but it seems to be at least a partial solution).
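
A toy illustration of that last parenthetical - trust keyed by (person, context) rather than a single scalar per person; the names and numbers are invented:

    # Trust keyed by (person, context) instead of one number per person;
    # purely illustrative, not anything dewDrop defines.
    trust = {
        ("alice", "weather reports"): 0.9,  # she's holding the thermometer
        ("alice", "macroeconomics"):  0.3,
    }

    def trust_in(person, context, default=0.1):
        """Fall back to a low prior for contexts with no history."""
        return trust.get((person, context), default)

    print(trust_in("alice", "weather reports"))  # 0.9
    print(trust_in("alice", "cooking"))          # 0.1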


long post, i'll do my best:

> what is a "baseless claim", who gets to decide that, and how?

the idea is that everyone in the system can decide for themselves what is baseless and what is not.

every single statement, every single claim can be evaluated by all parties to decide, is this true, is this false.

there are plenty of things people have claimed publicly and then recanted - like, say, the made-up allegations of gang rape (http://dailycaller.com/2015/04/06/uva-fraternity-to-sue-roll...) - the idea is that anyone who latched onto that ahead of time and said "this is true" gets that tracked and responded to later.


> every single statement, every single claim can be evaluated by all parties to decide, is this true, is this false.

Okay, but then where does this translate to people getting "tagged" as trustworthy or not, etc.? What effect does that have on statements they made? Even if it's up to the users - that text really makes it seem like tagging people as trolls, just because X trusted people tagged them so, is considered desirable or useful.

To me there are kind of two ways to look at trust (or affinity in general): as something by which we weigh actions or opinions of another person ("if enough people I trust label X as Y, that influences how I view them"), or as a short-hand for how we feel about the individual decisions of another person ("all these people did ask for evidence when most people were ready to lynch X for Y, which increases my trust in them") - maybe using that as the first assumption when faced with new things they do or have opinions on, but generally just being a "bonus score" after the fact, if you will. In practice it's a bit of both.

But if you have to follow the chain of arguments anyway, or the reasons why someone is labeled as something, what use is the label? And if you don't follow it, and just "assume there is a good reason for the label", you get fun stuff like a 17yo girl telling a boy she's 18 like he is, them having sex on her initiative, and him being labeled a sex offender for 20 years even though she pleaded for him to be left alone. It's not "sex offender for having been lied to by a girl with stuck-up parents who was 17 when he was 18", it's just "sex offender". The same with "troll", "(not) reputable", etc. The truth is as fine-grained as the actual course of events, and while labels are sometimes useful, it can be dangerous when they develop a life of their own, so to speak, in our imagination.

> anyone who latched onto that ahead of time and said "this is true" gets that tracked and responded to later

By those who came to that conclusion. Likewise, everyone who says "hah, it's not true" gets "responded to" by everyone who insists it's true regardless. So what was gained?


> Incentivize people to admit they have erred - a public record of you apologizing shows you try to right your wrongs

But, in any objective discussion, it should not matter who is saying something, nor should his/her reputation matter. We should stick to the facts and leave "ad hominem" out of this.


Very good point, I didn't see that at first, although I'm thoroughly skeptical. My guess is, the author comes from a context where a high level of social control is customary and desired. My taste is just the opposite.


> a high level of social control is customary and desired.

a high level of social control is customary because i live in the world as it is now. but that is NOT the world i desire to live in.

the way things work now, social control is implicit and heavily enforced. by making it explicit, you can reduce the extent to which it's enforced - and by speaking in formal languages, you can show that anyone who makes a claim gets credit for being right, regardless of their station in life or how big their audience is.


Except that there are just too many areas where “being right” is in the eye of the beholder. How would you make sure that your formalism is only used in areas where “objective truth” can be established? It could just as easily be abused for crusades.


> in any objective discussion

do these _ever_ exist?

especially in politics, identity matters.

yes, of course who is making a claim has no bearing on whether it is _actually_ true, but when we evaluate claims, we evaluate them for their _likely_ truth.

the world you paint - where "who says it doesn't matter" - that is not the world we live in now. if i contradict a famous economist on TPP (http://markpneyer.me/2015/06/15/tyler-cowen-change-your-mind...) unless i have an audience, nobody cares.

the best way to "stick to facts" and "find the truth, no matter who says it" would be a system like this, which actually lets us say, provably "nobody predicted this publicly before i did"
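
fwiw, the standard construction for that kind of provable priority is a commitment: publish a hash of the prediction now, reveal the text and salt later. a generic sketch - this is not anything dewdrop specifies yet:

    # generic commit-reveal sketch for "i predicted this first";
    # not part of dewdrop itself, just the standard construction.
    import hashlib
    import secrets

    def commit(prediction):
        """publish the digest now; keep salt + text private until reveal."""
        salt = secrets.token_hex(16)
        digest = hashlib.sha256((salt + prediction).encode()).hexdigest()
        return digest, salt

    def verify(digest, salt, prediction):
        """anyone can check the early digest matches the revealed text."""
        return hashlib.sha256((salt + prediction).encode()).hexdigest() == digest

    digest, salt = commit("the economist will change their mind on TPP")
    # ...publish `digest` somewhere timestamped, reveal (salt, text) later...
    assert verify(digest, salt, "the economist will change their mind on TPP")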


That only works when the cost of verifying facts is small, which is almost never the case. That's why humans developed concepts of trust and reputation - heuristics for making evaluations that would be prohibitively expensive otherwise.


The introduction points rightly to a broad range of problems in our internetted communications. However, I highly doubt that any formal or automated system can help with that. For starters, how would you force users to use it in a consistent, or even correct way?

Even if we assume it just works, it's only good for people who want to live in a village where they're always the same person to everyone else. I don't want that. I want to live in a city where 3 blocks down the street they never even heard my name or saw my face. I want the freedom to be a “good citizen” in one circle and an absolute a-hole troll in another. And if that should violate anyone's minimum requirements for bigotry it's really none of my concern.

[Edit: Wording]


I do not think many individuals have a (personal) benefit from adhering to these rules.

Also, people like to keep implicit grey areas in their opinions. To have space for 'I did not mean it that way'. There is a small politician in every one of us ;-)


in the latest version of this, i'd like to add certainty weights, so you can express the numerical extent of your confidence in your claims.
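
roughly the shape i have in mind - hypothetical, none of this syntax is implemented:

    # hypothetical confidence-weighted claims; nothing here is
    # implemented, it's just the direction i mean.
    claims = [
        {"text": "story X is true", "confidence": 0.9, "outcome": 0},  # confidently wrong
        {"text": "claim Y holds",   "confidence": 0.6, "outcome": 1},  # hedged and right
    ]

    def brier_score(claims):
        """lower is better: confident wrong calls cost the most."""
        return sum((c["confidence"] - c["outcome"]) ** 2 for c in claims) / len(claims)

    print(brier_score(claims))  # (0.81 + 0.16) / 2 = 0.485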


Speak for yourself.


Cool idea, but to data mine human communication, software must come to understand people's language, rather than trying to get people to change their language #ddv2 DISAGREE https://github.com/neyer/dewDrop


This reminds me of a funny rule I heard some years ago (in the context of the Semantic Web): If you allow people to supply their data, they either won't do it or they'll do it wrong. #ddv2 DISAGREE https://github.com/neyer/dewDrop


This sounds similar to FOAF -- https://en.wikipedia.org/wiki/FOAF_(ontology) -- and more generally similar to concepts from the semantic web. Why not build on these existing formats?


If you put in rules to follow people simply won't. Especially trolls. That's kind of part of their definition, isn't it?

But what people forget is that you can communicate without text boxes as well. There are already social non-verbal communications in place.

Simple examples are liking something, sharing it, seeing a group post or chat message (with or without responding). Being online or not is also a way to say you want to communicate or not.

It is possible to force people into non-textbox communication by giving them buttons instead. Sometimes offering both options (text and button) is already enough for people to use the button, because it's simpler.

There are also experiments to fight trolls by only giving specified communication buttons instead of text boxes, e.g., in the online card game Hearthstone.

One problem you can't fight that way is the problem of interpretation. In the example of Hearthstone the winner of a card game might say "Thanks" hoping to end the game with a polite statement, while the opponent who lost might interpret it as "haha, look at how I beat your ass, noob".

Spamming and trolling are also still possible, because you can make anything annoying by doing it too often. E.g., someone you just added to your FB friend list goes through your history and likes every single status message, uploaded picture and shared link you posted.


This puts way too much weight on offense inherently being negative. All offense is, is something striking way outside your sensibilities, and you not even attempting to process what was said. This culture of punishing people for having differing opinions is a genuine road to hell.


author here - i was wondering why this got picked up.

i've been working on the respect matrix (https://github.com/neyer/respect), which is a simpler version that gets at the kernel of the idea present in dewdrop.


This is very insightful and a great bit of innovation. I believe that the biggest impact of this would be in internal projects and collaborative groups: managing the dialogs, controlling noise, building consensus, and shoring up what is known to be true (or false). It's very difficult to build a pyramid of knowledge in a corporation today.


This could be done right now in Twitter. People would just tweet these statements and a server would calculate public ratings and statuses.
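
A sketch of that server, borrowing the "#ddv2 VERB <target>" shape from the statements posted in this very thread - dewDrop's real grammar may differ:

    # Scan tweets for dewDrop-style statements and tally them per target.
    # The "#ddv2 VERB <target>" pattern is borrowed from this thread;
    # the project's actual grammar may differ.
    import re
    from collections import Counter

    STATEMENT = re.compile(r"#ddv2\s+(TRUST|DISTRUST|AGREE|DISAGREE)\s+(\S+)")

    def tally(tweets):
        """Count (target, verb) pairs across (author, text) tweets."""
        counts = Counter()
        for author, text in tweets:
            for verb, target in STATEMENT.findall(text):
                counts[(target, verb)] += 1
        return counts

    tweets = [
        ("alice", "neat idea #ddv2 AGREE https://github.com/neyer/dewDrop"),
        ("bob", "#ddv2 DISAGREE https://github.com/neyer/dewDrop"),
    ]
    print(tally(tweets))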



