Show HN: Radiopaper – Troll-resistant public conversations (radiopaper.com)
346 points by evnp on April 29, 2022 | 323 comments
Hi HN! We're a bootstrapped team of 4 and have been building Radiopaper for around 16 months alongside other full-time, part-time, and consulting jobs.

I wanted to highlight a couple of the unique characteristics of Radiopaper that may not be immediately apparent when browsing https://radiopaper.com/explore

* It's possible to interact with Radiopaper entirely by email and never log in interactively. The notification emails explain that if you reply to the email, your message will be published on https://radiopaper.com

* The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment. You can read more about this in our manifesto at https://radiopaper.com/about

The technical stack is a Vue/TypeScript app talking to an API backend written in Go, running on Cloud Run, with Firestore for persistence and Firebase Auth for authentication.

Email processing is handled through the Gmail API, hooked up to a Cloud Pub/Sub notification that triggers another Cloud Run service. Outbound emails go through SendGrid.
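
For anyone curious about the plumbing on the inbound side: Gmail's users.watch publishes change notifications to a Pub/Sub topic, and a push subscription POSTs them to the Cloud Run service. A rough sketch of the shape of that push handler in Go is below. To be clear, this is illustrative rather than our production code; the route, type names, and the omitted history lookup are made up for the example:

    package main

    import (
        "encoding/base64"
        "encoding/json"
        "log"
        "net/http"
        "os"
    )

    // pushEnvelope is the JSON body Cloud Pub/Sub POSTs to a push endpoint.
    type pushEnvelope struct {
        Message struct {
            Data      string `json:"data"` // base64-encoded notification payload
            MessageID string `json:"messageId"`
        } `json:"message"`
        Subscription string `json:"subscription"`
    }

    // gmailNotification is what Gmail publishes once users.watch is configured:
    // just the mailbox and a historyId to resume from.
    type gmailNotification struct {
        EmailAddress string          `json:"emailAddress"`
        HistoryID    json.RawMessage `json:"historyId"` // large integer; kept raw rather than committing to a numeric type
    }

    func handlePush(w http.ResponseWriter, r *http.Request) {
        var env pushEnvelope
        if err := json.NewDecoder(r.Body).Decode(&env); err != nil {
            http.Error(w, "bad envelope", http.StatusBadRequest)
            return
        }
        raw, err := base64.StdEncoding.DecodeString(env.Message.Data)
        if err != nil {
            http.Error(w, "bad payload", http.StatusBadRequest)
            return
        }
        var note gmailNotification
        if err := json.Unmarshal(raw, &note); err != nil {
            http.Error(w, "bad payload", http.StatusBadRequest)
            return
        }
        // A real handler would list Gmail history since the last stored historyId,
        // fetch the newly arrived message, and turn it into a Radiopaper post.
        log.Printf("gmail change for %s, historyId=%s", note.EmailAddress, note.HistoryID)
        w.WriteHeader(http.StatusNoContent) // ack so Pub/Sub doesn't redeliver
    }

    func main() {
        port := os.Getenv("PORT") // Cloud Run injects PORT
        if port == "" {
            port = "8080"
        }
        http.HandleFunc("/pubsub/push", handlePush)
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }

From there, a typical implementation lists Gmail history since the last historyId it stored in order to fetch the reply itself.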

The whole stack "scales to zero", and on days when we have a few hundred active users, we're still under the free limits of Firebase Hosting, Cloud Run & Firestore, so this has allowed us to operate for a long time without funding or revenue. Our overall burn rate is around $40/month, mostly from the smattering of other SaaS offerings we use: Sentry, Mixpanel, GitHub & SendGrid.

Dave & I discuss our tech stack in a little more detail in this conversation: https://radiopaper.com/conversation/4PsvfxLX2Q5NHLBs8nuN

The team (myself, daave, davidschaengold, youngnh) will be around to answer any questions!




About 14 years ago, I also tried to tackle the public conversation troll problem and created a forum that got quite a bit of traction. While Radiopaper takes the carrot approach ("write things people want to engage with"), I took the stick approach: "anyone can ban anyone else for a non-trivial amount of time, but you must reply to their post, and people will see that you banned them."

It was a probabilistic method of filtering out the trolls, and it worked well. Trolls wanted to ban, but they themselves became the more frequent targets of bans. Eventually, they would get frustrated with the bans and leave. The people who remained stayed civil so as not to become a clear target of a ban. There would be brief chaotic behavior, but it never lasted. Cooler heads always prevailed.

Radiopaper mentions that there is a lot of room for exploration in this space, to try different models for creating emergent social behavior and productive conversations. I hope we see more of it!


> "anyone can ban anyone else for a non-trivial amount of time, but you must reply to their post, and people will see that you banned them."

This, I like. It would be interesting to experience this system.

My own personal policy on HN is: never downvote anyone for the content of their comment, prefer replying; if downvoting then it must be for extremely poor presentation, not content. That said, I don't tend to upvote or downvote a whole lot.


On HN, it appears the downvoting is about those with the power disagreeing with a different or unpopular opinion, so it offers little deterrence to trolling. A troll can be saying something popular, or be popular themselves, while specifically targeting or bullying other users.

The troll behavior isn't punished unless a moderator steps in, by which point it has arguably become quite obvious or outrageous. So on HN, it seems that if a user has an issue with trolling, they would have to e-mail HN about it. And here again, it would likely need to be something more on the continual, obvious, or outrageous side for action to be taken.


The moderation team here (is it just dang?) is pretty effective and I think succeeds. I've been corrected for relatively minor stuff before, and it was a helpful reminder to remain civil and engaged in good faith conversation.

I guess what I'm getting at is that I have had a completely different experience, and I'm wondering what led you to feel the way that you do.


Hacker News is interesting because I think it's also pretty troll-resistant by virtue of... being boring to non-trolls. The web 1.0 design scares off a lot of the teenagers.


To troll on HN, you just need to operate within some constraints. Two criteria need to be satisfied.

First, to prevent your troll from being greyed out, you need to be in alignment with the dominant Hacker News zeitgeist. Downvotes don't matter so long as enough people counter them with upvotes.

Second, to prevent being flagged, you need to craft your message in such a way that it could possibly be interpreted as well-meaning. "Assume good faith" is a loophole to be exploited: "suck it bitch" overt hostility won't fly, but since "I'm just asking questions"-style disingenuousness often can't be differentiated from honest inquiry, it makes it through the filter.

If you can master those techniques, you can troll any unpopular outgroup on HN without limit.

You can't troll the dominant faction without getting downvoted into oblivion, so the dominant faction believes trolling doesn't happen, but as for whether trolling of outgroups is happening...

... we can't say, as that would mean doubting the good faith of those "just asking questions". Nevertheless, we can observe that certain demographics who frequently get the "just asking questions" treatment are severely underrepresented on HN.


Due to the tree structure of comments, though, those people can quickly and easily be side-stepped. I'd also suggest that the "just asking questions" behavior performs relatively poorly here because there is likely someone who is going to be able to thoroughly and resoundingly answer. The point of such behavior is to impact the audience, and it can be successful because it's easier to ask hard questions (with provocative insinuations) than it is to answer them, but it backfires badly when those questions are answered.

Out of curiosity, what 'outgroups' are you talking about? Some social groups are simply unpopular because we consider their views and perspectives to be disgusting or at least worthy of social censure; open bigots, people that don't wear shoes inside convenience stores. I ask because I've started to associate vague references to 'outgroups' with 'horrifying bigots that identify as conservative' and I wouldn't want to unduly lump whatever group you're talking about in with them.


I've deliberately used the generic term of "outgroup" because I think that the loophole present in "assume good faith" can be exploited to drive away any outgroup.

The "outgroup" could be "horrifying bigots". It could be also be "economically right-leaning thinkers in a forum where the zeitgeist leans left".

What I'm personally concerned about the most though are demographics which are underrepresented in tech and even more underrepresented on HN, most obviously women but speculatively also Black people, Hispanics, and so on.

(This thread has over 300 comments so you may not have seen my other posts, but if you want further clarification you may wish to seek them out.)

I disagree that the tree structure of comments provides a sufficient remedy for pervasive low-grade hostility, and I posit that the problem of de facto exclusion of outgroups under "assume good faith" is unsolvable.

Concretely, tropes such as women being uninterested in programming, or having it easy thanks to equal opportunity programs, or bearing responsibility when they are sexually harassed, are considered legitimate under "assume good faith" on HN and so get discussed endlessly. Such discussions have the effect of driving women away and diminishing their participation, in anonymous-optional internet forums even more than in the office where freewheeling discussion of such topics would constitute a hostile workplace and be subject to EEOC sanction. The result is similar to discriminatory exclusion from a professional society, even though the mechanism is not a formal barrier but instead the fostering of an unwelcoming environment.

To be clear, I don't believe in the slightest that this outcome is what HN moderator dang would wish for. I think he's done amazing work to guide discussion in productive directions, and HN is noticeably more civil than it was a few years ago.

But I also believe that the "assume good faith" model has severe, underappreciated, deeply consequential problems. I think that a comment moderation system like RadioPaper or Gawker/Kinja, which allows outgroups a say in how much dreck they have to tolerate to participate, holds promise for avoiding those problems, and I would like to see how closely that model can approach the ideal of open debate and participation from a different direction.

Now if only RadioPaper had a tech-topic focus page, and I could use it like HN, lobste.rs, Slashdot, etc...


Pretty much sums it up, and is another reason my involvement with HN has decreased significantly over the last year or so. This mode of operation is gross to me.


What kind of opinions are you guys seeing get trolled/downvoted out?

The only things I've seen eat dirt like that are jokes, and comments that were worded in an inflammatory way / presenting opinions as unassailable facts (even if they were right in my opinion). Mainly when people insult the Node.js ecosystem or some flash-in-the-pan new programming language

I haven't seen any Twitter-style anger over "sea lioning" *, or Reddit-style castigation over anything even slightly unpopular

* where people are having a public conversation on a public forum, but act like any attempt to engage with them is invading their private space


You can click the timestamp of a comment, and then on the page you're taken to choose to flag the comment.


Anyone else noticing this is used as a mega downvote recently?

Saw a pretty civil disagreement in a thread where half the conversation was flagged dead. There used to be a vouch option but I guess I don’t have that anymore

Turn showdead on, you’ll start seeing it all the time (it feels like)


> There used to be a vouch option but I guess I don’t have that anymore

Yeah, what gives?


Personal takes on downvoting, or even rules, are... arbitrary? Up/downvotes cannot be controlled, it's individual opinions vs thousands of randos that each have their own reason to up/downvote. I can't think of a solution that helps rank the "best" and hides the "worst" comments, I'm used to lower-volume forums myself where replies would be in order of posting and unthreaded. An optional "upvote" system existed on the side, but I've ignored that for a long time now.


Isn't that the actual intent of upvotes and downvotes? It was never supposed to be an agree or disagree button. Same on reddit, but it's a lost cause there...


I think that dang has said that it's OK to downvote something you disagree with. There is a difference between "I disagree" and "this is dumb", but there's a very broad grey area.


I always get down-voted because I strike a nerve in some people. Chill, it is the Internet. Do not allow me to upset you, seriously.


What was the scale of your community, and how long did the forum last? I would be curious to hear how such a system would perform against large numbers of bots or sockpuppet accounts. I think that techniques that work in forums may simply not scale to millions of users, when it actively affects the national conversation.

For example: every time a big political announcement is to be made or an event happens, the optimal game theoretic play is to ban them to prevent them from being able to set the narrative. And it just takes one person to do it. That's not hard at all: if you have millions of users, someone will do it!


I ran it for about 6 months iirc. It never got to the size where sockpuppets were a legitimate threat or IP bans insufficient. Occasionally people would switch wifi to ban the person who banned them, but I leveraged cookies where I could to try to detect that behavior. This was before browser fingerprinting and "evercookies" were widespread, or I probably would have used those too.

In your example, the person would have to have posted recently in order for you to ban them, since bans only worked on quasi-recent posts. But yes, if you made yourself into a target, someone would ban you, and it was accepted in the culture because it was fun. It was acceptable that everyone got banned frequently, but far less frequently than the trolls. I should probably have mentioned that all users were anonymous to each other.


You run the risk of an echo chamber with this method, though. Where like-minded groups of individuals gang up on others to keep opinions they find unpopular concealed. Which may be desirable depending on the goal of your platform or service. But, it is something to be aware you are causing.


Sounds like it worked? What happened with the project? Not enough mass of attention?


It was growing steadily but there was no clear way for me to monetize it. From a business perspective, I didn't know what I was doing. 4chan founder Chris Poole reached out to me at one point and we had a call (I was a bit star-struck!) to talk about it and some new project he was working on (I don't remember the name). But money was really tight for me and I couldn't afford to run it anymore, so I had to shut it down.


How much did it cost to run? Did you have any money coming in at all?


No I had no money coming in. I don't remember the cost tbh, but it was costing time and money, since I desperately needed to find work at the time.


> "anyone can ban anyone else for a non-trivial amount of time, but you must reply to their post, and people will see that you banned them."

Was that on a per-thread basis i.e. if I start a thread, I have the power to publicly ban other users who reply to my thread?


Yes iirc, but bans were throttled (N bans per hour), and by banning someone, you make yourself vulnerable to a ban, because you have to reply to their post.


I took a similar approach to Radiopaper: everything you say is pre-banned, and only the person you talked to has the sole discretion to lift the ban. During the ban you cannot post again to the same person on the same topic. The other party cannot reply to you without lifting the ban. Once the ban is lifted, everyone else can see it and reply to it.


Smart. Tit for tat is often a winning strategy.


OK I would like to hear a whole lot more about your project!


> Eventually, they would get frustrated

the trolls! That's genius!


I would really like to try out this platform.


A common discussion "technique" on reddit is to block the person you're arguing with right after your last message, when you feel you're done with the conversation. That way they will never be able to reply and you always get the last word. Unfortunately this feels like an amplification of that. However, I guess the dynamics will prevent the frustrating feeling when that's the game you're signing up for.

Intriguing idea anyway. Looking forward to seeing how it pans out.


This technique is not possible on Radiopaper. Once a conversation is published — which happens as soon as it has messages from both parties — neither party can block the other party or unpublish the conversation (though you can edit your own posts, including deleting them).

Either party can keep adding messages to the conversation in any order, but only a back-and-forth is considered an update to the conversation as a whole, for the purposes of sending it to the top of the feed. That way, if a conversation turns sour, one person only needs to stop replying, and the conversation will gradually sink down the list into oblivion, even if new messages continue to be added by the ignored party.


How is that different from the parent-comment's point?

I mean, the parent-comment was saying that folks could get the last-word by replying-then-blocking. It sounds like folks on your platform could get the last-word by simply not approving the other-side's response.

To avoid one side getting a last-word in a feud, it'd seem like you'd need to ensure that both sides could eliminate the entire conversation should they not be satisfied with its ultimate conclusion, such that there'd be no last-words in any feuds as there wouldn't be any (published) feuds. Short of that, it'd seem like one party could end up getting in a last-word.


The difference seems to be that the other person gets the last word and anybody who actively seeks out the conversation will see it, it just isn't promoted? Because if I understand correctly, once someone has been approved once in a RadioPaper conversation, their subsequent posts in that conversation cannot be hidden.


> That way, if a conversation turns sour, one person only needs to stop replying,

Without actually trying it, this seems like a solid design choice.


The mods there love doing this. "Banned. Reason: Hate speech"

"Why was I banned? I didn't say any hate speech. I just stated my favorite movie."

"I hate that movie, thus it is hate speech because it promotes hate (something that I hate)." or "Read the rules, figure it out"

Sometimes then the mods ironically send actual hate speech in their reply, then block you. Or you reply to them, and replies to mods automatically unlock the nuclear admin ban missiles "for harassment." God forbid you quote their literal message in a report which then triggers the automated word filter bans which are disabled for some reason when it's mod messages.

I'm not really surprised though, since


Why does reddit leave this anti-social loophole exploit open? Is it good for engagement? Seems like a win for the trolls and a net loss in terms of QoL for the good actors.


Blocking is sometimes necessary when someone is behaving as an abusive troll, or switches from normal conversation to hurling personal abuse.

Yes, it can be abused by people who think that having the last word is "winning". One way to address that might be to add "Fred blocked Sam" to a thread in which both Fred and Sam previously posted, so everyone knows what happened.


The feature is more commonly abused (on reddit) as an aggressive tactic by abusers. The person being "blocked" tends to be the target of abuse. This is because of some really bizarre oversights in how the feature was implemented:

Once you have been blocked, you cannot block back (because reddit no longer shows the user who blocked you as existing, while you're logged in). So if you want to harass someone, the best thing to do is to create an account and block them immediately.

Not only can a blocked person not respond to a blocker's comment or post, but they also cannot respond to any descendant of said post. Thus, on a small subreddit, or on popular posts in a large one, it's possible for a blocking peer to essentially "ban" you from the conversation at large.

By repeating this technique in the same subreddit, it's possible to gradually manufacture false consensus by preemptively silencing folks who would disagree with you. This user did an excellent study/proof when the new feature was launched:

https://www.reddit.com/r/TheoryOfReddit/comments/sdcsx3/test...


How about only a silent mute but no block? Then you can protect yourself if you don't want to see it, but to use it as a weapon, you have to convince them you've really muted them - and they might not believe you, so it's weaker. Just like old Usenet and mailing lists' *PLONK* message.


The downvotes on this comment (at least at the moment) are astounding.


It's not a "loophole exploit", reddit simply gives the user arbitrary control of who they want to block.


Blocking is fine. Nobody should be forced to read anyone else's comments.

However, if a block also prevents the blocked person from replying to the blocker's replies, that creates terrible incentives for abuse. Whoever implemented it was only thinking about blocks used against genuine bad actors, with disregard to how blocks could be (and empirically are) used by bad actors.

The same thing happens all the time on Twitter, which implemented the same "feature".

As strix_varius pointed out in a parallel subthread, this creates a perception of false consensus. It may be, more than any other thing, what's contributed to the hyper-politicization of those platforms. The platforms themselves enforce that any sufficiently engaged user, through enough defensive and offensive blocking, will end up in an echo chamber.


Allowing people to decide who they're going to engage with is freedom of association. Platforms which allow unlimited brigading prevent unpopular subcommunities from holding conversations amongst themselves.

Hyper-polarization is not diminished when those who disapprove of a community descend en masse to interject and disrupt. Platforms which don't allow subcommunities to wall themselves off degrade the ability of those subcommunities to use the platform, and if the volume of disruptive posts is high enough, essentially deny them use of the platform by allowing them to be shouted down.

This applies no matter who the subcommunities are or what politics they espouse.


Communities can wall themselves off and be very selective in who they let in. That's not the problem.

Block-prevents-replies allows bad actors to unilaterally silence opposition so that nobody else can see what the opposition's arguments are. *In a community that appears to be public.*

daenz's system is interesting (anyone can ban anyone, but to ban them you must reply to the comment you're banning them for, and others can ban you in return), and it might converge to stability, but each new troll will disrupt some discussions until they lose interest in being repeatedly counter-banned.

I think any system to exclude bad actors and prevent brigading needs to be designed some other way, so that silencing dissenters requires some consensus and can't be done unilaterally or by a small cabal.


I would argue that it is, in fact, an exploit. Here's a proof in the wild:

https://www.reddit.com/r/TheoryOfReddit/comments/sdcsx3/test...


It used to be Reddit would let users block trolls, and then the user would never have to deal with seeing the troll posts ever again, even if they continued to reply.

The new way - blocking someone and restricting that person's ability to reply - is an aggressive UX.


Blocking could be filtering out the replies rather than preventing a reply.


Also, this forces the target of abuse to decide between publishing the message - giving the abuser a platform - versus not publishing it, suffering the abuse privately, and nobody knowing the person is being abusive (aka Missing Stair.)


It's worse than that, because you can block an unlimited number of people, in advance. So you just identify a group of people that would likely respond negatively to your point, and make a post that can't be responded to by that group of people.


When this first happened to me I assumed it was a reddit error until I looked at the response code.


Same. It was extremely frustrating because I had written a long response with several references to support my argument and I was just unable to post it, getting a generic "Something went wrong" error message. I thought it was a temporary error and saved the comment but after trying several hours later getting the same thing I realized they had actually blocked me.


Maybe it’s worth creating a throwaway account and posting the message that way to make sure your point has been made.


I've honestly considered it, but I'm afraid they will just say "lol you created a new account just to say that? pathetic" or something.


I think if you address the coward blockage from the get go you’ll keep the upper hand.


> The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

My mind is blown at how simple and elegant this solution is!

Great work there.


It is simple and I do like it, but I also worry that the counterparty can simply not accept your comment and always have the last word.

So there is a reduced incentive to invest your time in writing a reply.


Once a conversation is published, you can no longer choose not to accept your counterparty's messages. However, if you choose not to respond to further messages in that conversation, your counterparty's additional messages will be considered "post-scripts," which do not cause a conversation to rise to the top of the Explore page. The effect is that, while no user can claim the last word from a counterparty, you can make it unlikely for a conversation to be seen by simply allowing the other user to have the last word.


I think it will hide discourse rather than trolls since, I strongly believe, people just wouldn't approve things they disagreed with or that go against their beliefs. I don't see the value in a public conversation that allows one person to silence another (well, besides spam/trolls of course, but "troll" has mostly lost its meaning).


Reminds me of how "letters to the editor" have worked for ages. A good faith editor picks out the best responses both praising and criticizing.


I think it's extremely important to not assume good faith/lack of bias, with anything of importance. I would prefer the option to see all responses, with "accepted" responses, from the editor, being the default view.


I don't see why that's a problem. They always can start their own conversation about the same topic if I, the conversation starter, don't find their contribution to my conversation valuable.


Wouldn't that strengthen the echo-chamber effect? If the original poster supports position X and does not allow anyone who supports position Y in to the conversation, then a position Y person will eventually start another thread and not allow anyone from position X to reply. So now you have two threads: one for position X supporters and one for position Y supporters, with no cross-talk between the two.

Unless, of course, there's some kind of mechanism that gives threads with higher engagement more prominence? That might create an incentive to have more inclusive conversations. Although that might incentivise trolling too...

EDIT: Perhaps combine the above with a reputation system, where you can see how ban-happy an original poster is. Since people don't like to waste their effort writing a reply that just gets ignored, ban-happy posters would be penalised by lack of engagement. Then platform provided moderation could just become a kind of 'meta-moderation' - basically just banning people who try to game the system (e.g. posting threads saying 'please reply so I can get my ban percentage down').


This could just limit the types of conversations that flourish on the service.

If neither party is adult enough to participate, then perhaps twitter and facebook aren’t the right places to have controversial discourse.

I could however see it leading to echo chamber threads, or sham threads where one party is deliberately providing a weak counter-argument, a bit like a Fox News interview with a right-wing politician.


Good point. Perhaps they could implement a feature that indicates there has been a response submitted but not yet approved. Possibly it could include when it was sent, who the reply is from, whether the approver has seen it (and when they saw it). That would at least make it apparent when someone is withholding approvals for an unreasonable time.


The "seen" feature on whatsapp/facebook is the most infuriating feature that ever existed IMHO.

It's just so frustrating to have no replies when you know the message has been read.

HN is free of such shenanigans and it's been so far the best experience I've had on the internet.


Sometimes I grab my daughter's phone because she left it there, and the most recent imessage notification shows up as soon as it detects being picked up. Most commonly seen "most recent message" is "Bitch leavin me on read angry emoji..."


I would prefer transparency, with the ability to see all submitted replies to a comment, if I chose to, to help bring to light any bias/shenanigans.


The system could provide a higher friction way of accessing the unaccepted comment. This allows for audits (so as to compute the good-faithedness of the deciding counterparty), but still keeps trolls out of the limelight.


Exactly my thought - I reply here all the time and simply ignore responses half or more of the time.


Isn't this how comments work on Gawker/Kinja properties?

On Gawker/Kinja if you've been "followed" by a power user, your comment shows up right away. If not, your comment goes into "the greys", which are hard to see, until either the person you replied to replies or stars your comment.

I've spent a lot of time reading Jezebel and TheRoot over the years — they're a balm after experiencing the single-silo HN. The Gawker properties aren't what they used to be, but this commenting mechanism has its advantages. It truly defangs trolls. Jezebel and TheRoot could never operate without troll protections — there are so many disturbed characters hanging around trying the most vile stunts, you'd never manage to have a conversation proceed otherwise.

There's a significant flaw, though: the more your interlocutor disagrees with your reply, the less likely it is that your comment will get approved. This doesn't apply to everyone on Gawker properties because there are lots of approved posters. I don't think the chained comment system would be that great unless it's supplemented by a way of approving/de-approving posters as well.


The key question you have to ask yourself: If I was a bad actor, how would I take advantage of this?

Sockpuppets.


Obviously having multiple accounts can aid in some trolling efforts, but I think most trolls aren't happy merely trolling themselves on threads that other people might later see or participate in.


There's a whole subgenre of "everyone clapped" fake posts that have littered a lot of the history of Reddit. They seem to have been written in exactly that "might see later" spirit.

It's probably trivial to gain a mere two accounts on this service to post a fake conversation of two sockpuppets attempting to "outwoke" each other. You can probably already see the arc of such a fake post in your mind.


There's a fair bit of "false flag wokeness" floating around already.


Exactly, possibly the main function of troll farms is to attack opposing voices, usually to drive them from the platform entirely by overwhelming them. Creating their own echo chamber doesn’t have the desired effect. This new service is like a public version of direct messaging on instagram/facebook/similar, which all use a similar blocked-until accepted approach.


I believe that a proportion of trolls (for want of a better word) are targeting anyone with an audience. They're not trying to convince that person, they're using them as a stepping stone to reach their audience. If they can get 1 person in 1000 on to the "d0 yOuR r3se4rch" youtube train... well eventually you end up with antivaxx. Or flat earth.


As I understand it, every message needs a counter-message. A sock puppet wouldn't be able to advance a thread unless the principal user engages with the sock puppet also. Right?


If I'm reading the design correctly (and I might not be), "counter-message" means "parent message". So puppet1 goes in with the reasonable response, and puppet2 replies to that with the unreasonable response, and we're off to the races.

(I emphasize I could have the wrong end of the stick about the design).


not clear to me either, but yeah if it's just about an initial response to OP, I see how that could go down


Had the exact same « owww that’s smart » instant reaction when reading that part. Congratulations to the team.

Edit: since the founders are reading this, I had an idea once about what an anti-twitter would be like and came to this idea: no message under 1k characters. Do what you want with that idea, I give it to you.


Information is not correlated with word count. It would result in messages with low signal to noise ratio that you would have to skim through.


I don't think people will spend the time to inflate the number of words on every short message they want to post. They'll just be lazy and post elsewhere.


AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA


Then the rules would require that the text doesn't contain repetition (i.e. doesn't compress well), so people would mash random letters on the keyboard. Then the rules would require that the text contains a minimum percentage of dictionary words, so people would start copy-pasting from Wikipedia articles. Then the site would implement some sort of plagiarism detector, so people would start using GPT-3 to rewrite the articles.


Why would people spend that much energy if they've got basically so little to say?


You need 957 more A's. Sorry.


It's definitely interesting, but it also comes with what seems like a significant tradeoff whose effects may be difficult to anticipate: initial posters are given a big advantage in terms of control over the conversation.

But maybe it won't be such a big deal in practice: in a way, it's kind of like enforcing a certain amount of politeness when conversing on someone else's turf; you've entered their 'house' and are expected to play by their rules while there.

On the other hand, when it comes to debate of any kind, there always has to be one party who gets priority over the other and can tailor the appearance of the outcome of the debate to a certain extent.

TBH, I'm mostly very curious what kind of behavioral dynamics would emerge around this—it's probably not possible to infer too much in the abstract. In any case, an interesting idea.


It will be an enormous deal in practice. Most people are not attracted to the idea of their conversations being shut off by people speaking in bad faith, but people interested in arguing in bad faith are inherently attracted to these designs.


But how do you avoid astroturfing? Seems like someone with two accounts (or a small cabal) could get anything they want published with very low friction.

Are you considering a strongly-bound 1-account-per-person model with verified accounts?


I believe the comment needs to be accepted/replied to by the person who wrote the parent comment.

It doesn't stop you from posting your own stuff at top level, just stops flames, I suppose?


Many people in this thread are assuming that the site can be used for "open-forum" discussions like most other social media sites, with whomever replying to whomever, when in fact it's more like a public 1-on-1 discussion board.


Though I imagine if the moderation doesn’t happen very quickly, there will often be a lot of very similar comments being submitted because they don’t see each other. And the moderator/OP will have to decide between rejecting the “duplicates” (which can feel problematic) or accepting them all (which leads to a lot of redundant comments being published). That in turn can create peer pressure to not moderate too slowly. Not sure I’d like those dynamics.


I think this might be a feature? Differentiated, insightful comments are the ones that yield the most interesting, differentiated replies.

If you get enough overhead from the platform from writing boilerplate comments, maybe you just stop?


Well, I guess it could end up creating a culture of differentiated commenting, but at Reddit/Twitter scale I’d be a bit skeptical.

One other worry is that those commentees who are willing to spend significant time moderating the comments they receive may not be exactly those who care about quality. To be honest, I wouldn’t want to have to moderate the replies I get on HN. :)


Selecting, from critical comments, only the easiest to dunk on would seem to be likely, and is a tactic suggested by the site name (after all, it's a well-known technique used in screening callers to radio shows...)


Good point. I’d also expect speculation or accusation of suppressing certain replies to become a topic of discussion. Factions will accumulate in separate subthreads where they mutually approve their respective positions, creating mini filter bubbles.


This is actually supported by the first — completely unrelated! - conversation I read on the platform.

“Whenever I write anything publicly I risk being pulled into the maelstrom, which I call the epistemological woodchipper. It's a risk.”


The idea is promising, and the design is good, too.

Hoping this works out!

In the end, I see this as a feature for discussions / subreddits - not exactly a business. But who knows ¯\_(ツ)_/¯


I don't like when I post something and it doesn't immediately show up, like why comment at all

Granted I don't expect to write something bad but yeah, I just have this gut reaction I did something bad, like downvotes

Like a "shadow ban"


It’s a neat idea but I’d worry about whether it scales at all beyond a hundred or so followers. At that point it’s offloading the spam filter onto the author, who will end up not publishing any replies at all simply due to the effort necessary.


The ability of the OP to accept or ignore replies is in addition to, not instead of, regular spam filtering techniques. Certainly we'll want to automatically block bots and other bad actors that violate our policies to reduce the burden on users.


I found this exchange super helpful in understanding what Radiopaper is: https://radiopaper.com/conversation/K5opCylqOizaZ0HQCB5Q/ots...

This feels like something that would be hard to get traction with, but hats off on trying to do something different and thinking through what social could be. Also, love the design and UX - great stuff!


Thanks for pointing to that. I thought this part in particular really helped clarify how this is supposed to work:

As we were building Radiopaper, we developed a concept we call "social skeuomorphism." ... Social skeuomorphism is the idea that a social network should be designed to resemble the best social events in the non-digital world. This is tricky, because internet communication differs from in-person communication in many dimensions, and it's not immediately obvious which of those dimensions are the important ones. In a few of these dimensions internet communication might even be superior to in-person communication.

...But parties have a number of built-in safeguards to prevent this phenomenon from becoming toxic. If A approaches B at a party and begins speaking, B is expected to acknowledge the approach, but is free to leave the conversation quickly if desired. If A then follows B, refusing to terminate the conversation, A is being rude, and publicly so. At parties this is often sufficient to ensure that no one has to engage in long, unwanted conversations, or at least not too often.

Maybe this could be a complement to other social sites like HN or Reddit. Sometimes you see two people start to go back and forth in a subthread. Maybe that could be a cue to "take it to Radiopaper".


This looks like a bloody brilliant idea thus far. I'm curious as to how you guys approach implementing some of the traditional social media experiences (likes, follows, retweets, etc). There are so many things to consider.. I was telling my BIL, who is a psychiatrist, that we as devs don't do enough to include other voices from other experts in the dev process.. It seems like many of the traditional social media functions could be more harmful than helpful, both individually and as an unhealthy group dynamic. Perhaps simple changes can make big impacts, such as approving people to follow you when they request it, and the option of turning off new follow requests altogether.

It's interesting I see this project, because I've long thought the problem with social media is that everyone can talk to everyone and anyone at any time. Like everyone is in a giant stadium with access to the PA to announce themselves... and everyone in the stadium can use it all at the same time. Real life doesn't work this way. While unfettered communication is nice, it is also overwhelming and positive and healthy communication needs filters and topics and such.

I'll definitely be watching this app and hopefully using it. There is a lot of opportunity here. I really enjoy the clean and simple interface. Personally, I think this is probably the best social media app and idea I've seen come through HN in a long time.

A 'contact me' button which quick copies a link would be a nice addition.


I would suggest that retweets should not be implemented. They’re the main vector for toxic discussion amplification, and they add nothing to a conversation. It’s a lazy way of participating in a mob.


I concur.


I hope I'm not missing something, but am I understanding it correctly: Messages between two people are publicly viewable, and then it's possible for other people to comment on them?

I'm having a hard time wrapping my head around a general use case then. It seems like a neat idea for discussions or debates between two people which have public value.. but most of the time when I'm writing directly to someone, why would I want to have it shared? Looking at other social networks, most of the time you want to share something with your social circle and generate a conversation which has more than two people or you message them directly. Am I missing something?


I have the same thoughts.

I think this is for people who like to have other people read what they wrote (as a sort of validation), but don't like having people reply with no-effort content. So the userbase would be people who like having intelligent debates in the open, but in a selective way (not with people they deem "not fit to debate with")


A few obvious gaps to highlight that we're actively working on:

* The OAuth scopes for login with Twitter are overly broad. This appears to be a limitation of Firebase Auth's SDK: it only supports the Twitter OAuth 1.0 API, whereas Twitter only provides more fine-grained scopes if we use OAuth 2. We're looking into whether we can make changes to Firebase Auth to contribute upstream that would let us request email addresses without permission to view your timeline, followers, etc.

* There are a few quirks with our UX on mobile devices, and Comments are not visible on mobile. We've focused on the desktop experience for reading and writing at first, but a lot of our users do come to us on mobile, so making this better is a high priority.

* We're missing a lot of the standard social features you might expect from an app like this: following, reactions, @-mentions, topics, search, etc. These and many others are on our roadmap, but as a bootstrapped team trying to maintain a high quality bar, we're moving on them pretty slowly.


> search, etc

Without search I can't see how anyone could make much use of it.


I've been browsing the site a little bit and I've got a question. Who is this or what is this for? I couldn't think of a personal use case for it so I'm asking out of curiosity.


Just wanted to comment and say that the design is a breath of fresh air. It feels in some ways similar to the startups of 10 years ago while still feeling modern and readable.

I feel like, by avoiding some of the modern UI/UX landing page trends, it feels more authentic when I land there. I like it.


Thank you — we wanted the design to communicate that Radiopaper is offering something quite different. "A breath of fresh air" is exactly what we hoped for!


I just signed up. It looks really nice.

I would request some sort of topic subdivision, like subreddits or hashtags or something similar.

It is hard to know what a conversation is about or even how to post about a topic of interest.

For example, it would be interesting to see a discussion about being on HN and the impact on the user base


Doesn’t this also allow conversation censorship?

- don’t like someone’s response - simple accept it and no one will ever see the potential counter argument?


* of course I meant to type don’t accept


This seems like an excellent mechanism you have here, and you can tell from the current batch of conversations that you've already achieved good participation. It feels a bit odd and voyeuristic reading it though, almost like I've walked into a high-class party where I don't belong. This may be entirely intentional, and admittedly the sophistication of the conversations is beyond any forum I've seen before. (This might say more about me than your project.) But I do wonder: is there any place for low-brow talk here? Or even quick back-and-forth banter? I just wonder if it's even possible, or if what I'm seeing is simply a result of early testing from your inner circle (who seem waaay classier than my friends, haha!). Even if it doesn't fill every role it seems like a success regardless. Good job!


Thanks for taking a look! We're excited about the mechanism, and about our early enthusiastic users.

You're right that the conversations going on now are mostly a bit high-brow or formal. This is mostly an accidental feature of the way we rolled out the site to people we thought would like it. Many of them are professional writers!

But absolutely, Radiopaper is also a good place for banter or conversations about whatever (keeping in mind our content policy, naturally: radiopaper.com/policies). Sports gossip, politics, whatever — go for it!


> more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

So this platform proposes that you must make your comment agreeable to the other party, which has its benefits: civil discourse, pleasant rebuttals, less polarising discussion. However, it gives the power of censorship to each party, giving a false view of the discussion.

What really makes a system resistant to trolling is "reputation"; this can be in the form of points or even some kind of verification system. The best platforms respect this, and require you to earn the right to be heard through time and dedication to the community. Points are a way of verifying that the user has had respectful discussion in the past, as users should only earn points through respectful discussion.


Hi Evan! Wonderful idea, and a beautiful implementation.

I conceived this idea independently some years ago (just the core of it: that all replies must be approved by their immediate parent) and am delighted to see it realized!

For a long time, I've been bewildered at the lack of innovation in online discussion. It's time for something new! There are so many ideas we haven't tried — see here for a list of just a few I've been thinking about:

    https://news.ycombinator.com/item?id=28231904

    > Requiring each comment to be approved by the author of its immediate parent
    > (so you can disagree with each other, but you have to be kind)
Have you considered any other ideas for helping people engage in meaningful conversations?

I wish you the best of success! Let a thousand flowers bloom.


Hi Ping! Isn't this idea slightly the inverse of the idea you mentioned here?

That is, this system requires comments to be approved by their addressees.

Suppose you write something to me, and I have something to say in return.

In Radiopaper, your comment to me would only be published if I (by posting my reply) consent to having your comment appear.

In your system, my reply to you would only be published if you (as the original author) consent to having my comment appear.

Or have I misunderstood?

(I agree with you that it would be cool to see more experiments about how to make online discussions work better.)


Oh, I didn't realize it worked that way in RadioPaper. Does that mean all the leaf comments are always hidden? That would seem a bit strange. If you asked me something, and I agreed, that wouldn't necessarily warrant a further reply — but then no one would ever see my expression of agreement?


This is really interesting, one question that comes to mind is whether popular discussions will become stressful for the original two people as others create new threads off certain comments. It seems that you would either have to ignore those new threads or inevitably end up having to keep track of a potentially deeply branching, difficult to follow discussion - having to make the same points in parallel to multiple people.

Some way to fan in the conversation and allow multiple participants as points converge could solve this but it sounds like it would be rather complex to get right.


Thanks for this question. It's one we've given some thought to. You're absolutely right that the ability to make new threads allows for the possibility of heavily branched conversations, which could be tricky to follow in their entirety.

Our thinking about this was that we want to make it as easy as possible to follow a conversation between two people, however long and involved the conversation gets. So, while conversations can branch endlessly, it's always easy to follow what any two people are saying to each other.

In the longer term, we might also add some features around seeing an overall conversation branching structure, but it's not on the short-term roadmap.


> * The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

Does this mean that if someone doesn’t agree with your comment they can ignore it entirely, and nobody else in the conversation will ever know the comment was made? If that’s the case then wouldn’t it be possible to manufacture a sense of agreement (in the thread) by dismissing all unfavorable comments? Edit: Also, if enough people of the same ideological leaning (or an army of bots) did this and engaged only with themselves, it could give the illusion that the general sentiment of users of your service is aligned in some belief or system of ideals or w.e? Idk, maybe I’m just spewing nonsense.. I like the overall idea and I wish your team the best of luck.


I can't believe this would be described as a selling point. Negative replies are necessary for the usefulness of any community.

Unless the intent is just to maximally echo chamber the network, protect con artists from unfavorable replies and so on. Until someone takes a screenshot of your post, and posts their reply as a new root, and the point has been defeated...


It makes each thread owner a moderator, which is kind of nice. Just like forums and social networks, thread authors may gain a reputation for one sided echo chambers, or comprehensive and educational conversation. Radiopaper doesn't force you to censor other opinions, or encourage you to do so.


I hadn’t considered the reputation bit. I do think that if that mechanism were to come into play, it would only be knowable by people who frequently engage on the platform. Do you think this would pose issues for new users (assuming some threshold of active users is met and there are a ton of conversations happening)?


Frankly, I'm so tired of the pervasive low-grade harassment and de facto exclusion of disenfranchised communities on HN, that I've been looking for an alternative for a long time. Innovation is desperately needed in this space.

I've experienced a similar commenting system to RadioPaper in use on the Gawker properties, and it's not bad. It is siloed. But just because there's a single silo at HN doesn't mean everybody sticks around in that silo.


> Frankly, I'm so tired of the pervasive low-grade harassment and de facto exclusion of disenfranchised communities on HN

You mean exclusion of moderate to right leaning opinions right? That's my experience. If not (or even if so), I've seen lots of discussion on HN where everyone thinks their pet viewpoint is getting unduly picked on. That's probably as good a sign of neutrality as one can find (not that I think HN reader moderation is neutral)

Edit: maybe I'm misunderstanding the replies, but my point is that many people seem to feel like the overall forum sentiment is against them. I'm not saying HN has a demonstrable left-wing bias. People seem to be trying to refute that for some reason


I’ve actually seen the opposite ime. Anecdotal, but I’ve told my wife at least 3-4 times about how shocked I’ve been that conservative talking points have been received well on this platform. The wuhan lab leak theory, commentary on undue censorship, predatory DIB/HR practices, moving out of coastal areas to seek more sane people groups, and so on. I’m often surprised that there are centrists and right leaning people on this platform considering that the current media narrative would insist that we are all some degree of leftist.


> wuhan lab leak theory

There are apolitical reasons for this theory. I'm not conservative and found it to be in the realm of plausibility. For what it's worth, I've entertained it before it made headlines.


Would you consider yourself left-leaning by any chance? I wonder if it's a perspective thing, because from what I've seen discussions seem pretty balanced towards both sides. I saw the wuhan lab theory post that I think you're talking about. But I've also seen a ton of posts and comments recently calling for increased land/property taxes


> wuhan lab theory... calling for increased land/property taxes

I feel like these are probably on two different levels, but given that you called it a theory and not a conspiracy, I think you'll also probably disagree.


I haven't got much opinion or knowledge on the matter, but the main discussion [1] about the Wuhan Lab here was pretty civil. It wasn't overly conspiratorial and there were a good few people in opposition.

You seem to imply it is some crazy far right conspiracy, but that wasn't the impression I got from skimming the Wikipedia article [2]. There's a good few serious people who consider it possible and while I'm sure some distorted version of it is weaponized by politicians, I don't see why that's reason to discard it.

[1] (https://news.ycombinator.com/item?id=29901824)

[2] (https://en.wikipedia.org/wiki/COVID-19_lab_leak_theory)


If I'm thinking about the same post that the commenter was thinking of [1][2], the article was not presenting the theory as correct, but was discussing issues with the discourse around the theory.

Edit: I posted the wrong vanityfair link, fixed. Also regarding my terminology, "conspiracy" is a bit of a loaded term, and given the information in the vanityfair article I decided to keep my terminology more neutral

[1]: https://www.vanityfair.com/news/2022/03/the-virus-hunting-no...

[2]: https://news.ycombinator.com/item?id=30870435


No friend, on the contrary. I’d describe myself as a theonomic libertarian. A religious extremist by the standards of most people.


That's an interesting combination for sure, I'll have to look more into it!


> The wuhan lab leak theory, commentary on undue censorship, predatory DIB/HR practices, moving out of costal areas to seek more sane people groups, and so on.

Do you feel that an intelligent, principled person cannot make a valid argument opposing you and your wife for any of these phenomena?


Well, I would like it if I could participate in a forum where "women are genetically predisposed to be less ambitious than men" is not a constant theme of debate.

Having that debate drives away women. I don't know if that's the kind of "right-leaning opinion" you're talking about, but if it is, I would rather that we were divided into two silos: one silo where people talk about how women are not ambitious, and one silo with women.


This is rarely possible on the internet but the ideal forum for me would be one where people could discuss the details of genetic predispositions with the understanding that it generally shouldn't change how people should behave.

Like, say one day that a very well researched finding comes out and proves that men are predisposed to be more violent, in reference to higher incarceration rates. Should it change how you treat other people, once it has been reasonably proven as fact? I think not.

The choice that someone makes to discriminate against a group based on statistical realities does not depend in the slightest on whether those facts are true. People easily pretend they are true if they are not, and if they are true it doesn't justify discrimination in the end either way!

Unfortunately these arguments seem to often end up just being in service of a goal like justifying discrimination, and when they aren't, certain others will show up to accuse the person of that even if not true. Sigh.


I think there are plenty of forums where vulnerable people are shielded from uncomfortable ideas. And there are plenty where open exploration of ideas happens. The internet is decentralized enough that everyone can be satisfied already. I don't see what needs fixing.

I wonder, are there predominantly female areas of society where members often worry about how their rules drive away men? Are nurses afraid to talk about men's propensity to not be caring in case it scares them off?


In the concrete context of the post you're replying to, you seem to be saying that women are 'vulnerable people'? I'd suggest picking a definition of 'vulnerable people' that doesn't mean pushing away half the world's population.

(I'm also a little bemused at your implication of what an 'uncomfortable idea' is, as well as the implication that Hacker News is the right place for this, and the assertion that you can fragment social networks to satisfy people while ignoring the effects of critical mass - but there's too much to unpack)

Your second paragraph is irrelevant in this thread (which discusses HN specifically).


Uncomfortable ideas, for vulnerable people, I guess would be the kinds of ideas that make them vulnerable in the first place. E.g., that same-sex relationships are against God or against the natural order of things, or that women should naturally be subservient to men, or only have themselves to blame for sexual violence. I've heard that there are forums on the Internet where such ideas are explored freely.

Edit: A forum that bans discussions like that, will probably be seen as left-leaning by some.


That's just obscure political correctness. I'd say it's far more often people who lack emotional maturity or self confidence and have any of all sorts of common insecurities that they don't want to be reminded of. Imagine being called an idiot when you actually have a low IQ? It hurts and it's far more common than being gay in Pakistan or whatever.


The person I was replying to seemed to want protected forums where women could be shielded from uncomfortable ideas. I was just saying that we already have those, as well as not those. Everyone's satisfied.

As for network effects, small forums still exist and serve people fine. You don't actually have to be on Twitter or HN to interact with people that you prefer.


A lot of the most provocative conversations I've read have been in siloed forums, where people are not constantly getting shouted down and interrupted and can actually string together coherent lines of thought.

I would like it very much if Radiopaper's approach, or some variant of it, made it possible to actually have a more enriching exploration of ideas.


> (not that I think HN reader moderation is neutral)

It's not neutral, but it doesn't lean "left" or "right" per se. What I see is a preference for well-argued or immediately obvious points, often with citations, that engage fully with whatever they are a response to. Dismissals, emotive arguments, bold claims made without citations ... these do not fare well.


Perhaps there's just a big overlap between anti-science opinion and right-leaning opinions, these days, and a lot of readers on HN don't care much for the former?

Strictly speaking though, I think pro-science or anti-science is independent of left vs right.

Edit: I think I've only used the downvote feature twice on HN, and both times it was for the promotion of a ridiculous Trump-related conspiracy theory. Consider me left-wing biased, which I certainly am these days.


> pro-science or anti-science

This is an absurd characterization that is only used by people who don't understand the difference between science and religion and think that calling someone "anti-science" is a way to avoid debate about their religious views. It's as embarrassing as it is offensive.


They didn't say anything about religion - there are many non-religious anti-science views on the right, like climate denial and wild conspiracy theories. Plus, religious views that guide public policy and contradict science are fair to criticize.


I might be misreading the comment you're responding to, but I think it was describing science as a religion.

The dichotomy of "pro-science" and "anti-science" elevates science to an almost religious stature. Science is fallible, it's very hard to do and just as hard to interpret. It's also not the right tool for a lot of issues.

Increasingly people have been adopting being pro-science as a part of their identity as if science held some ultimate, unquestionable truth. To me it is akin to faith, it's an appeal to a higher authority that is used to shut down debate and paint some opinions as moral and others as immoral.


Yes exactly, thank you for putting it more clearly than I did


I'd be happy to debate science and religion, but I already did recently, and it would be off topic here. https://news.ycombinator.com/item?id=30826921

I suppose pro- or anti-religion isn't completely orthogonal to left vs right politics. Support for a dominant religion is more likely to be found on the right, as a means of social control.

A forum could be seen as left-leaning simply by having a policy that all contributors are equal and should be treated with respect. Desire for equality (of wealth and power) is pretty much the defining left-wing characteristic.


I wonder how often you have made comments that others would consider "low grade harassment" and exclusionary without realizing it, and thus would have those comments removed by someone, somewhere, if they had their way. It's a lot more frequent than you might realize.

One thing I fail to understand about this viewpoint is why the relationship between the removal of replies and comments and "low-grade harassment and de facto exclusion" isn't more prominent. Someone might say something one doesn't like, but with just a reply, one has very little real power over that person.

But if they have the power to delete what they say, it's much easier to reach the level of harassment and exclusion. There isn't a much stronger tool for harassment and exclusion than moderation tools used improperly.


> I wonder how often you have made comments that others would consider "low grade harassment" and exclusionary

I'm quite certain that I have done so on this very thread. I'm seeking an environment (a private forum) where certain lines of debate are less ubiquitous (because I believe those lines of debate drive away people I would like to engage with). Some people are surely interpreting that as exclusionary and discriminatory, and perhaps some see it as low-grade-harassment trying to get them to leave.

There's an important distinction to be made between immutable characteristics and identity tied to ideas, but that doesn't mean they aren't experiencing those feelings.

> There isn't a much stronger tool for harassment and exclusion than moderation tools used improperly.

A forum's promise of open exploration of ideas is worthless for those people who have left.

What I like about the RadioPaper approach is that it enhances freedom of association. Perhaps in practice that may actually lead to a more enriching exchange of ideas than a "free-speech" shouting contest.

(I still think that the comment-approval mechanism is best used to audition new contributors for unlimited participation, though.)


> “Frankly, I'm so tired of the pervasive low-grade harassment and de facto exclusion of disenfranchised communities on HN…”

Would you care to elaborate? I am genuinely interested, in part because of curiosity and also because I haven’t experienced this personally.


I think Yishan (ex-CEO of Reddit) explains it well: https://threadreaderapp.com/thread/1514938507407421440.html

Everyone feels like they are targeted and disenfranchised, as a side effect of the nature of social media.

  All my left-wing woke friends are CONVINCED that the social media platforms uphold the white supremacist misogynistic patriarchy, and they have plenty of screenshots and evidence ... 

  All my alt/center-right/libertarian friends are CONVINCED the social media platforms uphold the woke BLM/Marxist/LGBTQ agenda and they ALSO have plenty of screenshots and evidence of times when...


Yishan is bothsides-ing. It's a cop-out.

I do not believe or claim that dang is personally biased against "my side". I think he tries incredibly hard and is rigorously driven to avoid bias because of the grand social experiment he's shepherding. It's our privilege to participate in that experiment.

What I claim is that the policy of "assume good faith" has an inevitable, unavoidable side effect of driving away outgroups. On reflection it wouldn't be just left-affiliated outgroups either — the mechanism is that leniency in moderation means that low-grade hostility is tolerated — BUT that hostility could be directed against the outgroups of any forum, and who the outgroups are is variable and forum-specific.


The fact that both sides make similar claims does not necessarily mean both of them are wrong. One can be simply mistaken - or lying - and the other telling the truth. An aggressor frequently claims the victim started it, but that doesn't mean aggressors and victims do not exist.

You can see that the people who control moderation on most social networks are on the political left, and aren't shy about discussing it. You can see that political support from social media workers leans to the left (I think over 99% of all political donations from Twitter go to the left). You see that these people routinely proclaim support for leftist political causes on their personal social media. Do you think there's a chance this influences their decisions?

Now, we've been told a lot about implicit biases and how people who think they are even-handed may still be prejudiced even if they don't realize it, and need to take deliberate action to ensure this bias does not influence their work. Do you think this also applies to the people controlling social media, or are they a unique caste of saints untainted by the biases the rest of humanity suffers from? Or, more likely, do they not think they need to take any such actions, because they live in a 99%-agreement bubble? Do you think the existence of 99%-agreement bubbles hinders their ability to evaluate their biases and enact measures to reduce their impact on their work?


See for example:

https://melissamcewen.medium.com/a-guide-to-hacker-news-for-...

That author advocates that HN stay how it is — or rather, she advocates changing it from within. But I don't think it's possible — I believe that the "assume good faith" proposition leads to toleration of unlimited low-grade hostility that does not rise above the threshold of flagging. The result is that outgroups flee the platform. Have you not noticed the gender imbalance around here?

From what I can see it's pick your poison:

* "Assume good faith" — outgroups don't participate. Exclusion may not be the intent, but it is the de facto result.

* Siloed: outgroups can and will meaningfully participate, but debate is more constrained.


Meh, low-grade harassment to you might be completely acceptable conversation to other people, and that is the crux of the issue. Voting is used to identify bad-faith responses. If certain people can't handle responses which are in good faith, then I am personally glad that they are not participating. Those are not the kind of people that make good discussions anyway.


Find me a comment that’s remotely critical of anything and I’ll find you an “outgroup” who believes it’s low-grade hostility.


I wish the article gave more examples than a single two-line comment (which was already flagged and called out for being "stupid" by the moderator anyway).


Can't say I've noticed the gender of any user here?


You won't find genitalia on HN. You will find pervasive psychological profiles that correlate well with gender, if you just pay attention.


I was interested but not particularly moved by that example.


Yeah, I'm not surprised. I mean, the ridiculous gender imbalance here doesn't bother a lot of those who have stayed. It's a vicious cycle — the people who it bothers leave, and since those are disproportionately women, the imbalance gets worse.


I think it looks more like people interested in gender and identity based activism leave, and people interested in signal-to-noise ratio stay. I literally have no idea about the gender of anyone but a handful of "celebrities" on HN - and it has never occurred to me to account for it when reading or writing comments (excluding, of course, ones directly discussing the gender issues of a specific person, which are thankfully rare). But of course I am aware of the demographics of the tech community, and there's a place to discuss that. It just doesn't have to be the only - or the primary - thing to discuss on HN. For me. I guess for people who think otherwise there might be other venues they prefer, and that's fine too. There's absolutely no problem with the existence of multiple forums with a diverse set of focal topics. One doesn't even have to "leave" HN to be able to discuss gender issues in other places.


The point is that regardless of reason, women are by and large not well represented on HN. This is easily discovered by looking through the profiles of the top 100 posters and following the breadcrumbs. I recall that a few years ago, DoreenMichele was the only identifiable female.

> I think it looks more like people interested in gender and identity based activism leave, and people interested in signal-to-noise ratio stay.

That's not true, as there are plenty of people discussing male-focused gender issues and male identity based activism, in addition to those discussing women, transgender, and universal gender issues.


"Regardless of reason" makes a poor point.

> This is easily discovered by looking through the profiles of the top 100 posters and following the breadcrumbs.

Why would I want to do that? I mean, unless I am specifically interested either in gender activism (I am not), or in dating a top-100 HN poster (I am happily married) - why would I bother to research what kind of bits each of the top-100 HN posters carries between their legs?

> That's not true, as there are plenty of people discussing male-focused gender issues

I would like to see some data supporting the assumption that male-focused gender issues and male-focused activism are discussed significantly more frequently on HN than female-focused gender issues and female-focused activism. This does not match my anecdotal impression - but of course I could be wrong. Could I see the source of your claims?


> Why would I want to do that?

I only mention it to support the assertion that the gender imbalance exists. You wouldn't do it ordinarily.

> I would like to see some data supporting the assumption that male-focused gender issues and male-focused activism are discussed significantly more frequently on HN than female-focused gender issues and female-focused activism.

I didn't make such an assertion, as I was only rebutting the idea that "people interested in gender and identity based activism leave" per se. There are plenty of such discussions — a 700-comment thread on limb lengthening surgery just went by within the last day. The thread is filled with testimonials as to the discrimination faced by short men and the harms caused, including stories of heinous, callous bigotry.

(To which I say: yes let's please work together to be better to each other. Let's start from the assumption that it is human nature to be prejudiced and we all need to work to overcome our biases, rather than dividing ourselves into "x-ist"/"not-x-ist".)

I actually suspect that there may in fact be more male-focused gender identity discussion on HN than female-focused, but I haven't done the research. The misperception about that arises because men may not classify their own identity discussions as identity discussions.


FWIW I agree with you. That this and your other post are both light gray kinda proves your point. Neither post ‘contributes nothing to the discussion’.


Well, the HN guidelines for downvoting consider disagreement a legitimate criterion. I take the downvotes as connoting mostly disagreement rather than a judgment that I was commenting in bad faith or posting something substance-free.

The disagreement may be unsurprising, but I'd rather be persuasive. I'm not chasing "heretic" cred.


I'm basically an expert in retail automobiles: I've been heavily involved in car racing, sold cars for Audi and Porsche for over a decade, and have a mechanical engineering degree. The amount of misinformation I see in Jalopnik articles, and the comments, is staggering. I attempted to post civil replies and corrections countless times. I was never 'ungrayed'. Not only did I eventually stop using the site, I can only imagine the thousands of people that missed out on useful knowledge.


I have found that I'm always most disappointed in the comments in areas that I know something about, and I rarely comment on such articles (luckily it doesn't happen that often). To be fair, though, I don't see HN as some kind of authoritative information source; it's just a place for somewhat like-minded people to come and shoot the shit about stuff. So people have some dumb opinions about cars; they just want to talk about it, they're not writing legislation or something. It's like overhearing a conversation in a coffee shop: you can walk over and tell the people they're wrong, but my experience is that rarely leads to any kind of camaraderie. Better to just take part in the conversation and not worry if people are wrong (not that I'm good at following my own advice).


That's a totally rational point about general coffee shop etiquette. But an automotive website with a comments section where only approved people can comment, and where someone clearly posting well-thought-out details can't even get approved to post... has a problem.


Over the years I've left a couple hundred comments on Jezebel and TheRoot. I'd say a third or so have gotten ungreyed — almost exclusively replies on existing threads that have been either approved or replied to by the author of the parent comment, in the same mechanism as RadioPaper.

Gawker staff used to participate in the comments, but those days are long gone. That means that a top-level comment by anybody who isn't already pre-approved will almost certainly remain in the greys.

I miss the way it was, but the Gawker properties have gone through multiple wringers.

So I agree that the Gawker properties have serious problems, but I think that the commenting mechanism actually yields some pretty great results. It would be fab if Radiopaper could iterate and improve on that model!


This sounds like Gell-Mann amnesia for forums.


> pervasive low-grade harassment

Stupid people consider comments that disagree with anything they say “low-grade harassment”. This is visible all of the time when you look at people repeating falsehoods even about mundane things (not even hot button political issues). They become defensive about “being called out”.

We absolutely should not be encouraging any “conversation” systems that let people just hide any comment that they don’t pre-approve.


I don't know how to phrase this without seeming rude, but this idea at its very core seems terrible and short lived. The only way I can possibly see this ending is with an aggressive echo chamber culture that drives off everyone else.

I've seen forum cliques and mass blocking kill off many sites back in the day, and this design would empower this attitude in such a way that it could never be changed.


I completely agree


What about a system where the unapproved comments are published, but mildly hidden/deprioritized?

The idea being that if viewers can see what the OP doesn't approve of, they can make a judgment as to whether that OP is making good-faith arguments and accepting reasonable replies or whether they are just trying to build an echo chamber


This sounds like an interesting approach


I like that a lot. Hoping OP answers


Sock-puppet bots are definitely a risk for us, as ordinary bots are for other sites. Our core mechanism is complemented by, and does not replace, the standard anti-spam, anti-bot techniques.

Additionally, other users on Radiopaper will understand the mechanism. So, if they see a controversial post with 100% fawning replies, they're going to understand what's happening, and discount the credibility of those replies accordingly.


An interesting feature would be to allow people to "disagree but show" for comments they disagreed with but considered at least thoughtful/civil/in good faith, and then combine that with user stats:

When others comment on my posts:

    - % rejected
    - % disagreed but shown
    - % accepted
When I comment:

    - % rejected
    - % disagreed but shown
    - % accepted
This would incentivize openness and good faith conversation, and make abusers on either front (trolling or over-policing) instantly visible.
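
To make the bookkeeping concrete, something like this minimal sketch would cover it (TypeScript; the type and field names are made up, not anything Radiopaper actually has):

    type Outcome = "rejected" | "disagreedButShown" | "accepted";

    // Counts of outcomes for one user, in one direction (received or sent).
    type OutcomeCounts = Record<Outcome, number>;

    // Turn raw counts into the percentages a profile page would display.
    function percentages(counts: OutcomeCounts): OutcomeCounts {
      const total = counts.rejected + counts.disagreedButShown + counts.accepted;
      const pct = (n: number) => (total === 0 ? 0 : Math.round((n / total) * 100));
      return {
        rejected: pct(counts.rejected),
        disagreedButShown: pct(counts.disagreedButShown),
        accepted: pct(counts.accepted),
      };
    }

    // Example: shown on a profile as "10% rejected / 30% disagreed but shown / 60% accepted".
    percentages({ rejected: 1, disagreedButShown: 3, accepted: 6 });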


An earlier iteration of the site actually had a set of reactions that included a 'disagree' option, with something like what you're proposing in mind. We ended up removing reactions from the product temporarily for the sake of simplicity, but hope to add them back in shortly.

The full set was -Agree -Disagree -Interesting -Beautiful.

We still use them internally. You can see what they look like here: https://twitter.com/DavidSchaengold/status/15202075451048468...


It would be nice if a moderation process allowed people to distinguish between comments which are "correct, but expressed poorly" and those which are deemed "incorrect, but expressed politely".

The former comments would present an opportunity for someone to try to reword them (and possibly steelman them), while the latter would be a signal to the commenter that people disagreed but appreciated their good faith effort to engage constructively.


Thanks for taking the time to explain


The goal here seems to be twofold. On the one hand, to put the onus on the one receiving messages from the troll or heckler to either stay silent and not give them a voice, or respond and allow the world to see the negative comment. On the other hand, with respect to the troll responding to a comment, many trolls don't care if they're seen as such.

And trolls aside, it seems that boring messages are less likely to receive a reply than ones that are more inflammatory or generally compelling. Is that true?


An approach where comments have to be seconded by someone else has been tried before. It doesn't work unless either 1) the community is small, or 2) you have some means of verifying the identity, or at least that it's a unique person, of the seconder.

It's the usual spam problem. If people can generate identities cheaply, "who" based blocking will not work.


What do you mean "drive thru fast food"? Why would people want to do that, and not have a table and chairs, especially when the price is the same? Plus, what's to prevent someone from ordering and just driving away without paying? It would be almost impossible to catch them, after all. Why would anyone wait in line, with their car, burning gas and inhaling the fumes from the next car, right before ordering a meal?

It's insanity, and will obviously never work.


This analogy doesn't work at all


Reminds me of the "safe space" from South Park.


> > * The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

>Edit: Also if enough people of the same ideological leaning (or an army of bots) did this and engaged only with themselves, it could give the illusion that the general sentiment of users of your service is aligned in some belief or system of ideals or w.e? Idk, maybe I’m just spewing nonsense.. I like the overall idea and I wish your team the best of luck.

It works like that in almost all cases where there is a user voting system that can make a comment invisible to other users. As one side gains momentum, it becomes easier and easier for them to block dissenting voices. After some time, discussions start to look one-sided and people who disagree just stop trying to engage. That feedback loop eventually turns such a forum into an echo chamber (a loaded term, but no side is free of it). Even more so if the moderators are on their side.

But there are also counterexamples: cases where the models/theories advocated in comments are so disconnected from reality and the facts that there is nothing you can even disagree on.

>>“The brain mistakes familiarity for truth,” van der Linden says. https://www.nationalgeographic.com/science/article/why-peopl... (this is one example; there are more that show this kind of problem for other ways of thinking)

There was a study (https://sci-hub.se/10.1016/j.intell.2018.03.009, https://www.nature.com/articles/s44159-021-00006-y) showing that repeating pseudoscience and hoaxes, even when done to debunk them, caused people to see them as more likely to be true, because they become more familiar with them (and forget why/where they learned about them). So when someone repeats them in order to propagate them, the harm can be even bigger.

So there is an argument for blocking harmful misinformation (in the end, all of it ends up that way) to stop its effects on readers. But this approach is ripe for abuse: there is little stopping one from lumping into that category theories/models one simply doesn't like or disagrees with.

In the end, this is one of the reasons why assessing whether something is true is hard - and even harder once you add the Internet into the mix.

We are not perfect logical machines with perfect memory; our brains have flaws that can be used against us, to persuade us into believing falsehoods.


> It's possible to interact with Radiopaper entirely by email, and never log-in interactively.

You guys are gods.


This looks like an interesting idea - a twist on mailing lists with a web-facing interface. One thing I couldn't figure out: are all threads one-on-one? Or is it possible to have threads with many people in them?

A small technical nitpick: it seems cache headers are not set correctly for images and such right now. When you scroll up and down the homepage, the same images get re-requested again and again. Out of curiosity, why are the top items unrendered when scrolling down?


Yes, all the threads are one-on-one. We've toyed with the idea of group discussions, but haven't landed on what the path forward is there just yet.

Thanks for highlighting the image caching issue, we'll look into it! As for the scrolling behavior, we render a sliding-window of rows to keep DOM size down, but the UX does need some work.
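
To sketch the sliding-window idea (simplified TypeScript, not our actual component code; the names here are made up): given the scroll offset and a fixed row height, we only mount the rows near the viewport and stand in for everything above with a spacer of the right height.

    interface RenderWindow {
      start: number;    // index of the first mounted row
      end: number;      // index one past the last mounted row
      spacerPx: number; // height of the placeholder that replaces the rows above
    }

    // Compute which rows to render for a virtualized list with fixed-height rows.
    function visibleWindow(
      scrollTop: number,
      viewportHeight: number,
      rowHeight: number,
      totalRows: number,
      buffer = 5, // extra rows above/below the viewport to reduce flicker while scrolling
    ): RenderWindow {
      const start = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
      const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight) + buffer);
      return { start, end, spacerPx: start * rowHeight };
    }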


Re caching:

I had a look into it, and it seems we set the following headers when retrieving user images:

> cache-control: public, max-age=3600

So I believe your browser _should_ be caching these.
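
If you want to double-check on your end, pasting something like this into the devtools console while on the site will show the headers your browser actually receives (the image path below is just a placeholder):

    // The path here is a placeholder; substitute the URL of any image you see re-requested.
    const res = await fetch("/path/to/a/user-image.png");
    console.log(res.headers.get("cache-control")); // expected: "public, max-age=3600"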


It looks like you can add to an existing conversation, there is a link on the right-side.


This is also correct, you can comment on any message in an existing conversation. The same publication rules apply here – this spins off a new one-on-one conversation with the message's author.


Have you talked to people who have handled trolling and abuse at scale before? I ask because there seem to be some huge holes in your model.

1) Message sending is already a victory condition

If I understand correctly, when someone comments, it's always forwarded to the OP. Many harassers will use that to send vile things to the OP. If accounts can be generated without limit, they'll flood the OP's inbox.

2) Since replies are private at first, others will not have the chance to downvote

This deprives the system operator of some potentially valuable signals. Of course they can be brigaded too.

3) Opt-in once and receive a lifetime of abuse

It seems like your system allows individuals to opt in to displaying replies, but gives them no way to opt out. Many trolls will consider it a game to deceive the OP with an apparently sincere reply, and then they'll have an unobstructed channel.

I'm not even a specialist in these matters. Seriously, there are lots of people who are experts in troll fighting at this time, I strongly suggest you talk to them.


This looks very cool; I've signed up. You can tell from both the visual and mechanism design that the founders really care about their product.

Also, if the Radiopaper team is reading this... You've probably gotten this from other people, but it is hard to know which conversations to enter -- it'd be helpful if they were organized with a title or topic or a tag.


Thanks for signing up! And thanks for the feedback. It's clear that discoverability is an issue with the site in its current form. We hope to add some features to improve that experience very soon.


This vaguely reminds me of an old social network/forum from the time when Path was around iirc.

It was a public question/answer or discussion site with, again IIRC, public full-ID profiles promoting so-called known (famous?) folks/profiles. I call such people the "in folks" - in other words, "influencers". That's what it seemed like; a first look gave the impression that those people were highlighted, not the content. I checked it on mobile - a quick check. It also reminds me of reco.com, with its focus on people like Bill Gates et al., as if people who read books just for the pleasure of reading would take recommendations from Bill Gates (okay, this sounds too generalised!).

Organic conversation is the last thing that happens on such sites. This might be different, though - best wishes to the people behind it.

I can’t remember the name. “Peach” comes to mind for some reason but I checked and Peach is not that. Name was something “get———“.com I think.


This looks like a nice idea, though I can't really wrap my head around the UI. How do I tell people how to discover me? I signed up and my username was my email; I changed it to my name, but the URL didn't change, and now I don't know if my email is set to something unusable (or, if not, where to find it).


We don't have usernames!

You can log in with OAuth, or by Email.

You can change your _display name_ on your profile page, and that's what appears publicly on the site (your email doesn't appear publicly, even if you log in by address!).

Today, to get a custom profile URL like radiopaper.com/Dave, you just need to ask and we'll create an alias for you. There isn't a self-serve alias mechanism available today, but we're thinking about how to provide one. Any profile can be linked by a URL of the form radiopaper.com/user/<uid>; for instance, my profile can be reached at https://radiopaper.com/user/AwQLwnOgQFdyDfNoYfNCSSsVdx43, and "Dave" is just an alias.

Clearly, having separate concepts of login method vs display name vs profile URL is all a bit confusing; this is helpful feedback!


Ahh okay, thanks! Could I have the username "Stavros" then please? I am currently https://radiopaper.com/user/U2goL6aTrtYs38S1l0kMkJuMwMm1

Also, having a section in the profile where I could change my login email address would help with the UX.


Done - https://radiopaper.com/Stavros

Being able to change your email address is a known gap, sorry about that. For now if you want to use a different email you'll need to make a second account. We plan to make account merging possible in the future.

A short-sighted technical decision on my part: we use the Firebase Auth UIDs as our internal user IDs, but Firebase Auth does not allow you to have multiple email auth-providers on a single account. So we need to add a layer of indirection in our data model.
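
Concretely, the indirection would look something like this (a sketch only; the collection names and fields here are invented, not our real schema): every Firebase Auth UID maps to a canonical internal user ID, so multiple login identities can resolve to one account.

    import { initializeApp } from "firebase-admin/app";
    import { getFirestore } from "firebase-admin/firestore";

    initializeApp();
    const db = getFirestore();

    // Hypothetical collections:
    //   authLinks/{authUid} -> { userId }   (one doc per login identity)
    //   users/{userId}      -> profile data (one doc per person)
    async function resolveUserId(authUid: string): Promise<string> {
      const link = await db.collection("authLinks").doc(authUid).get();
      if (link.exists) {
        return link.get("userId") as string;
      }
      // First sign-in with this identity: create a canonical user and link it.
      const user = await db.collection("users").add({ createdAt: new Date() });
      await db.collection("authLinks").doc(authUid).set({ userId: user.id });
      return user.id;
    }

Merging accounts would then just mean pointing two authLinks docs at the same userId.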


Thank you! The inability to change the email address is OK, but it should still appear somewhere so I know what it is and that it's not my username. Maybe a note of "you can't change this yet, sorry!" would be nice too.


I enjoy the design; clean and simple! Somehow, the initials in red remind me of liturgical texts.

Also, I love the close integration with email - I think it's strange that the mail protocol isn't more widely used - having my own copies of the conversations, easily searchable and exportable, is a huge advantage!


Thank you! We felt that it was really important to get the design right, and I'm glad that you found it clean and simple. You're absolutely right that we were inspired by the graphic design of old breviaries and liturgical books.

Email is in some ways like a whole hidden social internet. A lot of people use email who have no interest in Twitter, Facebook, etc. And many of those people are really interesting. Radiopaper provides a way for them to have public conversations online without having to learn a new technology, and without having to worry about what might end up associated with their online presence in the absence of active management.


What if I don't like my initials?

I share them with a political party that I disagree with, and it always irks when I am called by them.

Maybe I should create an account and check if there's a setting before I whine about it... but this is still the internet!


> The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

Isn't this the same thing as protecting your tweets and restricting who can tag you and reply to your tweets?


That’s a very interesting question, and not one I have thought about before. Thank you for asking it, it has added a new dimension to my thinking and I will tell my grandchildren about your incisive inquiry. No. Do you see how you described a system based on identity, whereas this forces you to review individual comments before they’re published? There’s no way on Twitter to request the ability to reply.

Edited to add the kind of fake boilerplate that seems to gush from every message on there in order to encourage acceptance.


Thank you, cormacrelf. Your gratuitous, utterly unwarranted nastiness here is an excellent example of why communities like Radiopaper sound particularly appealing right now.

That kind of overtly obnoxious response is generally discouraged on HN.


My comment was a direct illustration of the difference between Twitter’s reply-limiting system and Radiopaper’s, and one of its primary effects, which is to dress up discussion in niceties out of fear of not getting accepted. The first sentence is a direct quote from one of the threads on Radiopaper, one which seemed to be echoed in many others in their own way. It was a very light parody. I didn’t direct vitriol at anybody. And I raised a point that nobody else had. If you thought I was being sardonic and nasty towards the question about Twitter, all that went way over your head.

I don’t think the over-friendliness would last as a phenomenon because frankly I got quickly bored reading replies like that, so I would expect the community to get over it and switch to trying to write good posts instead of performatively respectful ones.


Rather than having to field and read through all of the abuse myself so that others don't see it, how about I just not accept replies from anyone but my whitelist, which I build based on their general behaviour?


Your edit message cancels out your edits in a pretty funny way here.


> The key mechanism that makes Radiopaper different from other social networks, and more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

Advocate here. The Devil is my boss. Don't hate, please. But I'd like to mention...

So an idea isn't valid until the person being "challenged" signs off? That might mitigate trolling but there's also (likely) collateral damage. It could also accelerate (?) echo chambers.

What makes sense, off the top of my head, is something similar to Google's image-based CAPTCHA. That is: does X idea make sense? Is it factual and correct?

Leaving the "challengee" in control feels subject to bias and "reverse trolling".

I'm certainly pro an alternative to the usual social media suspects, but people are going to be people. For better and for worse.


I just posted this here [1].

Is moderation reviewable? One issue with other platforms is that you often cannot see what they have removed. On Reddit, they show you your removed comments as if they're not removed when you're logged in. This can lead to situations where everyone thinks their voice is heard all the time, when in reality it's sometimes filtered.

The biggest trolls in this scenario are the mods. Secret removals can contribute to mods manipulating the conversation through what Peter Pomerantsev calls "consensus cracking" in his book "This Is Not Propaganda".

-Author of Reveddit

[1] https://radiopaper.com/conversation/46ERO4T8dF9hmx156gmm/0oV...


Radiopaper is both empowering and requiring every user to be a content moderator. Let's say Alice and Bob sign up and start holding interesting conversations. Then Nasty McNasterson starts sending hate mail and death wishes to Alice and Bob. Alice and Bob can stop Nasty's messages from showing publicly, but first they have to see the never-ending stream of abuse. What can they do to stop this? Can they block Nasty from sending them messages? If enough people block Nasty, can Nasty be kicked off the platform? On the other hand, if Nasty and an army of sock puppets (or a brigade of chat buddies) block Alice and Bob, will they get kicked off instead?


Adding a feature to block specific accounts is on our short-term roadmap.

We'll also endeavor to block accounts from the platform that violate our policies (https://radiopaper.com/policies).


This is very neat! The premise (1-1 convos, messages published only on reply) is pretty unique.

It's not at all clear to me how that premise is going to lead to more civil conversations, but I am 100% behind people trying out different online conversation mechanics and seeing what falls out of them.

Stack overflow, 4chan, twitter, every phpBB ever... the communities they developed were all heavily influenced by the mechanics of interacting with them. I strongly believe that there's plenty of solution space left to explore with the problem of "how do we design interaction mechanics to produce the community we want." I applaud anyone exploring that solution space!


"Email as ID" is a great idea. But will it survive the pressure to monetize the service? If you ever want to sell the business, buyers will want more reliable IDs.

One big thing I've been wanting from such a service is subscription to users I find interesting. Someone has posted a great insight? I add the user to my list and my daily email now includes whatever that user has posted yesterday. The email has links to unsubscribe from users. As a side effect, this breaks echo chambers: unusually motivated users won't be able to spam your subscriptions, and won't be able to ban you - the only ban is when others unsubscribe from you.


This looks pretty cool! It feels like a new take on the idea of mailing lists. One suggestion that came to my mind, is to have the ability for viewers to subscribe to conversations. It could be either through Email notifications or RSS.


Fantastic suggestions, thanks – we have a notifications system on our road map but are still formulating the exact mechanics. We would also love to support RSS! One of many things that we will try to get to soon.


+1 for RSS feeds!

Also, is there a blog or some way to follow¹ development and know when new features roll out?

¹ Preferably via RSS!!


If we're making technology requests, is it too late to ask that the site work without JavaScript? I guess you could say "It works without JavaScript because you can interact via email", which is fair, but if you're adding support for RSS then it would be nice to be able to access the site via an API or with curl.


As others have argued, this has still many opportunities for abuse. I believe that the major problem all online social media faces isn't individual comments - it's that it's incredibly hard to have persistent consequences for bad actors.

Most of them, you just sockpuppet around. Yes, you could enforce "real" identities, and that comes with a whole host of different problems.

I love that you're experimenting in the space - we desperately need improvement. I just think you still have a few experiments ahead of you (and I hope some of them center on the idea of community instead of individual interactions).


I applaud any attempt to increase the diversity of platforms for people to share ideas.

There will never be a single platform that will satisfy everybody, because people are diverse and have different definitions of what it means to troll.


> is that messages are not published until the counterparty replies or accepts your comment

Ok, this is a really smart idea.

Can you just please add a link to your about page on your home page? Without this posting, I never would have found it.


Sorry it's a little hard to find – if you hover your cursor at the bottom of any page there'll be a link to the "about" and "policies" pages, https://radiopaper.com/about


Oh, there it is!


This seems more like a Facebook wall where you approve each post. The design looks good from what I can tell; not sure if you have apps or if this is open source, but in terms of making a company out of this it seems like little value compared to free alternatives that already exist. With email, I can already create whitelists. Also, what is to stop a group of accounts from spamming someone with requests?

Mastodon lets you filter people and block them as well, and there are plenty of similar apps that provide actions to create and filter groups of conversations.


This is a fantastic idea. My only suggestion would be to make those features that are not immediately apparent more apparent. Even a simple “About” or “What is Radiopaper?” page would help a lot.


https://radiopaper.com/about

It's helpful feedback that this isn't as discoverable as it should be. If you hover your cursor towards the bottom of the explore page, some links show up.


This idea is good and the design is awesome. Here is a little feedback:

1. It needs categories. I don't feel motivated to read most of the conversations, but if they were related to my interests I definitely would.

2. Users should be able to add bios - who are these people?

3. Where it says you can start a conversation, there could be a list of experts in my category I can send the question to - not popular people, but people who have posted a lot in that category. This could be Quora without the spam!


Thanks for the feedback!

1. We're certainly looking into this, and other approaches to content discovery.

2. You can already add bios! Go to your user page (link in the top-right), and it should be clear how to add a bio.

3. Thanks for the suggestion, we'll definitely think about this more.


My personal opinion here. We (NodeBB) decided to go with categories from the get go because it was a form of hierarchy that worked well for YEARS, and you don't go breaking things that work fine.

IIRC Discourse started without categories. The whole "bucket of messages" shebang. It did not go well, in the sense that I believe people kept asking for categories.

There's something special about a curated list of folders to slot messages into, that tags and labels don't quite capture.


Or to put it simply, most people are disinterested in most things. If you present them with most things, they will be mostly disinterested.


A little defeatist, but correct in its own way.

I prefer to look at it as "each person is interested in different things, and they mostly don't overlap with any other one person, statistically".


Currently, the user is being presented with most things, so I think my framing matches the actual user perspective with better accuracy than a cheerier wording. ;)


Really interesting concept.

I think the previews on the thumbnails shouldn't be the beginning of the message. My reason is that they oftentimes contain long-winded introductions, and I believe there could be more compelling text. Maybe that's just my impression, but I think one workaround is to use some "AI logic" to automatically summarize the thread. Just Duck Duck Go "text summarization apis" and see.


How will you stop competitors from stealing your "secret sauce"? It seems fairly straightforward to program this into another more well-entrenched forum stack.

You could, for example, say that you could compete on other feature sets, but the same argument applies.

Reply-by-email? NodeBB has this (via SendGrid) and probably Discourse too.

Extensibility? Lots of forums support plugins.

Theming? Many forums are very customisable (or at least themeable)


I don't think it's important to worry about competitors stealing anything


I'm simply asking because if someone were to do a feature comparison between forum software, it usually comes down to how many checkboxes you can hit.

If your differentiating factor is something that is simple (and even mind blowing), it's especially easy for it to be stolen.


Really nice work on RadioPaper. I think I have been building something very similar - https://taaalk.co, which is a social network for closed (troll-free), but publicly readable, conversations.

I went with a very different design approach - a bit more slapstick; referencing the messaging apps we are all familiar with.


Ok, but what would be a widely accepted definition of trolling? Is it voicing unpopular/outrageous opinions that one doesn't even truly believe in, for the purpose of enticing third parties to engage in a shouting match?

If so, how does requiring a reply before publishing prevent trolling?

The problem with trolls seems to be that they are very difficult to ignore.


Is this vulnerable to sock puppets? What if I run two accounts, post bait on one, and reply with my troll reply through the other sock? Have I then succeeded in creating 'troll theater' that is visible to third parties (the real targets)? I already encounter lots of Reddit theater, which seems to be performed to influence third-party readers.


I'm not sure you can make a successful social network in 2022 without financial incentives for creators to make good content. Anyone good at making content online is getting paid for it. The novelty of putting things online for free wore off almost a decade ago. Platforms that don't have incentives yield to platforms that do.


The project reminds me a lot of https://letter.wiki


Love the design and after reading the post that @duck linked to it clicked. Not sure how to elevator pitch the concept more efficiently but the email interaction mechanism might be viral enough to not have to really sell people on the idea. Super cool, hope it keeps growing and you can't fit in the free tier anymore!


Hacker News has definitely pushed us out of the free tier for today, but these services are still impressively cheap. We may have a $5 cloud bill this month.


I must say I love the design of the site. It’s lightweight and optimized for reading.

Will there be groups of conversation threads, like on Reddit? How does one find interesting conversations to follow?

Right now there's a list of conversations that one can browse through, but I assume the noise-to-signal ratio will increase as more users make threads.


Is this an option for blog comments or something that is standalone? I’m on mobile and the interface looks pretty, but seems a little strange. I’m being exposed to the middle of conversations and missing the lede. Like walking into a room and having no clue what 2 people are chatting about.


Looks like a cool idea, but I'm having trouble signing in. Clicking the link in the email just bounces me back to the website, where I'm still logged out.

I like the idea that messages only appear after they've been approved by the receiving party. That's pretty clever.


Sorry about that! Would you be able to let us know which browser and device you're using?

We're seeing some errors come in through Sentry about Firebase Auth being unable to persist things in local storage; that may be what's going on here. We'll keep digging!


Thank you! This is happening on iOS 14.5.1, in Safari. I’m on an iPhone XS.


Thanks again for the bug report. We've managed to get someone on the team to reproduce the issue, it seems login-by-email doesn't work correctly in Safari for iOS. We found that after you click the email login link, it'll appear you're logged out, but once you refresh the page, you'll be logged in.

We'll work on getting this squared away, but in the meantime, perhaps try the refresh workaround, or a different browser or OAuth login method.


The core premise seems completely asinine and the format is eye-rending.

Disallowing "dislikes" doesn't stop trolling (the recipient still receives the message, after all). "Like" it or not, negative feedback is required for a healthy community-moderated chat board.


Have ideas about UX and developer productivity? Message me on Radiopaper: https://radiopaper.com/user/K3ST04CaOZV4XwHThdMY3rOY8Rq2


Michael,

So glad that you're excited to use the platform.

Would you like a custom alias (radiopaper.com/MichaelLeonhard or similar)? At the moment we provision those manually, on-demand.

We saw your message to Socrates, I'm afraid he's not around to reply today, so it'll likely remain unpublished.


radiopaper.com/mleonhard would be nice. :)

I was hoping that https://radiopaper.com/Socrates is role-played by a trained living philosopher. That would be fun.


How can someone get involved in the project? Do you plan to hire soon? Do you plan to develop mobile apps?

Working on a project that aims to improve online discussions is my current long term career goal.

The "counterpart accepting your contribution" filter seems neat and simple.


A worthy goal indeed! We are very much looking at expansion of the team, as there are a number of additional features we'd like to roll out to strengthen the product (including a stronger mobile offering). We're exploring options for funding Radiopaper so we can kick it into high gear.


Please don't hesitate to get in touch. alaeri(at)gmail.com (android dev 8y+ experience).


Do you have plans for dealing with bots / bot networks that can simulate a conversation between two or more people? I do think the overall premise is interesting, though, and I can see how it might deter undesired content from taking over your social network.


Thanks for the question!

There's certainly a risk that some kinds of abuse patterns will get through our reply-to-publish model. Eventually we'd like to try and detect bot traffic and challenge them with a CAPTCHA or similar.


It is an interesting concept. Are you going to focus on the people or the topics? If you lean more on the people aspect, then it will be a social network of some sort. If you focus on topics, then it is more like a web forum.


All I get is a totally blank screen for any page on your site. Probably some script blocker or other security plugin. I'm using Waterfox. Does your site require a more "modern" browser?


I have my content blocker - 1Blocker - hiding comments for me by default. The site works, but the reply functionality does not. I’m on iOS so can’t easily test, but I assume that your reply div has a class or id containing the word ‘comment’.

Entirely my fault, not yours OP, but just FYI.


This is a fantastic idea. I wish you all the best of luck. One of those once in a decade "it's obvious when you hear it, but no one said it before" ideas for a big problem.


How do you plan to balance troll resistance with the perception of cancel culture, censorship, and anti-free speech that are echoing through the main social networks at the moment?


Does the perception of those things actually matter? From my entirely unscientific anecdata it seems people concerned about being "canceled" are toxic assholes that can't believe other people don't want to deal with their shit. No one is owed a platform to be a toxic asshole. Trolls don't deserve a response on someone's platform. They're all free to go start their own platforms and pay for the privilege of broadcasting their bullshit.


> toxic assholes that can't believe other people don't want to deal with their shit

To use Twitter as an example, there have been "block" and "mute" buttons for quite a while. I don't know why people don't use these more. To me, it's far more likely that the cancellation intent is to prevent the so-called troll from speaking to others.


Pretty sure it's not assholes but rather people with opinions that the others hate. Or is that what "toxic assholes" means to you? Is expressing an assholey opinion in a polite, respectful way still toxic assholery? If that's the case, then you need to step back and work out how you decide which opinions are assholey.


To use an extreme example, a polite Nazi is still a Nazi. No matter how respectful they might seem they are still at their core awful pieces of shit.

There's also toxic assholes just due to their obnoxious behavior.

Neither inherently deserves to be heard or be amplified. Nobody needs to put up with their shit.


How do you decide that a Nazi is a toxic asshole while other people aren't? I'm not asking about that specific type of person but about your general way of classifying them. Where's the boundary? If some new ideology comes along, how will you know how to classify its adherents?


> How do you decide that a Nazi is a toxic asshole while other people aren't?

Oh, that's easy. Advocating the wholesale extermination of people based on ethnicity is a strong signal you're a toxic asshole. If a new ideology comes along advocating the same thing...there's a good chance its adherents are also toxic assholes.

Is this even a serious fucking question?


What about Nazis who don't advocate for extermination of people based on ethnicity? If you associate yourself with the ideology, are you automatically guilty of advocating all its things even if you don't agree with all of them?

And yes, it's a serious question. Seems like you have a simple decision tree of ethnic extermination? -> Toxic asshole.


Yes, it almost led to a hostile takeover attempt of Twitter.


Our mechanism is designed to make these questions less urgent. In in-person conversations, people are usually able to act as informal moderators for each other. We hope that will work the same way on Radiopaper, rather than needing to rely on the decisions of those who operate the platform. If you don't like a message directed to you, just ignore it, and it won't be published.

There will of course still be cases where Radiopaper moderators need to step in. We have a content policy that outlines the reasons we might do so: https://radiopaper.com/policies. But our hope is that most of the worst stuff will just never get published, because it will simply be ignored.


A question on the services you mentioned: why do you use the premium offerings of Sentry, Mixpanel, GitHub, etc. already? Which features do you rely on that the free plans don't provide?


I still think a forced comment on downvote is the best improvement one could make to the hn comment system.

Same for reddit.

Using mail is interesting in the long term as I see SMTP replacing all IM protocols.


I always liked the "pay a penny a post, in advance" idea. In advance means you buy 1000 posts up front for $10 or so.

This would make armies of spambots uneconomic.


Have you tried Locals? Not a penny a post, but $5/month per board. Can't say there aren't any trolls, but it's definitely more "real" than most social media.


Thanks for the tip, I know nothing about Locals. $5/mo for unlimited use is a different incentive structure.


For spam prevention, what is the advantage of hard currency over a proof-of-work algo? I prefer to manage as few 3rd-party services (and fees!) as possible.
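
For reference, the kind of thing I mean is hashcash-style and only takes a few lines; here's a rough TypeScript sketch using Node's crypto module (the difficulty and challenge format are arbitrary): the poster burns CPU finding a nonce whose hash has enough leading zeros, and the server verifies it with a single hash.

    import { createHash } from "crypto";

    // Hash the server-issued challenge together with a candidate nonce.
    function digest(challenge: string, nonce: number): string {
      return createHash("sha256").update(`${challenge}:${nonce}`).digest("hex");
    }

    // Client side: search for a nonce whose hash starts with `difficulty` hex zeros.
    function solve(challenge: string, difficulty: number): number {
      const prefix = "0".repeat(difficulty);
      for (let nonce = 0; ; nonce++) {
        if (digest(challenge, nonce).startsWith(prefix)) return nonce;
      }
    }

    // Server side: verifying a submitted nonce costs one hash.
    function verify(challenge: string, nonce: number, difficulty: number): boolean {
      return digest(challenge, nonce).startsWith("0".repeat(difficulty));
    }

    // Difficulty 5 means roughly 16^5 (about a million) attempts on average per post.
    const nonce = solve("post:12345", 5);
    verify("post:12345", nonce, 5); // true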


Just a heads up, apostrophes in names get encoded: https://i.imgur.com/DaJIxLF.png


oh snap! really appreciate the bug report. we'll fix it.


I found it quite unapproachable to find people I might want to interact with. Can you describe how one is supposed to discover who one wants to interact with?


Thanks for taking a look! For now there are two ways to find an interlocutor:

1. Send a message to someone you already know, using their email address.

2. Comment on or start a conversation with someone whose post you find interesting.

We expect to add more features around discovering both users and conversations in the future.


> If you want to start a conversation with someone, enter their email address and type your message

I feel this may be abused in some form. Good luck!


The login link from the email failed in Firefox (Librefox). Chrome worked. Cool idea! edit: More issues with Firefox and Librefox. :(


This is fucking cool. I love radio, and I love paper, what's not to like with radiopaper?! How much was the domain, $30000?


"><u> XSS by Bobby</u><marquee onstart='alert(document.cookie)'>XSS</marquee>


I really like this idea. However, I find the site very difficult to use because there is no subject listed for the conversations.


Is it possible to add font selection here? On my Ubuntu 20 system this very "fancy" looking font renders quite poorly.


Couple quick notes:

1. Your sign-up email hides the confirmation link in text/plain.

2. IMHO the big :::::::: header at the top is visually distracting.


1. oops. thanks for the report, we'll look into it

2. that's kind of our thing, we're gonna stick with it for a while longer :)


Thanks! Congrats on cultivating such a promising project!


It's weird that there's no way to pick a username: some users have GUIDs, others have usernames.


<noscript><p title="</noscript><img src=x onerror=alert(1)>">


I really like the idea. I really hope this and signal expand enough to kill Facebook.


This is the first modern social network I immediately signed up for. Great job!

I am a bit confused about the email thing: when I use the email of someone I know and they are not on the network, do they still get the message in their email?


Yes! They will receive an email with the message, and some boilerplate explaining that it was sent from Radiopaper and that if they reply to the email the message will be published. They'll also receive a link to view the post on the site if they prefer.


The next social media unicorns will be based around user-controlled moderation.


Pretty cool, but I will never use it until enough blue-checkmarked Twitter handles I follow are on it. If anything, it will probably be replicated by existing social networks. Good work, and it's a neat idea.


I don't get it. As if echo chambers weren't already bad enough.


I'm sorry, but the whole concept looks stupid to me: I can easily guess that there won't be useful conversation there. The system will be entirely driven by confirmation bias, in my opinion...


Cool idea... But how do you deal with sock puppets?


Love the aesthetic


"><img src=x onerror=alert(1)>


This is fantastic, the best of luck to you!


Blank page for me on Kiwi Browser.


Thanks, everyone, for your feedback and attention. We're grateful for all the suggestions, sign-ups, comments and criticisms. Here are a few things we want to add to the original post in response to your questions and feedback:

1. First, a clarification about our mechanism:

* Only the first message in a conversation or thread requires a reply or approval for publication. The algorithm is not "for every n messages in a conversation, publish n-1." The reason is discussed in some of your comments: if every message required a reply to publish, either party could unilaterally end a conversation at any time.

* Instead, once a conversation or thread is published, either party can add messages to it in any order. Does this mean that once you get past the first-message test, you can troll to your heart's content? Not really. You can keep publishing messages into the conversation, but unless your counterparty replies to you, additional messages are treated as "post-scripts." Post-scripts do not push a conversation to the top of any feed, whether on a user page or on the Explore page, and this will remain the case even when we have individual feeds and a more complex algorithm than simple reverse-chrono. In practice, a conversation gets less and less discoverable the longer it goes without both parties jumping in. (A rough sketch of this publish/post-script rule appears after this list.)

2. Second, a clarification about our ambitions:

* No system is perfectly troll-proof. We call Radiopaper "troll-resistant," rather than "troll-proof," for a reason. But we believe our level of troll resistance is enough to radically change the long-term experience of using Radiopaper compared with other social sites.

* There are two reasons for this. One is that most trolls are pretty lazy: raising the bar even slightly eliminates 90%+ of the worst actors. More importantly, we reduce the incentives for trolling. Right now, a good way to gain clout on social networks is to post inflammatory or insulting replies; on Radiopaper, that will just get you ignored. Instead, the incentive is to strike a balance between engaging your interlocutor and engaging your audience. We believe this will eventually disproportionately attract the best minds on the internet to the site.

3. Third, a note about where we're headed: our next steps in the medium term are to add social features like following, notifications, reactions, and individual feeds. This will make Radiopaper a bit more like Twitter or Facebook, and a bit less like Reddit or HN. Our goal is to be more social network than forum.

4. Lastly, a call to action: if you like the concept and/or the design, we'd love it if you started a conversation! One important feature of the site is that you can converse on Radiopaper with someone purely via their email address. Only one person in a conversation ever even needs to log in or make an account. This feature is available at radiopaper.com/new. If you don't have any ideas for who to start chatting with, check out my user page: radiopaper.com/DavidSchaengold. I'd be happy to talk.
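For the technically curious, here is a minimal sketch of the publish/post-script rule described in point 1, written in Go since that's the language of our backend. The type and method names are illustrative only, not our actual code:

    package main

    import "fmt"

    // Message is a single message in a two-party conversation.
    type Message struct {
        Author string
        Body   string
    }

    // Conversation tracks the messages exchanged so far.
    type Conversation struct {
        Messages  []Message
        Published bool // set once the opening message has been replied to or accepted
    }

    // Classify decides how a new message is handled under the rule above.
    func (c *Conversation) Classify(m Message) string {
        if !c.Published {
            // The opening message stays private until the counterparty replies or accepts it.
            return "pending"
        }
        if n := len(c.Messages); n > 0 && c.Messages[n-1].Author == m.Author {
            // Another message with no intervening reply: it publishes, but as a
            // post-script it doesn't bump the conversation in any feed.
            return "post-script"
        }
        // A reply from the counterparty publishes normally and bumps the conversation.
        return "published"
    }

    func main() {
        c := &Conversation{Published: true, Messages: []Message{{Author: "alice", Body: "hi"}}}
        fmt.Println(c.Classify(Message{Author: "alice", Body: "still there?"})) // post-script
        fmt.Println(c.Classify(Message{Author: "bob", Body: "yes!"}))           // published
    }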


Relevant and timely. Bravo!


Thanks very much!


> The notification emails contain context that explains that if you reply to the email, your message will be published on https://radiopaper.com

This (or this part of it) is not so radically different from the concept of posting to a mailing list, and having that show up on the web archive.

Also, if you write to a moderated mailing list, your conversation is initially between two people: you and the moderator.

> more resistant to trolling and abuse, is that messages are not published until the counterparty replies or accepts your comment.

While that's certainly resistant to trolling, it's not resistant to abuse. Certain users can favor certain other users, promoting their comments while suppressing others'.

If we think about HN for a moment, it's the complete opposite: here you cannot even be downvoted by the counterparty to your comment (the user to whom you are responding). There is a good reason why it is that way.

Being able to veto people replying to you sounds like a good recipe for the nucleation of echo chambers.

I don't believe there is any magic formula that eliminates trolling yet respects everyone's freedom. The closest thing is the model of Usenet, with personal kill files. Everyone has a right not to read what they don't want: what they find disruptive, annoying, offensive or whatever. Everyone also has a right not to have someone else prevent them from reading what they want. The only way to achieve both is for everyone to have their own personal filter. You get shadow-banned when you annoy everyone, ending up in everyone's kill file. That is fair; you cannot point fingers at an unfair moderator. You were talking at a group of people, each of whom preferred not to listen. The weak spot in this idea remains, of course, authentication: users changing their identity to evade being blocked by others.

An interesting concept might be Bayesian filtering. Every forum participant has their own filter. Instead of blocking/killfiling users, you indicate to the system that some content is unwanted, and that trains your filter against that sort of content. A troll who uses multiple identities cannot evade that filter as easily, because to do that they would have to change their content and style. And if, in doing so, they stop trolling, that may well be fine.
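To make that concrete, here is a toy sketch of a per-user naive Bayes filter in Go. The names, tokenization, and smoothing are arbitrary choices for illustration, not any real system:

    package main

    import (
        "fmt"
        "strings"
    )

    // Filter is one user's personal classifier: it learns only from what this
    // user marks as unwanted, rather than from a global ban list.
    type Filter struct {
        wanted, unwanted   map[string]float64 // per-token counts per class
        wantedN, unwantedN float64            // total token counts per class
    }

    func NewFilter() *Filter {
        return &Filter{wanted: map[string]float64{}, unwanted: map[string]float64{}}
    }

    // Train records the tokens of a post under the label this user gave it.
    func (f *Filter) Train(post string, isUnwanted bool) {
        for _, tok := range strings.Fields(strings.ToLower(post)) {
            if isUnwanted {
                f.unwanted[tok]++
                f.unwantedN++
            } else {
                f.wanted[tok]++
                f.wantedN++
            }
        }
    }

    // UnwantedScore returns a rough probability that a new post is unwanted, using
    // naive Bayes with add-one smoothing (log-space would be safer for long posts).
    func (f *Filter) UnwantedScore(post string) float64 {
        pU, pW := 1.0, 1.0
        for _, tok := range strings.Fields(strings.ToLower(post)) {
            pU *= (f.unwanted[tok] + 1) / (f.unwantedN + 2)
            pW *= (f.wanted[tok] + 1) / (f.wantedN + 2)
        }
        return pU / (pU + pW)
    }

    func main() {
        f := NewFilter()
        f.Train("you are an idiot and everyone here is stupid", true)
        f.Train("interesting point, here is a counterexample for your argument", false)
        fmt.Printf("%.2f\n", f.UnwantedScore("everyone here is an idiot")) // ~0.89: likely unwanted
    }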

Finally, even the perfect scheme still has the problem that people may be harming themselves if they shun whatever they disagree with. What is a "troll"? Sometimes it's actually just a bearer of an uncomfortable truth, perhaps without the best of manners. But are the manners the real reason you want to block the troll, or the uncomfortable truth?


[deleted]



