If you don't know where they are, it's probably because they don't want you to know where they are.
The fact is that nuanced discussion doesn't scale. It requires a small core of dedicated users that can't get drowned out by dross (e.g. rabid Twitter users that collectively gish gallop). Broadcasting the existence of any of these communities is an almost guaranteed path to destroying the essence of what makes them successful communities in the first place.
I'm curious about the exact mechanism for this. Is it typically a single short-duration event that has lasting effects that destroys the community (e.g. a news article that drives traffic 100x what's normal to the site)? A permanent change in the environment (eternal September)? Or just the steady, accelerating accretion of new members, which at some point overwhelms the ability of the community to incorporate them to existing norms?
Or is it more that once a community gets to a certain size, it's subject to a kind of broadcast storm?
I wonder if a technical solution could mitigate those risks. You could limit the number of public access tokens that can be issued at any one time, and have strategies in place to grant longer-lived tokens and registration to regular visitors.
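As a rough sketch of that token idea (the class, cap sizes, and TTLs are all invented for illustration, not a real system): keep a bounded pool of short-lived guest tokens, and let registered regulars receive longer-lived tokens that don't count against the cap.

```python
import time
import secrets

class AccessTokenPool:
    """Hypothetical sketch: cap the number of short-lived guest tokens
    in circulation, while registered regulars get longer-lived tokens
    outside the cap."""

    def __init__(self, max_guest_tokens=100, guest_ttl=3600,
                 member_ttl=30 * 86400):
        self.max_guest_tokens = max_guest_tokens
        self.guest_ttl = guest_ttl
        self.member_ttl = member_ttl
        self.guest_tokens = {}   # token -> expiry timestamp
        self.member_tokens = {}  # token -> expiry timestamp

    def _prune(self):
        # Drop expired guest tokens so their slots free up over time.
        now = time.time()
        self.guest_tokens = {t: exp for t, exp in self.guest_tokens.items()
                             if exp > now}

    def issue_guest(self):
        """Issue a short-lived guest token, or None if the pool is full."""
        self._prune()
        if len(self.guest_tokens) >= self.max_guest_tokens:
            return None  # the site is temporarily "full" for drive-by traffic
        token = secrets.token_hex(16)
        self.guest_tokens[token] = time.time() + self.guest_ttl
        return token

    def issue_member(self):
        """Registered regulars bypass the guest cap and get a longer TTL."""
        token = secrets.token_hex(16)
        self.member_tokens[token] = time.time() + self.member_ttl
        return token
```

The effect is that a sudden traffic spike (say, a news article) exhausts the guest pool instead of flooding the community, while committed members are unaffected.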
I’ve been on HN since it was around 500 active users. Probably earlier than that.
Quality decreased as numbers increased. pg spent much time worrying about this.
Ultimately bookface is where the interesting discussions are now, I would imagine. Not that HN isn’t interesting — it is — but it’s different than it was. I think few people would argue the opposite (partly because there are few people left from those early days).
It’s no coincidence that you have to be a founder to access bookface. One YC founder commented with surprise when he saw me on HN, saying it looked exactly like bookface. Presumably he spends his time there and not here.
There's something ironic about a forum closing down because the author's pseudonymity was threatened, endangering his IRL business dealings; and as an alternative people talking about a forum you can only get into with specific IRL business dealings, and without pseudonymity...
-> people with no prior experience with X join (X is cool now and they want to learn)
-> people who care more about the community than about X join
The problem is that the lowest common denominator keeps getting lower as you increase the size of the community. If you want a tight knit community then you need some kind of entrance criteria or at least ensure that new users are moderated (basically impossible with open registration).
I've heard this hypothesis that groups tend to go down in quality for this reason --
1. Group has an average quality.
2. People of much higher quality tend to avoid the group (e.g. they find the discussion fallacious).
3. People who have lower quality are incentivized to join the group.
4. Eventually, the group's best performers have less incentive to stay, and when they leave, the average drops further.
lobste.rs is a site similar to HN where users must be invited by another user and the public tree of invites is kept as a sort of reputation map. Might be similar to what you’re envisioning.
This seems like a fair point, and it makes me sad. I guess to the second part of my question: How would you build a community (or set of communities and identity systems) at scale that doesn't suffer from this? Such that you could point someone to it without destroying it.
I think you'd want a comment ranking system that rewarded people for voting according to the thought that went into a comment rather than whether you happen to agree with the comment.
Maybe a two dimensional voting system, one for "quality" and one for "agreement" (literally a 2D voting arrow widget? up-right means "high quality and I agree", down-right means "low quality but I agree", etc).
It would then be pretty simple to see whose "quality" and "agreement" votes don't strongly correlate, as well as who writes quality comments, and to weight their votes more heavily.
If you only had 1 dimension voting like everywhere else, then I'm not sure how to do it, but maybe it's possible.
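The 2D scheme above can be sketched concretely (this is my own illustration of the idea, with made-up function names and an arbitrary threshold): compute the correlation between a voter's "quality" and "agreement" votes, and down-weight voters for whom the two are nearly the same signal.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0  # no variance in one dimension: treat as uncorrelated
    return cov / (vx * vy)

def vote_weight(quality_votes, agreement_votes, threshold=0.8):
    """Down-weight voters whose 'quality' votes merely track their
    'agreement' votes. Votes are sequences of +1/-1 per comment.
    The 0.8 threshold is arbitrary, chosen for illustration."""
    r = abs(pearson(quality_votes, agreement_votes))
    if r < threshold:
        return 1.0          # the two axes carry independent information
    return max(0.0, 1.0 - r)  # quality axis is redundant: discount it
```

A voter whose quality votes are independent of agreement keeps full weight; one whose quality votes are just agreement votes in disguise is discounted toward zero.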
I've always thought Slashdot's old moderation system was pretty well designed. It's been years since I've been there, but as I recall, there were several categories of upvote -- Interesting, Informative, and Funny come to mind. There was a meta-moderation system to calibrate the moderators. The site was better than HN at showing only upvoted comments by default, in case you just wanted to see the highlights. (Oh, and I think it would send email for belated replies, making it better for ongoing discussion -- something I've definitely missed here at HN.)
Although the site is pretty dead now -- it doesn't have HN's advantage of a wealthy benefactor keeping it ad-free -- its original incarnation had some good ideas.
There are users who don't use the voting system the way its designer intended. The only way to enforce this would be to appoint a team of moderators who read every comment and rate it themselves. Obviously this is problematic: users might perceive the moderators as authoritarian, and the potential for abuse is high. Still, this could plausibly raise the maximum size of a high-quality community to something like 10,000 users, but that's still far from a million or more.
I think that can be solved if a majority of early users use the voting system in the way it was intended. Say there's a cluster of 75% "good" initial users who vote similarly on quality. Any new users that vote similarly to them on quality would also be classified as "good" and be given more weight.
Even if the majority of new users don't use the voting system correctly the system could be weighted more heavily to the group of existing and new users who agree on what quality is, even if it ends up being a minority of users.
Actually, even if a majority of users don't vote in the intended way from the beginning, you might still be able to handle it: you can discard users whose "quality" and "agreement" votes are most strongly correlated, which I think is the most likely way users would deviate from the intended voting system.
Now, if you had a majority of users who all voted the same way but weren't correlated with "agreement" (e.g. "vote according to the day of the week" or something), or if only very few users were voting correctly, then it might be difficult to distinguish, but that seems unlikely.
Moderation and a moat. Metafilter has survived for a very long time with (a) a $5 charge to create an account and (b) a 24-hour cooling off period, so you couldn't make an account to do a driveby comment on a thread.
It turns out that you really can't have both community and scale. The attempts age differently, but each inevitably becomes something the core membership doesn't want to be a part of, and the conditions that enabled the community's existence cease to hold. Its A people move on and its B people move in. This is true of every human group, subculture, or social phenomenon, really.
I strongly recommend reading Clay Shirky's commentary on the nascent phenomenon of "social software" back in 2003:
The Decline, the formation of cliques and factions, incidents of abuse, of intellectual violence and namecalling. The software becomes encrusted with patches and extensions, the unwritten rules are flouted regularly and the meta-rules all but forgotten. It is a time of either shrinking membership, or overwhelming growth.
The Fall, an incident, whether social or technical that makes everybody realize that things aren't like they used to be. It usually leads to a revision or addition to the software as this is the easiest thing to fix.