
No. This is why salts[0] are used.

[0] https://en.wikipedia.org/wiki/Salt_(cryptography)


This is how it should be done. But it still doesn't protect users fully, because an attacker can still try to brute-force the passwords they're interested in. It requires much more effort, though.


And compute-intensive hash functions. Computers these days are powerful enough to hashcat each individual pwd+salt if a fast hashing function is used.
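
For a concrete sketch of the salt-plus-slow-hash approach in Rust (using the argon2 crate; the parameters here are just its defaults, not a tuned recommendation):

    use argon2::{
        password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
        Argon2,
    };

    fn main() {
        let password = b"hunter2";

        // A fresh random salt per user means identical passwords hash to
        // different values, so precomputed tables are useless and each
        // password has to be brute-forced individually.
        let salt = SaltString::generate(&mut OsRng);

        // Argon2 is memory-hard and deliberately slow, which is what makes
        // per-password brute-forcing expensive even when the salt is known.
        let stored = Argon2::default()
            .hash_password(password, &salt)
            .expect("hashing failed")
            .to_string();

        // Verification: the salt and parameters travel inside the stored string.
        let parsed = PasswordHash::new(&stored).expect("bad hash string");
        assert!(Argon2::default().verify_password(password, &parsed).is_ok());
    }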


Where I live, ILLs do not work for video games because the format identification for video games is “Electronic”, and their software is programmed to suppress the request button for these items because it is interpreted as “no physical media”. I emailed the people who run the system, they said it is a known issue, and as far as I can tell that just means they aren’t going to fix it, since it has been this way for at least three years.


btrfs is OK for a single disk. All the raid modes are not good, not just the parity modes.

The biggest reason raid btrfs is not trustable is that it has no mechanism for correctly handling a temporary device loss. It will happily rejoin an array where one of the devices didn’t see all the writes. This gives a 1/N chance of returning corrupt data for nodatacow (due to read-balancing), and for all other data it will return corrupt data according to the probability of collision of the checksum. (The default is still crc32c, so high probability for many workloads.) It apparently has no problem even with joining together a split-brained filesystem (where the two halves got distinct writes) which will happily eat itself.

One of the shittier aspects of this is that it is not clearly communicated to application developers that btrfs with nodatacow offers less data integrity than ext4 with raid, so several vendors (systemd, postgres, libvirt) turn on nodatacow by default for their data, which then gets corrupted when this problem occurs, and users won’t even know until it is too late because they didn’t enable nodatacow themselves.

The main dev knows this is a problem but they do seem quite committed to not taking any of it seriously, given that they were arguing about it at least seven years ago[0], it’s still not fixed, and now the attitude seems to just ignore anyone who brings it up again (it comes up probably once or twice a year on the ML). Just getting them to accept documentation changes to increase awareness of the risk was like pulling teeth. It is perhaps illustrative that when Synology decided to commit to btrfs they apparently created some abomination that threads btrfs csums through md raid for error correction instead of using btrfs raid.

It is very frustrating for me because a trivial stale device bitmap written to each device would fix it totally, or, more intelligently, a write intent bitmap like md uses, but I had to be deliberately antagonistic on the ML for the main developer to even reply at all after yet another user was caught out losing data because of this. Even then, they just said I should not talk about things I don’t understand. As far as I can tell, this is because they thought “write intent bitmap” meant a specific implementation that does not work with zone append, and that I was an unserious person for not saying “write intent log” or something more generic. (This is speculation, though—they refused to engage any more when I asked for clarification, and I am not a filesystem designer, so I might actually be wrong, though I’m not sure why everyone has to suffer because a rarefied few are using zoned storage.)
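
To make the idea concrete, here is a toy sketch (invented for illustration, not btrfs or md code) of per-device staleness tracking: each member records the last transaction generation it committed, and anything that lags behind is kept out of read-balancing until it has been resynced:

    // Illustrative only; the names and structure here are made up.
    struct Member {
        id: u64,
        // Highest committed transaction generation this device has seen.
        last_generation: u64,
    }

    // Devices whose generation lags the newest one missed writes while they
    // were absent, so they must not serve reads until scrubbed/resynced.
    fn stale_members(members: &[Member]) -> Vec<u64> {
        let newest = members.iter().map(|m| m.last_generation).max().unwrap_or(0);
        members
            .iter()
            .filter(|m| m.last_generation < newest)
            .map(|m| m.id)
            .collect()
    }

    fn main() {
        let members = vec![
            Member { id: 1, last_generation: 1042 },
            Member { id: 2, last_generation: 980 }, // dropped out for a while
        ];
        println!("needs resync before rejoining: {:?}", stale_members(&members));
    }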

A less serious but still unreasonable behaviour is that btrfs is designed to immediately go read-only if redundancy is lost, so even if you could write to the remaining good device(s), it will force you to lose anything still in transit/memory. (Except that it also doesn’t detect when a device drops through e.g. a dm layer, so you may actually ‘only’ have to deal with the much bigger first problem if you are using FDE or similar.) You could always mount with `-o degraded` to avoid this, but then you are opening yourself up to inadvertently destroying your array due to the first problem if you have something like a backplane power issue.

Finally, unlike traditional raid, btrfs tools don’t make it possible to handle the online removal of an unhealthy device without risking data loss. In order to remove an unhealthy but still-present device, you must first reduce the redundancy of the array—but doing that will just cause btrfs to rebalance across all the devices, including the unhealthy one, potentially taking corrupt data from the bad device and overwriting good data with it, or losing the whole array if the unhealthy device fails totally during the two required rebalances.

There are some other issues where it becomes basically impossible to recover a filesystem that is very full because you cannot even delete files any more but I think this is similar on all CoW filesystems. This at least won’t eat data directly, but will cause downtime and expense to rebuild the filesystem.

The last time I was paying attention a few months ago, most of the work going into btrfs seemed to be all about improving performance and zoned devices. They won’t reply to any questions or offers for funding or personnel to complete work. It’s all very weird and unfortunate.

[0] https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg...


> The biggest reason raid btrfs is not trustable is that it has no mechanism for correctly handling a temporary device loss. It will happily rejoin an array where one of the devices didn’t see all the writes. This gives a 1/N chance of returning corrupt data for nodatacow (due to read-balancing), and for all other data it will return corrupt data according to the probability of collision of the checksum. (The default is still crc32c, so high probability for many workloads.) It apparently has no problem even with joining together a split-brained filesystem (where the two halves got distinct writes) which will happily eat itself.

That is just mind-bogglingly inept. (And thanks, I hadn't heard THIS one before.)

For nocow mode, there is a bloody simple solution: you just fall back to a cow write if you can't write to every replica. And considering you have to have the cow fallback anyways - maybe the data is compressed, or you just took a snapshot, or the replication level is different - you have to work really hard or be really inept to screw this one up.
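
In pseudocode (a sketch only, not bcachefs source; the names are made up), the decision is roughly:

    // Which strategy to use when overwriting a nocow/in-place extent.
    enum WritePlan {
        InPlace,     // overwrite the existing extent on every replica
        CopyOnWrite, // allocate fresh space, write there, then swap pointers
    }

    fn plan_nocow_write(replicas_writable: usize, replicas_total: usize) -> WritePlan {
        if replicas_writable == replicas_total {
            // Every copy receives the new data, so writing in place is safe.
            WritePlan::InPlace
        } else {
            // A replica is unreachable: an in-place write would leave the copies
            // disagreeing, so fall back to CoW and catch the absent device up later.
            WritePlan::CopyOnWrite
        }
    }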

I honestly have no idea how you'd get this wrong in cow mode. The whole point of a cow filesystem is that it makes these sorts of problems go away.

I'm not even going to go through the rest of the list, but suffice it to say - every single broken thing I've ever seen mentioned about btrfs multi device mode is fixed in bcachefs.

Every. Single. One. And it's not like I ever looked at btrfs for a list of things to make sure I got right, but every time someone mentions one of these things, I'll check the code if I don't remember (some of this code I wrote 10 years ago), and I have yet to see someone mention something broken about btrfs multi device mode that bcachefs doesn't get right.

It's honestly mind boggling.


> The last time I was paying attention a few months ago, most of the work going into btrfs seemed to be all about improving performance and zoned devices. They won’t reply to any questions or offers for funding or personnel to complete work. It’s all very weird and unfortunate.

By the way, if that was serious, bcachefs would love the help, and more people are joining the party.

I would love to find someone to take over erasure coding and finish it off.


In my case it was a last-ditch effort to get them to explain what was keeping them from making raid actually safe. Others have offered more concrete support more recently[0], I guess you could try reaching out to them, though I suppose they are interested in funding btrfs because they are using btrfs.

I share the sentiments of others in this discussion that I hope you are able to resolve the process issues so that bcachefs does become a viable long-term filesystem. There likely won’t be any funding from anyone ever if it looks like it’s going to get the boot. btrfs also has substantial project management issues (take a look at the graveyard of untriaged bug reports on kernel.org as one more example[1]), they just manage to keep theirs under the radar.

[0] https://lore.kernel.org/linux-btrfs/CAEFpDz+R3rLW8iujSd2m4jH...

[1] https://bugzilla.kernel.org/buglist.cgi?bug_status=__open__&...


Well, bcachefs has the safe, high performance problem solved.

But I really just don't know what to do if technology has become a popularity contest instead of about the technology :)


The btrfs devs are mainly employed by Meta and SuSE, and they only support single-device filesystems (I haven't looked recently into whether SuSE supports multi-device filesystems). Meta probably uses zoned storage devices, so that is why they are focusing on that.

Unfortunately I don't think Patreon can fund the kind of talent you need to sustainably develop a file system.

That btrfs contains broken features is IMO 50/50 the fault of upstream and the distributions. Distributions should patch out features that are broken (like btrfs multi-device support and direct IO) or clearly put them behind experimental flags. Upstream is unfortunately incentivised not to do this, in order to get testers.


Patreon has never been my main source of funding. (It has been a very helpful backstop though!)

But I do badly need more funding, this would go better with a real team behind it. Right now I'm trying to find the money to bring Alan Huang on full time; he's fresh out of school but very sharp and motivated, and he's already been doing excellent work.

If anyone can help with that, hit me up :)


In my opinion, clap is a textbook example of over-engineering for a single metric (UX) at the expense of all other considerations (compilation speed, runtime cost, binary size, auditability, and maintainability). It is an 18kloc command-line parser with an additional 125kloc of dependencies that takes nearly 6 seconds to compile (‘only’ 400ms for an incremental build) and which adds nearly 690KiB to an optimised release binary (‘only’ 430KiB if you strip out most of the sugar that only clap provides).

There are many other command-line parsers to choose from that do all the key things that clap does, with half the build cost or less, and most of them with 30x less binary overhead[0]. argh is under 4kloc. gumdrop is under 2kloc. pico-args is under 700loc. What is the value of that extra 10kloc? A 10% better parser?

I am not saying there is no room for a library like clap—it is, at least, a triumphant clown car of features that can handle practically any edge-case anyone ever thought of—but if I got a nickel every time I spent 15 minutes replacing a trivial use of clap with pico-args and thus reduced the binary size and compile time of some project by at least 80%, I would have at least three nickels.
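
To give a flavour of what those replacements look like, here is a minimal pico-args parser (real API, but the flags are invented for the example):

    fn main() -> Result<(), pico_args::Error> {
        let mut args = pico_args::Arguments::from_env();

        if args.contains(["-h", "--help"]) {
            println!("usage: mytool [--jobs N] [--verbose] <input>");
            return Ok(());
        }

        // Optional flag with a default, a boolean switch, and one free argument.
        let jobs: u32 = args.opt_value_from_str("--jobs")?.unwrap_or(1);
        let verbose = args.contains(["-v", "--verbose"]);
        let input: String = args.free_from_str()?;

        println!("input={input} jobs={jobs} verbose={verbose}");
        Ok(())
    }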

Just to try to pre-empt arguments like “disk space is cheap”, “compiler time is cheaper than human time”, etc.: there are no silver bullets in engineering, only trade-offs. Why would you default to the biggest, slowest option? This is the “every web site must be able to scale like Facebook” type logic. You don’t even have to do more work to use argh or gumdrop. If clap ends up having some magic feature that no other parser has that you absolutely need, you can switch, but I’ve yet to ever encounter such a thing. Its inertia and popularity carry it forward, but it is perhaps the last choice you should pick for a new project—not the first.

[0] https://github.com/rosetta-rs/argparse-rosetta-rs


According to that link you posted, many of the other argument parsers don't even generate help, only one other offers multiple interfaces, and none of the others are noted as having color or suggested fixes.

These aren't exactly esoteric features, and you're not going to get them for free. I'm happy to pay in space and compile time for clap to gain those features.

This isn't a case of the command-line app needing to be Facebook, but rather putting the exponential gains we've made in storage space to good use by providing features which should be table stakes at this point.


You’re right that there are only trade-offs in engineering. But the key to evaluating trade-offs is evaluating impact, and how long my dependencies take to compile when I first check out a repo or whether it takes 1ms or 2ms to parse my command line (if we’re even talking about something above microseconds) has no discernible impact for approximately all use-cases. If you’re making some odd CLI tool that has to run on a microcontroller with 1MB of RAM or something, fine, agonize about whether your command line parser is parsimonious enough. Otherwise you’ve abjectly failed to evaluate one of the most important trade-offs in engineering: whether something is even worth your time to think about.


> Otherwise you’ve abjectly failed to evaluate one of the most important trade-offs in engineering: whether something is even worth your time to think about.

Phew, several folks have replied to me about how it’s not worth the time thinking about these impacts at all, thus creating a paradox whereby more time has been spent thinking about writing about whether to think about it than has been spent in not thinking about it and just accepting that I wrote a reply on HN about how I feel there are more suitable command-line parsers than clap for most Rust projects! :-)

I agree that much of high-level engineering is knowing whether something is worth thinking about; in this case, I did the thinking already, and I am sharing what I know so others can benefit from (or ignore) my thinking and not have to do so much of their own. If my own personal anecdote of significantly reducing compile times (and binary sizes) by taking a few minutes to replace clap is insufficient, and if the aggregate of other problems I identified don’t matter to others, that’s alright. If reading my comment doesn’t make someone go “huh, I didn’t know {argh|gumdrop|pico-args} existed, clap does seem a little excessive now that you mention it, I will try one of these instead on my next project and see how it goes”, then I suppose they were not the target audience.

I don’t really want to keep engaging on this—as almost everyone (including me) seems to agree, command-line parser minutiae just aren’t that important—but I guess I will just conclude by saying that I believe that anchoring effects have led many programmers to consider any dependency smaller than, say, Electron to be not a big deal (and many think Electron’s fine too, obviously), whereas my experience has been that the difference between good and bad products usually hinges on many such ‘insignificant’ choices combining in aggregate.

Assuming whichever command-line parser one uses operates above a certain baseline—and I believe all of the argparse libraries in that benchmark do—it seems particularly silly to make wasteful choices here because this is such a small part of an application. Choosing wastefulness because it’s technically possible, then rationalising the waste by claiming it increases velocity/usability/scalability/whatever without actually verifying that claim because it’s ‘not worth thinking about’, seems more problematic to me than any spectre of premature or ‘unnecessary’ optimisation. I hope to find better ways to communicate this in future.


If someone cannot distinguish between the impact of choosing Electron over QT (or GTK or whatever) and the impact of choosing clap over argh, they were never going to make good engineering decisions to begin with. There’s no slippery slope here.


Hmm, isn't optimizing to save 690KiB in an optimised release binary and getting incremental builds significantly below 400ms actually much closer to the aforementioned “every web site must be able to scale like Facebook” type logic?


No, it is following the principle of YAGNI.


The “every website must scale like Facebook” mindset is premature optimization driven by hypothetical future needs, which is exactly what YAGNI advises against. But in your case, you’re investing time upfront to avoid a heavier dependency that already works and has no clear downside for the majority of users.

If you don’t actually need ultra-small binaries or sub-200ms compile times, then replacing Clap just in case seems like a textbook case of violating YAGNI rather than applying it.


> But in your case, you’re investing time upfront to avoid a heavier dependency

This is very confusing to me. What of this API[0], or this one[1], requires “investing time upfront”? With argh, you already know how to use all the basic features before you even start scrolling. These crates are all practically interchangeable already with how similarly they work.

It is only now that I look at clap’s documentation that I feel like I might understand this category of reply to my post. Why does clap need two tutorials and two references and a cookbook and an FAQ and major version migration guidelines? Are you just assuming that all other command-line parsers are as complicated and hard to use as clap?
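
For reference, the basic argh pattern from that first link looks roughly like this (recalled from its documented derive API, so treat the details as approximate):

    use argh::FromArgs;

    #[derive(FromArgs)]
    /// Reach new heights.
    struct GoUp {
        /// whether or not to jump
        #[argh(switch, short = 'j')]
        jump: bool,

        /// how high to go
        #[argh(option)]
        height: usize,
    }

    fn main() {
        let up: GoUp = argh::from_env();
        println!("jump={} height={}", up.jump, up.height);
    }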

[0] https://docs.rs/argh/latest/argh/

[1] https://docs.rs/gumdrop/latest/gumdrop/


Neither of those libraries provides cross-shell completions, or coloured output, or "did you mean" suggestions, or even just command aliases, all of which I would consider basic features in a modern CLI. So you need to invest more time to provide those features, whereas they just exist in clap.

That's not to say that clap is always better, but it is significantly more full-featured than the alternatives, and for a larger project, those features are likely to be important.

For a smaller project, like something you're just making for yourself, I can see why you'd go for a less full-featured option, but there's not enough difference between clap and, say, argh that I feel like I'd get much benefit out of argh. If you're really looking for something simple, just use lexopt or something like that, and write the help text by hand.


In engineering, principles exist to guide you toward a preferred outcome. That’s what ultimately matters, not how many times you get to write YAGNI in your commit messages.


I am wondering how much of this can be mitigated by carefully designing feature flags and making the default feature set small.


Rust invites serious disregard for resources and time. Sadly, many will accept that invitation. But some won't.


> disregard for resources and time

There is a tradeoff between compile time and running time.

This matters for programs that run more often than they get compiled.


And you disregard user experience and other developers' experience with your own custom parsing code. Acting as if there's no trade-off whatsoever in your own decision, with a holier-than-thou attitude about engineering, is beyond sad.


I could understand this if we were talking about JavaScript CLIs that require GBs of dependencies. But 690KiB is a drop in the ocean for modern computing. It is not something you should base a decision on, or even take into consideration, unless you are doing embedded programming.

690KiB is a fair compromise if Clap provided, for example, better performance or better code readability and organization. The benchmarks you provided show the performance is practically the same, which is close to using no library at all.

I did do a bit of CLI work (I try to maintain https://github.com/rust-starter/rust-starter) and will always pick clap. It just happens that even for the simplest CLIs out there, things can get hairy really fast. A good typed interface that lets me define my args as types and gives me extras on the fly (i.e. shell completion code) is worth the less-than-1MiB overhead.
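
For context, the kind of typed interface being described looks roughly like this with clap's v4 derive API (the fields here are invented for the example):

    use clap::Parser;
    use std::path::PathBuf;

    /// Example CLI; doc comments become the --help text.
    #[derive(Parser)]
    #[command(version, about)]
    struct Cli {
        /// Input file to process
        input: PathBuf,

        /// Number of worker threads
        #[arg(short, long, default_value_t = 1)]
        jobs: usize,

        /// Enable verbose output
        #[arg(short, long)]
        verbose: bool,
    }

    fn main() {
        let cli = Cli::parse();
        println!("{} ({} jobs, verbose={})", cli.input.display(), cli.jobs, cli.verbose);
    }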


That 690KB savings is 1/97000th of the RAM on the machine I develop and run most of my Rust software on.

If I ever encounter a single howto or blog post or Stack Overflow answer that tells me how to use Clap to do something 5 minutes more quickly than with an alternative, it’s paid for itself.

Amdahl’s Law says you can’t optimize a system by tweaking a single component and get more than that component’s total usage back. If Clap takes 1% of a program’s resources, optimizing that down to 0 will still use 99% of the original resources.

It’s just not worth it.


At this point you're just flexing that you have a 96GiB machine. (Average developer machines are more like 16GiB.)

But that's not the point. If every dependency follows the same philosophy, the costs (compile time, binary size, dependency supply chain) add up very quickly.

Not to mention, in big organizations, you have to track each 3rd party and transitive dependency you add to the codebase (for very good reasons).


I can write and have written hand-tuned assembly when every byte is sacred. That’s valuable in the right context. But that’s not the common case. In most situations, I’d rather spend those resources on code ergonomics, a flexible and heavily documented command line, and a widely used standard that other devs know how to use and contribute to.

And by proportion, that library would add an extra .7 bytes to a Commodore 64 program. I would have cheerfully “wasted” that much space for something 100th as nice as Clap.

I’ve worked in big organizations and been the one responsible for tracking dependencies, their licenses, and their vulnerable versions. No one does that by hand after a certain size. Snyk is as happy to track 1000 dependencies as 10.


> No one does that by hand after a certain size

This is not true


96? It sounds more like 64 to me, which is probably above average but not exactly crazy. I've had 64 GB in my personal desktop for years, and most laptops I've used in the past 5 years or so for work have had 32 GB. If it takes up 1/4700 of memory, I don't think it changes things much. Plus, argument parsing tends to be done right at the beginning of the program and completely unused again by the time anything else happens, so even if the parsing itself is inefficient, it seems like maybe the least worrisome place I could imagine to optimize for developer efficiency over performance.


> I got a nickel every time I spent 15 minutes replacing a trivial use of clap with pico-args and thus reduced the binary size and compile time of some project by at least 80%, I would have at least three nickels.

Hahaha, awesome. Thanks for the pico-args recommendation.

It supports the bare minimum.

I sure would like deriving-style parsing and --help auto-generation.

I think deriving-style parsing unavoidably adds build time and complexity.

But it could be done without the dependency explosion.

There's a list of options here:

https://github.com/rosetta-rs/argparse-rosetta-rs#rust-arg-p...

Among the ones you recommend, argh supports deriving, auto-generates --help and optimizes for code size. And its syntax is very comparable to clap, so migrating is very easy. gumdrop seems very similar in its feature set (specifying help strings a little differently), but I can't find a defining feature for it.


> Why would you default to the biggest, slowest option?

Because it's not very big, nor very slow. Why wouldn't you default to the most full-featured option when its performance and space usage is adequate for the overwhelming majority of cases?


> Why wouldn't you default to the most full-featured option when its performance and space usage is adequate for the overwhelming majority of cases?

This is the logic of buying a Ford F-150 to drive your kids to school and to commute to the office because you might someday need to maybe haul some wood from the home improvement store once. The compact sedan is the obviously practical choice, but it can’t haul the wood, and you can afford the giant truck, so why not?


> This is the logic of buying a Ford F-150 to drive your kids to school and to commute to the office because you might someday need to maybe haul some wood from the home improvement store once.

No, it's like buying the standard off the shelf $5 backpack instead of the special handmade tiny backpack that you can just barely squeeze your current netbook into. Yes, maybe it's a little bigger than you need, maybe you're wasting some space. But it's really not worth the time worrying about it.

If using clap would take up a significant fraction of your memory/disk/whatever budget then of course investigate alternatives. But sacrificing usability to switch to something that takes up 0.000000001% of available disk space instead of 0.0000001% is a false economy, the opposite of practical; it feels like a sister phenomenon to https://paulgraham.com/selfindulgence.html .


Well, you hit the nail on the proverbial head. The compact will handle 99% of people's use-cases; the truck will handle 100%. People don't want the hassle of renting something or paying for help for the 1% of cases their compact wouldn't handle.

Believe it or not, I'm with you; I live somewhere where it's sunny all year round, so I get around with a motorcycle as my primary transportation year-round and evangelize them as the cheap alternative to people struggling with car-related payments. But no, my motorcycle isn't going to carry a 2x4. Someone who cares about supporting that, even if they only need to do so exceptionally rarely, is gonna buy a truck. And then they won't have the money to buy a motorcycle on the side.


Not sure why you’re being downvoted. I also don’t like oversized motor vehicles, but I think the parable is sound:

If the effort of switching out when you need the last 1% is higher than whatever premium you will pay (compilation time/fuel cost), especially as a small ongoing cost, people will likely choose it.

I’m not saying this as if it’s wisdom for the future, only that we can observe it today with at least a handful of examples.


These are not even remotely equivalent scenarios. If I want to remove clap as a library, I just remove it. If I buy an F-150, I have spent a lot of money that is mostly gone, so replacing it is significantly more expensive. It also burns more fuel.


These decisions accumulate, and then all of a sudden you have a project that takes ten minutes to build for almost no benefit.


Maybe. If your build is too slow, fix it. But pre-emptively microoptimising your build time is as bad as pre-emptively microoptimising anything else. Set a build time budget and don't worry about it unless and until you see a risk of exceeding that budget.


This is an incredible exaggeration. The vast majority of these projects don't even approach it. I use clap in a number of projects and they compile in just seconds.


argh is not meant to be used with Cargo; it doesn't even have the capability to display the version.


Convenience always wins. If we want smaller, more purposefully built dependencies, then we need better tooling that makes those choices convenient.


Clap also has great dev UX, so I wouldn't count maintainability as an expense.


I use clap everywhere for this, but I'm not sure that I agree that it has a "great" experience. It does the job and I've no need to reach for another tool. But it can be frustrating sometimes.


Well, I can only speak for myself


This is exactly what happens on VOGONS, but it relies on the community reporting violations, and I suspect this isn’t happening consistently. I’ve reviewed too many reports where multiple people had been breaking the rules for days before anyone bothered to flag a single post.

I don’t know what to do about how toxicity has become so normalised in online spaces that people don’t even bother to flag it. I don’t know how many times I’ve had to tell new members that VOGONS is not Reddit or Twitter and to not import antisocial behaviour from those places into this place. It is like fighting a losing battle with an invasive species.

Frankly, I also don’t know how to strike the right balance when it comes to long-time members who break the rules infrequently but consistently. Knowing that a forum regular is probably going to break the rules again in the future, but not for several months, doesn’t make for an obvious solution to me. So I try to do the best I can to remind people to do better, and hope the time-until-relapse goes up. But I am sure this negatively impacts other community members since they will see the same person doing the same thing again and wonder why nothing is being done about them, not noticing that the last incident was three or six or twelve months ago.


The best idea imho is the assumption that people have some intelligence. I know, it's a utopia. But, well... Give them a way to block users. Help law enforcement when something actually criminal happens. For the rest, just let it happen, and let people deal with it, as in real life. There is no other choice. Making the masses even more stupid than they are today (by hiding reality from them) is not the way to go. It never was. Let people interact with each other on their own. There cannot be a higher instance which 'manages' that in some way. Censorship isn't better when you call it 'moderation'. And don't declare everything harassment or worse; it's just a facet of communication to also communicate unfortunate things. People have been veeeery sensitive plants for some time now.

And again, for actual crime, it's not the forum moderators at all who should lead some actions. That would really be problematic.


> Frankly, I also don’t know how to strike the right balance when it comes to long-time members who break the rules infrequently but consistently.

Suspend them for a month the first time, ban them outright the second. Sounds like they're not interested in listening to reason. Also pour encourager les autres, etc.


Three-strikes-style rules like this make for easy decision-making, but they don’t make for good decision-making. They eliminate curiosity about the root problem, reject the reality of how humans learn (it is almost never linear), and usually end up backfiring catastrophically at some point. Compliance by fear does not make for open and healthy communities.

The fundamental attribution error makes us believe the problem must be that this kind of person is “not interested in listening to reason”, but I can tell you that the ones who aren’t interested in listening make that abundantly clear when you ask them to stop. They are not the ones I struggle with. The problem cases are those who act in good faith but have trouble regulating their emotions, infrequently enough to not be an obvious menace, but consistently enough that I recognise their names.


> They eliminate curiosity about the root problem

Sure but they also give consistency, avoid the sheen of "you can behave badly if you do it infrequently", and avoid the kind of loophole lawyering you often get when rules aren't consistently and stringently applied.

> usually end up backfiring catastrophically at some point.

Having been in many forums where bad behaviour[0] was not rooted out at source immediately and forcefully, I can say from my experience that also ends up badly. Perhaps not for the people in charge and their favoured brethren, I'll grant you.

[0] Including my own on occasion, I am ashamed to say with hindsight.


Why so much emphasis on rules? If someone is clearly doing something wrong, you don't need a rule violation. If someone is not clearly doing something wrong, then don't ban them. This way, there is no possibility for rules lawyering.


The goal of an online community is to be a space where people interact, so banning someone from interacting is a failure to succeed at the goal, but so is allowing someone to remain to the point where others choose to leave voluntarily.

The angst comes from the grey area between “do nothing” and “ban forever”. How does one strike a balance between the health of the overall community against the need of the individual to be able to take risks and make mistakes? A simple binary doesn’t cut it.


You give them warnings, time-limited bans, etc. I wasn't trying to say it has to be binary, I was saying the rules are given too much emphasis by some people when they really don't matter at all.

E.g. I got banned from Stack Overflow for deleting my own comments in protest of their new policies, which is explicitly allowed by the rules, but hinders their new project of selling site dumps to AI companies, therefore it merits punishment, regardless of what the rules say. And it's not unique to SO - every community is like this - so we should probably acknowledge it.


> I don’t know what to do about how toxicity has become so normalised in online spaces that people don’t even bother to flag it.

You find a new moderator who won't stand for it. They start escalating moderation action, and the wider moderation team follows their example

If you're at the point where your community is so toxic that you can't find a moderator... you need a ban wave. Good luck


Sorry for the confusion. I was referring to other online spaces like Twitter and Reddit where the discourse today is so toxic that it has desensitised people into accepting or even encouraging abusive behaviour. Name-calling is a typical example. It’s something that should register as abusive but doesn’t for most people any more, because in these very high-profile online spaces it’s not just normative, it’s actively rewarded with likes and upvotes. It’s probably impossible to not have that seep into most people’s baseline expectations for online conduct.

An army of dictatorial tone-policing moderators won’t create a safe space free from abuse, just a different kind of toxic space. So it is a very hard problem, not solvable by just kicking out a couple of bad apples. We are all swimming in a sea that causes the apples to rot.


... or you just finally realize that interaction beyond your own bubble is more complicated, harder, and more tiring, but not toxic. People are very, very sensitive plants nowadays. It doesn't help. Don't try to hide people even more from reality; you have done that for long enough already, obviously. Don't make them even more stupid by declaring everything that challenges their simple, tiktok-driven minds to be toxic harassment. We already have enough of those zombies around who, in a few decades, are supposed to be funding my pension.


Forum admin here. No posts were reported or removed and the author hasn’t posted since last April. I don’t know what’s going on here and have reached out to them to try to understand and assist. The community standards explicitly prohibit off-site harassment but unless someone tells us, there’s nothing I can do about it.


By the time someone has entered the workforce it should not be necessary to be ‘checked on’ by some surrogate parental figure.

In a functioning workplace, everyone agrees to do their work because it is part of the social contract of working on a team. They don’t need to be told what to do. If someone is falling behind, they’ll talk to other team members and work together to get back on track. You are there to help your coworkers, and they are there to help you. Someone doesn’t have to be your manager to make sure projects run smoothly; everyone can take turns in this role if they feel like it’s something they’re interested in doing and are competent in that role.

It’s only through the distorted lens of corporate ladder climbing and backstabbing departmental politics that the idea arises that you’ll just hire untrustworthy people and then beat them into submission by making a workplace into a prison.


>”By the time someone has entered the workforce it should not be necessary to be ‘checked on’ by some surrogate parental figure.”

Hire some recent grads and then make that statement. Not all require checking in on, but entry-level, junior folks often do. Don’t ignore them and don’t hold it against them. You, I’m sure, had someone checking in on you when you first started. Not in an “Are you at your desk?” way but in an “Are you able to complete the tasks? If not, how can I help you?” way.

Sadly, the last few years of remote work and layoffs have made it so companies get rid of those who are “just doing the job” and keep those who are “always on the job”. Brutal.


> In a functioning workplace...

This is assuming a lot. Whose responsibility is it to get to a "functional workplace" and whose responsibility is it to fix it if it's not functional?

And assuming that there is a "social contract" that everyone in the workplace buys into is also assuming a lot.


Sounds disconnected from reality. And it’s not the workplace’s fault, it’s human nature’s fault.


It's amazing how much people think orgs like the church, government, and business are some naturally malevolent or benign thing.

Just because they have been given some set of rights and social power.

We've seen that how they organize and adapt is what classifies them as "good".


On the one hand, yes, of course. On the other hand, on articles about "double employment" you'll see comments basically arguing that the company deserves it if your manager isn't micromanaging you and tracking metrics closely enough to spot that you're phoning it in.


And what about all the bs artists, slackers, divas, or assholes on the team? What if no one on the team wants to play “manager”, especially without the title? What if no one on the team has the skills to be an effective manager? What if my colleague Bob decides to play the role of a manager but I don’t like him? What if two people want to manage and disagree on things? Who exactly is supposed to have the authority to tell people what to do, and then hold them accountable?

What you described smells like communism: it might work on a small scale in some isolated cases, but usually doesn’t.


This isn’t realistic thinking. The above doesn’t exist anywhere.


The above definitely exists all over. The group usually hires based on personality. Generally these organizations are very centered and know what qualities they are looking for in a team mate.


Small teams generally self-organise under a single leader, e.g. sports teams, and it is often the other members of the team who self-check, as they have to take up the load for anyone who does not perform.


Unfortunately, few projects exist where these pods can remain isolated.


There is a podcast miniseries called The Divided Dial[0] that answers the question of why conservative radio dominates in America, but very briefly, based on my understanding of their reporting:

1. The elimination of the Fairness Doctrine meant that radio stations no longer had a legal obligation to provide a fair reflection of differing viewpoints on matters of public importance;

2. The elimination of national ownership caps in the 1996 Telecommunications Act enabled a rapid and extreme consolidation of radio stations;

3. These new national radio conglomerates slashed costs by vertically integrating production, creating fewer shows, and rebroadcasting them to all their owned stations;

4. The concept of “format purity” spilled over from music radio into talk radio, causing commercial talk stations to switch from showcasing a variety of opinions to airing one political perspective all day;

5. The conservative talk radio format was perceived as less risky by radio executives, and so that was the format that commercial talk radio switched to.

Air America may have eventually succeeded despite its many other flaws—except they owned no radio stations of their own, so there was no place for them to go in this hyper-consolidated, format-pure commercial market.

[0] https://www.wnycstudios.org/podcasts/otm/divided-dial


It’s not making up history. There is a documentary[0] on the history of React where the people involved in its creation and use at Facebook described it this way:

> Bolt [a predecessor of React] was basically more or less Facebook's implementation of a client-side MVC. [It was] not a tool belt, it was truly an application development framework. Something designed and meant to build complicated interactive rich apps and was being used to build pretty complicated very real products at Facebook at the time. […] As the product itself got more complex and as we added more engineers to the team, we didn't hit a wall but it started to get really, really hard to make changes. And that was around the time that Jordan [Walke, creator of React] was on the ads team and he's like ‘I wonder, there's got to be a better way’. […] Jordan was a product engineer at the time, working on ads, and ads has one of the most complicated pieces of UI across all of Facebook at the moment. On the ads team they were hitting the limits of what you can do without React complexity wise. […] Jordan had a lot of very interesting ideas around how you could take what we had done in Bolt and make it easier for it to scale with people's ability to understand large applications.

As the GP said, React was explicitly designed to solve the problem of having many engineers writing large, complex, client-side applications. It was not designed for building simple web sites.

[0] https://www.youtube.com/watch?v=8pDqJVdNa44


Your quote doesn't state anything to support what the OP said. It doesn't even say what you said it says.

Here's Pete Hunt talking about static webpages with React, in a best practices talk from 2013: https://youtu.be/x7cQ3mrcKaY?t=1528

There is no need to imbue meaning into quotes or history when there is a clear, well-defined timeline with striking examples of how React was used to generate static websites all the way from its initial release.


This doesn’t make sense to me. You can perform the exact same fingerprinting by looking at which image in a srcset is being requested.

