
FYI, this crashes the tab in the latest iOS 18.5 Safari and Chrome.


Perpetually reloads in Brave on iOS (never resolves).


Confirmed, same for me.


Took out Waterfox mobile too, but it was strangely okay after a reload.


There is a major difference at the call site.

try/catch has significantly more complex call sites because it affects control flow.
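
For example, a minimal TypeScript sketch contrasting the two call-site shapes (fetchUserThrows/fetchUserResult are hypothetical stand-ins for some fallible operation):

  type Result<T> = { ok: true; value: T } | { ok: false; error: string };

  // Hypothetical stand-ins for a fallible operation.
  function fetchUserThrows(id: number): string {
    if (id < 0) throw new Error("bad id");
    return "user" + id;
  }

  function fetchUserResult(id: number): Result<string> {
    return id < 0 ? { ok: false, error: "bad id" } : { ok: true, value: "user" + id };
  }

  // Exception style: control flow can jump out of the happy path.
  try {
    console.log(fetchUserThrows(42));
  } catch (e) {
    console.error(e);
  }

  // Result style: the error is just a value inspected at the call site.
  const r = fetchUserResult(42);
  if (r.ok) console.log(r.value);
  else console.error(r.error);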


That is just a syntactic sugar difference; a language could give you exactly the same call-site structure if it wanted to.


Can someone point to a real life example or tutorial/guide of the ECS architecture he proposes?

I'd like to learn more about how to implement this.


The two biggest commercial engines have their own implementations, in addition to what others have posted.

Unity ECS (has a pretty good general introduction to ECS) https://docs.unity3d.com/Packages/com.unity.entities@1.3/man...

Unreal https://dev.epicgames.com/documentation/en-us/unreal-engine/...


While others will likely hand you useful links or videos, I will attempt another way of explaining it.

When it comes to organising your code in an ECS-esque fashion, it is much closer to normalising a database, except you are organising structs instead of tables.

With databases, you create tables. You would have an Entity table that stores a unique Id, and a table representing each Component, each of which would have an EntityId key, etc.

Also, each table is essentially a basic array. It is also about planning memory allocation up front, rather than calling 'new' or 'delete' in typical OOP fashion. Maybe you can reason about the memory needed at startup. Of course, this depends on the type of game.. or business application.

An 'Entity' is useless on its own. It can have many different behaviours or traits. In a game, for example, you might have an entity that has physics, is collidable, is visible, etc.

Each of these can be treated as a 'Component' holding data relevant to it.

Then you have a 'System', which can be a collection of functions to initialise the system, shut it down, update it, fetch the component record for a given entity, etc., all of which manipulate the data inside the Component.

Some Components may even require data from other Components, which you would communicate by calling the system methods.

You can create high-level functions for creating each Entity. Of course, this is a very simplified take:

  var entity1 = create_player(1)
  var boss1 = create_boss1()

  function create_player(player_no) {
    var eid = create_entity();
    physics_add(eid);             // add to physics system
    collision_add(eid);           // add to collision system
    health_add(eid, 1.0);         // add health/damage set to 1.0
    input_add(eid, player_no);    // input setup - more than 1 player?
    camera_set(eid, player_no);   // camera setup - support split screen?

    return eid;
  }

  function create_boss1() {
    var eid = create_entity();
    physics_add(eid);
    health_add(eid, 4.0);         // 4x more than player
    collision_add(eid);
    ai_add(eid, speed: 0.6, intelligence: 0.6);  // generic AI for all

    return eid;
  }


So: global functions configure functionality defaults, and the functionality may be reconfigured later.
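
To sketch a 'System' in the same spirit (a TypeScript sketch, all names hypothetical): each system owns its own table keyed by entity id, and updating it is just a loop over that table:

  // The physics system's own "table", keyed by entity id.
  const physics_records = new Map<number, { x: number; y: number; vx: number; vy: number }>();

  function physics_add(eid: number) {
    physics_records.set(eid, { x: 0, y: 0, vx: 0, vy: 0 });
  }

  function physics_update(dt: number) {
    // Entities never registered with this system are simply never touched.
    for (const rec of physics_records.values()) {
      rec.x += rec.vx * dt;
      rec.y += rec.vy * dt;
    }
  }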


1. You have entities, which may just be identifiers (maybe a u64 used as an index elsewhere) or some more complex object.

2. You have components, which are the real "meat and potatoes" of things. These are the properties or traits of an entity, the specifics depend on your application. For a video game or physics simulator it might be velocity and position vectors.

3. Each entity is associated with 0 or more components.

4. These associations are dynamic.

5. You have systems which operate on some subset of entities based on some constraints. A simple constraint might be "all entities with position and velocity components". Objects lacking those would not be important to a physics system.

In effect, with ECS you create in-memory, hopefully efficient, relational databases of system state. The association with different components allows for dynamically giving entities properties. The systems determine the evolution of the state by changing components, associating entities with components, and disassociating entities from components.

The technical details on how to do this efficiently can get interesting.
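
For instance, a minimal sketch of that relational view (TypeScript; the component "tables" are just maps keyed by entity, and a query is a join):

  type Entity = number;
  type Vec2 = { x: number; y: number };

  const positions = new Map<Entity, Vec2>();
  const velocities = new Map<Entity, Vec2>();

  // "SELECT entity, pos, vel FROM positions JOIN velocities USING (entity)"
  function* withPositionAndVelocity(): Generator<[Entity, Vec2, Vec2]> {
    for (const [e, v] of velocities) {
      const p = positions.get(e);
      if (p) yield [e, p, v];
    }
  }

  // The physics system evolves state for exactly the entities the query matches.
  function physicsSystem(dt: number) {
    for (const [, p, v] of withPositionAndVelocity()) {
      p.x += v.x * dt;
      p.y += v.y * dt;
    }
  }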

Compared to more typical OO (exaggerated for effect), instead of constructing a class which has a bunch of properties (say implements some combination of interfaces) and manually mixing and matching like:

  Wizard: Player
  FlyingWizard: Wizard, Flying
  FlameproofWizard: Wizard, Flameproof
  FlyingFlameproofWizard: Wizard, Flameproof, Flying
Or creating a bunch of traits inside a god object version of the Wizard or Player class to account for all conceivable traits (most of which are unused at any given time), you use the dynamic association of an entity with Wizard, Flying, and Flameproof components.

So your party enters the third floor of a wooden structure and your Wizard (a component associated with an entity) casts "Fly" and "Protection from Elements" on himself. These associate the entity with the Flying and Flameproof components (and potentially others). Now when fireball is cast and the wizard is in the affected area, he'll be ignored (by virtue of being Flameproof) while everything around him catches fire, and when the wooden floor burns away the physics engine will leave him floating rather than falling like his poor non-flying, currently on fire Fighter compatriot.
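
A toy sketch of that dynamic association (TypeScript, hypothetical names; component membership is just set membership):

  type Entity = number;
  const flying = new Set<Entity>();
  const flameproof = new Set<Entity>();

  const wizard: Entity = 1;
  const fighter: Entity = 2;

  // Casting the spells merely associates the entity with more components.
  flying.add(wizard);
  flameproof.add(wizard);

  // The fireball "system" affects every entity in the area
  // except those currently associated with the Flameproof component.
  function fireball(area: Entity[]) {
    for (const e of area) {
      if (!flameproof.has(e)) console.log(`entity ${e} catches fire`);
    }
  }

  fireball([wizard, fighter]); // only the poor fighter burns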


It's a bit of a long read, but I think the best introduction is still this [0] and the comments were here [1]. Yes, it's presented in the context of rust and gamedev, but ECS isn't actually specific to a particular programming language or problem domain.

[0]: https://kyren.github.io/2018/09/14/rustconf-talk.html

[1]: https://news.ycombinator.com/item?id=17994464


They’re very common in video game programming and visual effects and uncommon elsewhere. I enjoyed this article, though it’s still about using ECS in a simulation / computer graphics context.

https://adventures.michaelfbryan.com/posts/ecs-outside-of-ga...


Basically it is programming against composable interfaces, like COM, Objective-C protocols, and anything else along those lines, but sold in a way that anti-OOP folks kind of find acceptable, while feeling they aren't using all that mumbo-jumbo bad OOP stuff some bad Java teachers gave them in high school.

Most of them tend to even ignore books on the matter, like "Component Software: Beyond Object-Oriented Programming" [0], instead taking some game studio's approach to ECS as the genesis of it all.

[0] - https://openlibrary.org/books/OL3564280M/Component_software


None of that is what people mean by ECS. ECS is a poor man's relational database. Do you think SQL is OOP too?


People without a CS background say a lot of things, like confusing ECS with data-oriented design.


I have a PhD in computer science, and I'm also able to stop for two seconds and understand what ECS means as used by game devs. It is pointed out in the talk that the design pattern was used in Sketchpad, of all things, and reinvented in 1998. They call that pattern ECS. It is unfortunate that the name is overloaded, but that doesn't mean that when they say ECS they're referring to the ECS you are, and it is not somehow a gotcha that that other ECS is still very much OOP.

They are not talking about that ECS.


Then let's sort this out: point to a GitHub repo of your choice for a game engine using ECS, and let's discuss the implementation from a CS point of view, regarding the programming language features used for the implementation.

Given that both of us have the required CS background, it should be kind of entertaining.


Sure, the Bevy engine is the one I'm most familiar with:

- An entity is a 64 bit integer wrapped in a struct for typesafety (newtype pattern). This is a primary key.

- A component is a struct that implements the "component" trait. This trait is an implementation detail to support the infrastructure and is not meant to be implemented by the programmer (there is a derive macro). It turns the struct into a SoA variant, registers it into the world object (the "database") plus a bunch of other things. It is a table.

- A query is exactly what it sounds like. You do joins on the components and can filter them and such.

- A system is just code that does a query and does something with the result. It's basically a stored procedure.

It is a relational database.

EDIT: forgot to link the relevant docs: https://docs.rs/bevy/latest/bevy/ecs/component/trait.Compone.... It is really critical to note a programmer is not expected to implement the methods in this trait. Programmers are only supposed to mark their structs with the derive macro that fills in the implementation. The trait is used purely at compile time (like a c++ template).

There's also flecs which doesn't rely on OOP-ish traits in its implementation: https://www.flecs.dev/flecs/

Either way, it doesn't matter if OOP is used in the implementation of an ECS, just as it doesn't matter if MySQL uses classes and objects to implement SQL.


Here's something that I think is in the direction Casey advocates for without being full-blown ECS:

https://gamedev.net/blogs/entry/2265481-oop-is-dead-long-liv...

As I posted on the video itself: https://news.ycombinator.com/item?id=44611240



Thank you all for your recommendations!


Look for data-oriented design [0] (not to be confused with domain-driven design) in addition to ECS.

[0] https://www.dataorienteddesign.com/dodbook/


I think what people really mean when they say "This can't be tested" is:

"The cost of writing these tests outweighs the benefit", which often is a valid argument, especially if you have to do major refactors that make the system overall more difficult to understand.

I do not agree with test zealots that argue that a more testable system is always also easier to understand, my experience has been the opposite.

Of course there are cases where this is still worth the trade-off, but it requires careful consideration.


This is the case.

I did a lot of work on hardware drivers and control software, and true testing would often require designing a mock that could cost a million, easy.

I've had issues with "easy mocks" [0].

A good testing mock needs to be of at least the same Quality level as a shipping device.

[0] https://littlegreenviper.com/concrete-galoshes/#story_time


I've had a lot of success writing driver test cases against the hardware's RTL running in a simulation environment like Verilator. Quick to set up and very accurate; the only downside is the time it takes to run.

And if you want to spend the time to write a faster "expensive mock" in software, you can run your tests in a "side-by-side" environment to fix any differences (including timing) between the implementations.


It's cool to learn about Verilator. I've been proposing that our HW teams give us simulations based on their HW design for us to target with SW, but I am so out of the loop on HW development that I can't push them in this direction (they'll just give me "that's interesting, but it's hard", which frustrates me to no end).

Can you perhaps do a write-up of what you've done and how "slow" it was, and if you've got ideas to make it faster?


The hardest part is toolchains, for two reasons. First, Verilator doesn't have complete SV language support, although it's gotten better. Second, hardware has a tendency to accumulate some of the most contorted build systems I've ever seen and most hardware engineers don't actually know how to extricate it.

Once it's actually successfully run through Verilator, it's a C++ interface. Very easy to integrate if your sim already has a notion of "clock tick."


I like to put it on its head: a proper fake for any component should be designed by the authors of that component: they can provide one with the same behaviour relatively cheaply.

With hardware, I try to ask for simulated HW based on the designs, but I usually don't ever get it.


If I'm working next to the hardware group, I generally write my own. This allows me to make progress on drivers/firmware before hardware is available. If it's an ASIC, we can even spend a little time making it run in the DV environment: they get vectors for free, and overall we get more confidence that the firmware/driver is going to work on delivered silicon.

If something doesn't work on actual hardware, we're then in a really good place to have a conversation. Clearly the simulator differs from the actual design, and we can just focus on sussing that out. Otherwise the conversation can be a lot more difficult and can devolve into 'hardware's broken' vs 'software person doesn't have a clue'.


> A good testing mock needs to be of at least the same Quality level as a shipping device.

Unfortunately, I disagree: it needs to be the same quality as the device. If your mock is reliable but your device isn't, you have a problem.


Good point.

That was actually sort of the problem I had, in my story.


Sounds like you’re responding to the title without listening to the presentation. He literally says this in the intro.


It's often shorthand for "this can't be unit tested" or "this isn't dependency injected", even though integration tests are perfectly capable of testing non-DI code.

The author's claims that we should isolate code under test better and rely more on snapshot testing are spot on.
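
For anyone unfamiliar, a snapshot test in e.g. Jest looks roughly like this (TypeScript; renderDashboard is a hypothetical unit under test):

  import { test, expect } from '@jest/globals';

  // Hypothetical unit under test.
  function renderDashboard(user: string): string {
    return `<div class="dashboard"><h1>Hello ${user}</h1></div>`;
  }

  test('dashboard markup is stable', () => {
    // The first run records a snapshot file; later runs diff against it.
    // Intentional changes are accepted by re-running with `jest -u`.
    expect(renderDashboard('ada')).toMatchSnapshot();
  });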


> rely more on snapshot testing are spot on

I never quite liked "snapshot testing", which I think goes by a better name as "golden master testing" or similar anyway.

The reason for the dislike is that it's basically a codified "Trust me bro, it's correct" without actually making clear what you are asserting with that test. I haven't found any team that used snapshot testing and didn't also need to change the snapshots for every little change, which obviously defeats the purpose.

The only things snapshot testing seems to be good for, is when you've written something and you know it'll never change again, for any reason. Beyond that, unit tests and functional/integration tests are much easier to structure in a way so you don't waste so much time reviewing changes.


The purpose of snapshot testing is to not have observable changes if you think there shouldn't be observable changes. To that end, a pattern I like is:

- Don't store/commit the snapshot and have an "update" command. Your CI/CD should run both versions of the software and diff them. That eliminates a lot of the toil.

- You should have a completely trivial way to mark that a given PR intends to have observable changes. That could be a tag on GitHub, a square-bracket thing in a commit message, etc. Details don't matter a ton. The point is that the test just catches if things have changed, and a person still needs to determine if that change is appropriate, but that happens often enough that you should make that process easy.

- Culturally, you should split out PRs which change golden-affecting behavior from those which don't. A bundle of bug fixes, style changes, and a couple new features is not a good thing to commit to the repo as a whole.

The net effect is that:

1. Performance improvements, unrelated features, etc are tested exactly as you expect. If your perf enhancement changes behavior, that was likely wrong, and the test caught it. If it doesn't, the test gives you confidence that it really doesn't.

2. Legitimate changes to the golden behavior are easy to institute. Just toggle a flag somewhere and say that you do intend for there to be a new button or new struct field or whatever you're testing.

3. You have a historical record of the commits which actually changed that behavior, and because of the cultural shift I proposed they're all small changes. Bisecting or otherwise diagnosing tricky prod bugs becomes trivial.


>I haven't found any team that used snapshot testing and didn't also need to change the snapshots for every little change, which obviously defeats the purpose

I don't see how this even defeats the point, let alone obviously.

If a UI changes I appreciate being notified. If a REST API response changes I like to see a diff.

If somebody changes some CSS and it changes 50 snapshots, it isn't a huge burden to approve them all, and sometimes it highlights a bug.


> I dont see how this even defeats the point, let alone obviously.

You generally don't want to have to change all the tests for every change, particularly implementation details. Usually when people do snapshot testing of for example UI components, they serialize the entire component then assert the full component is the same as the snapshot, so any change requires the snapshot to be updated.

> If somebody changes some CSS and it changes 50 snapshots, it isnt a huge burden to approve them all and sometimes it highlights a bug.

Let's say person A initially created all these snapshots and person B did a change that shows 50 snapshots changed; whose responsibility is it to make sure the snapshots are correct? Person A doesn't have the context of the change, so less ideal. Person B doesn't know the initial conditions Person A had in mind, so also less ideal.

When you have unit tests and functional tests you can read through the test and know what the person who wrote it wanted to test. With snapshots, you only know that "This was good", sometimes only with the context of the name of the test itself, but no assertions you can read and say "Ah, X still shows Y so all good".


>You generally don't want to have to change all the tests for every change

You generally don't want every change to result in a lot of work. If changing a lot of tests means looking at a table of 30 images and diffs, scanning for problems and clicking an "approve button", that isn't a lot of work though.

>Lets say person A initially created all these snapshots, person B did a change that shows 50 snapshots changed, who's responsibility is it to make sure the snapshots are correct?

The person who made the change.

>Person B doesn't know the initial conditions Person A had in mind, so also less ideal.

Yes they will because the initial conditions also had a snapshot attached. If your snapshot testing is even mildly fancy it will come with a diff too.

>When you have unit tests and functional tests you can read through the test and know what the person who wrote it wanted to test. With snapshots, you only know that "This was good",

If you made a change and you can see the previous snapshot, current snapshot and a diff and you never know if the change was ok then you probably shouldn't be working on the project in the first place.

And no, the same isn't necessarily true of unit or functional tests - I've seen hundreds of unit tests that assert things about objects and properties which are tangentially related to the end user and come with zero context attached and I have to try and figure out wtf the test writer meant by "assert xyz_obj.transitional is None". With a user facing snapshot it's obvious.


No regular developer will carefully review 50 changed snapshots. They'll stop doing a proper job of it after the third or fourth that looks like the same trivial unimportant change and miss the bug found by snapshot 37.

I do agree that a lot of people write bad tests, meaning the test name does not properly describe what the test is supposed to be about, so that I can check the test implementation and assertions against intent. They also, like you say, assert on superfluous things.

The problem with snapshots is that it's doing the exact same thing. It asserts on lots of completely unimportant stuff. Unlike proper unit tests however I can't make it better. In a unit test I can make an effort to educate my peers and personally do a good job of only asserting relevant things and writing individual tests where the test name explains why "transitional has to be None".

Snapshots are a blunt no-effort tool for the lazy dev that then later requires constant vigilance and overcoming things humans are bad at (like carefully checking 50 snapshots) by many different humans, vs a unit test that I can make good and easy to comprehend and check by putting in effort once when I write it. A good one will also be easy to adjust when needed, if it comes time to actually change an assertion.


>No regular developer will carefully review 50 changed snapshots.

If you follow a habit of making small, incremental changes in your pull requests (which you should anyway), those 50 snapshots will generally all change in the same way: a glance across all of them suffices to see that (for example) a box moved to the left.

>The problem with snapshots is that it's doing the exact same thing. It asserts on lots of completely unimportant stuff.

A problem more than made up for by the fact that rewriting it, eyeballing it and approving it is very quick.

>Unlike proper unit tests however I can't make it better

If the "then" part of a unit test "expects" a particular error message or json snippet, for instance, it's a giant waste of time to craft the expected message yourself if the test can simply snapshot the text from the code and you can eyeball it to see if the diff looks correct.

I have written thousands of unit tests like this and saved a ton of time doing so. I've also worked with devs who did it your way (e.g. craft a whole or part of the json output and put it in the assert) and the only difference was that they took longer.

>and personally do a good job of only asserting relevant things

If, for example, you're testing a dashboard and the spec is a hand scribbled design that shows roughly what it looks like, all the other ways to assert the relevant things are vastly more expensive and impractical to implement.

In practice most devs (and you too, I expect) will simply leave the design untested, and design regressions caused by, say, some gnarly CSS issue would go undetected except by manual testing.

>Snapshots are a blunt no-effort tool for the lazy dev

95% of devs don't use snapshot tests because they're super sensitive to flakiness and because ruthlessly eliminating flakiness from code and tests requires engineering discipline and ability most devs don't have.

For those who can, it massively accelerates test writing.


Regrettably we still have some snapshot tests in our code base, yes. I cringe every time one goes red and I'm supposed to check them. Like you say, I eyeball them and after the fourth that's the same pattern I give up and just regenerate them. Meaning they might as well not be there coz they won't catch any actual bugs for me.

We try to replace them every time we come across one that needed adjusting actually. Quick is bad here. And yes they're flaky as hell if you use them for everything. Even a tiny change to just introduce a new element that's supposed to be there can change unrelated parts of snapshots because of generated names in many places.

Asserting on the important parts of some JSON output is not generally more expensive at all. You let the code run to generate the equivalent of a snapshot, then paste it into the assertion(s) and adjust as necessary. Yes, it takes more time than a snapshot, but optimizing for time at that end is the wrong optimization: you're optimizing one dev's time expenditure while making both the reviewers' and later devs' and reviewers' time expenditure larger (if they want to do a proper job instead of eyeballing and then YOLOing it).

As I see it, devs using snapshots are the opposite of a 10x dev. It's being a 0.1x dev. Thanks but no thanks.


>We try to replace them every time we come across one that needed adjusting actually. Quick is bad here. And yes they're flaky as hell if you use them for everything. Even a tiny change to just introduce a new element that's supposed to be there can change unrelated parts of snapshots because of generated names in many places.

If you can't keep the flakiness under control then yeah, they'll be worse than useless because they will fail for no discernible reason at all.


Oh, the reasons are discernible. I call it flaky when you make an unrelated change and the snapshots change; you go check why, and all you can do is facepalm. What you and I call "unrelated" may be different, such as when I make a CSS change that simply affects some generated class names and a bunch of snapshots fail. This will be worse in code bases with lots of reusable CSS, of course, i.e. your blast radius for flakiness will be much larger the more CSS reuse and the more snapshot tests you have. Ours is very controllable, but only because we're doing the right things (such as reducing snapshot use).

That's when you start cursory looks at the first few changes and then just regenerate them, which means they will never find any actual bugs coz you ignore them.

It's "the boy who cried wolf" basically.


> scanning for problems and clicking an "approve button", that isn't a lot of work though.

But you're actually mentally listing requirements for each one of those snapshots you check, which hopefully is the same list as the previous person who ran it had, but who knows?

> Yes they will because the initial conditions also had a snapshot attached. If your snapshot testing is even mildly fancy it will come with a diff too.

Maybe I didn't explain properly. Say I create a component, and use snapshot testing to verify that "This is how it should look". Now next person changes something that makes that snapshot "old", and the person needs to look at the diff and new component, and say "Yeah, this is now how it should look". But there is a lot of things that are implicitly correct in that situation, instead of explicitly correct. How can we be sure the next person is mentally checking the same requirements as I did?

> If you made a change and you can see the previous snapshot, current snapshot and a diff and you never know if the change was ok then you probably shouldn't be working on the project in the first place.

It seems to work fine for very small and obvious things, but when people make changes that affect a larger part of the codebase (which happens from time to time if multiple people are working on a big codebase), it's hard to implicitly understand what's correct everywhere. That's why unit/functional tests are so helpful: they're telling us what results we should expect, explicitly.

> I've seen hundreds of unit tests that [...] With a user facing snapshot it's obvious.

I agree that people generally don't treat test code with as much thought as other "production" code, which is a shame I suppose. I guess we need to compare "well done snapshot testing" with "well done unit/functional testing" for it to be a fair comparison.

For that last part, I guess we're just gonna have to agree to disagree, most snapshot test cases I come across aren't obvious at all.


>Maybe I didn't explain properly. Say I create a component, and use snapshot testing to verify that "This is how it should look". Now next person changes something that makes that snapshot "old", and the person needs to look at the diff and new component, and say "Yeah, this is now how it should look". But there is a lot of things that are implicitly correct in that situation, instead of explicitly correct. How can we be sure the next person is mentally checking the same requirements as I did?

This is a problem that applies equally, to an even greater extent, to code and all other tests. There it is dealt with via code reviews.

The great thing about snapshots which doesn't apply to code and functional tests is that they can be reviewed by UX and the PM as well. In this respect they are actually more valuable - PM and UX can spot issues in the PR. The PM doesn't spot when you made a mistake interpreting the requirements that is only visible in the code of the functional test or when it's something the functional test didn't check.

>It seems to work fine for very small and obvious things, but for people make changes that affect a larger part of the codebase (which happens from time to time if you're multiple people working on a big codebase), it's hard to needing to implicitly understand what's correct everywhere

It should not be hard to ascertain if what you see is what should be expected. E.g. if text disappears from a window where there was text before and you only made a styling change then that's obviously a bug.

>I guess we need to compare "well done snapshot testing" with "well done unit/functional testing"

Snapshots are not a replacement for functional tests they are a part of good functional tests. You can write a functional test that logs in and cheaply checks some arbitrary quality of a dashboard (e.g. the div it goes in exists) or you can write a functional test that cheaply snapshots the dashboard.

The latter functional test can give confidence that nothing broke when you refactor the code underneath it and the snapshot hasn't changed. The former can give you confidence that there is still a div present where the dashboard was before. This is significantly less useful.


Are you really going to be reviewing all those 50 snapshots carefully?

The bound of testing on the "other side" is to test just enough not to increase the maintenance burden too much.


In the talk, he also mentions passing a flag which would actually update the snapshots/golden files if needed.


Testing is a skill. The more you do it, the less expensive it becomes.


The main cost isn't writing the tests themselves but the increased overall system complexity. And that never goes down.


> but the increased overall system complexity

I think this happens because people don't treat testing code as "production code" but as something else. You can have senior engineers spending days on building the perfect architecture/design, but when it comes to testing, they behave like juniors and just write whatever comes to mind first, and never refactor things like they would "production code", so it grows and grows and grows.

If people could spend some brain-power on how to structure things and what to test, you'd see the cost of the overall complexity go way down.


It's "skill issue" or "git gud".


Here's what I never got about monorepos:

Imagine you have an internal library and also two consumers of that library in the repo. But then you make breaking changes to the library but you only have time to update one of the consumers. Now how can the other consumer still use the old version of that library?


The whole point of a monorepo is to force you to update all of the consumers, and to realize that breaking changes are expensive.

The two monorepo ways to do this:

1. Use automated refactoring tools that now work because it's one repo

2. Add the new behavior, migrate incrementally, then remove the old behavior (sketched below)
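
A sketch of option 2 for a library function (TypeScript; parseConfig is hypothetical): the old entry point keeps working, expressed in terms of the new one, until the last consumer migrates:

  // New API: takes a structured options object.
  export function parseConfig(opts: { path: string; strict: boolean }): string {
    return `${opts.path} (strict=${opts.strict})`;
  }

  /** @deprecated Use parseConfig(); delete once all in-repo consumers migrate. */
  export function parseConfigLegacy(path: string): string {
    // Old behavior, expressed in terms of the new API.
    return parseConfig({ path, strict: false });
  }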


Both of those work in a polyrepo. You need a tools team to make it happen though, just like a monorepo needs a tools team. The tools needed are different, but you still need them.


With enough tooling, a monorepo or a polyrepo environment look exactly the same. Those articles are "Look. This is a good way to organize your code", not something that tells you one of those is better than the other.


Most monorepos imply that all first-party code is only available at one version. Polyrepos usually allow first-party code to depend on old versions of other first-party code.


In a polyrepo it is more common that the update simply happens, now repo A depends on v1 and repo B depends on v2, then a year has passed and repo A doesn't even remember they still depend on an old insecure library.


That is a downside of a polyrepo that you will need to figure out how to mitigate.

It doesn't matter if you go monorepo or polyrepo, you will have issues as your project grows. You will need to mitigate those issues somehow.


In a polyrepo it is common to say I depend on this specific git SHA of that other repo. In a monorepo it is weird and unheard of to say I depend on this specific SHA of the current repo. It's a matter of defaults.


In a polyrepo you need to figure out how/when to update those SHAs. This is one of the hard things about polyrepos. Monorepo of course doesn't need that concept because you cannot depend on some previous state of the repo.


> force you to update all of the consumers, and to realize that breaking changes are expensive.

...and the article points out correctly that it's a lie anyway, but at least you can find all the consumers easily.


The article is not correct on that point. At Google we would create release branches to fix the monorepo at a predictable point for testing, and only cherry-pick what we need during the release process.

I'm sure others do similarly, because there is no way you would allow arbitrary changes to creep in the middle of a multi-service rollout.


The multi-service staggered rollout is the reason the article is correct, unless you are tolerating contract mismatches somehow other than in the code. I'm not at Google, so I won't guess.


Thanks for the info!

Seems like a big restriction to me.


That's the neat part. They don't. Either the broken consumer updates their use, you update it for them to get your change shipped, or you add some backwards compatibility approach so your breaking changes aren't breaking.


Thanks for the info!

Seems like a big restriction to me.


It is the only sane thing to do. Allowing everyone to use their own fork means that when a major bug is found, you have to fix thousands of forks. If the bug is a security zero-day, you don't have time.


Couldn't you just leave the other consumer at the old release (presumably well tested, stable)?

I don't see how being forced to upgrade all consumers is a good thing.


> I don't see how being forced to upgrade all consumers is a good thing.

It forces implementers of broad or disruptive API or technical changes to be responsible for the full consequences of those decisions, rather than the consumers of those changes who likely don't have context. People make better choices when they have to deal with the consequences themselves.

It also forces the consequences to be incurred now as opposed to 6 months later when a consumer of the old library tries to upgrade, realizes they can't easily, but they need a capability only in the new version, and the guy who made the API change has left the company for a pay raise elsewhere.

As a plus, these properties enable gigantic changes and migrations with confidence knowing there aren't any code or infrastructure bits you missed, and you're not leaving timebombs for a different project's secret repo that wasn't included in the gigantic migration.

Bluntly, if you can't see why many people like it (even if you disagree), you probably haven't worked in an environment or technical context where mono vs poly truly matters.


Maybe, but each one is now another thing that you need to fix if a major issue is found. If it is only a few releases, not a problem, but it can get to hundreds and that becomes hard, particularly if the fix can't be cherry-picked cleanly to other branches.


You don't make breaking changes. You provide the new API and the old API at the same time, and absorb the additional complexity as the library owner. Best case scenario everyone migrates to the new API and eventually remove the old one. This sounds onerous, but keep in mind at a certain scale there is no one commit in production at any given time. You could never roll out an atomic breaking change anyway, so going through this process is a reflection of the actual complexity involved.


Thank you for the response!

Genuine question: if you can't have one commit in production at any given time, what advantages for the monorepo remain?


> if you can't have one commit in production at any given time

That might be possible in a simple library + 1 consumer scenario if you follow the other commenters' recommendation to always update library + consumer at once. But in many cases you can't, anyway, because you're deploying several artifacts or services from your monorepo, not just one. So while "1 commit in production at any given time" is certainly neat, it wouldn't strike me as the primary goal of a monorepo. See also this discussion about atomicity of changes further up: https://news.ycombinator.com/item?id=44119585

> what advantages for the monorepo remain?

Many, in my opinion. Discoverability, being able to track & version all inter-project dependencies in git, homogeneity of tooling, …

See also my other comment further up on common pains associated with polyrepos, pains that you typically don't experience in monorepos: https://news.ycombinator.com/item?id=44121696

Of course, nothing's free. Monorepos have their costs, too. But as I mention in the above comment and also in https://news.ycombinator.com/item?id=44121851, a polyrepo just distributes those costs and makes them less visible.


Thank you very much for the info!


There's a second option, not mentioned by the sibling comments so far: Publish the library somewhere (e.g. an internal Nexus) and then have the other consumer pin the old version of the library instead of referring to the newest version inside the monorepo. Whether or not this is acceptable is largely a question of ownership.


Thanks for your response! Don't you lose the main advantage of the monorepo then, since you can no longer rely on the fact that all the code at any one commit in history fits together? Or are there other significant advantages that still remain?


See my response to your other comment in a sibling thread: https://news.ycombinator.com/item?id=44120854


Not OP, but this is Noel Berry, one of the creators of Celeste, a very successful and incredible indie game.


Didn't know that... silly me. In any case, I think the question is still relevant: is the secret just "if you build it, they will come"?

I suspect it isn't, since the competition is fierce; there has to be something beyond making a good game.


I don’t know Berry’s game dev history, but Celeste designer Maddy Thorson has been making fantastic 2D platformers since, essentially, the very beginning of Western indie games. (Jumper 1 was released in 2004!)

In other words, I think Celeste required decades of preliminary work and industry presence to end up as good as it did. If you build it for a long-ass time, maybe they will eventually come!


The secret extra sauce is called marketing :)


I've long been searching for a concise example of "good" inheritance. Can you recommend one?


Fantastic post, very well written.


I don't understand this position. Why do people want private companies to decide what's allowed and what isn't? Shouldn't lawmakers, and by extension the people (at least in democratic countries) decide what speech is allowed and what isn't?

The number of people using social media makes it the town square of the present. We should treat it as such.


Because it works, and is a requirement to have a functional platform at all.

First, commercial speech is certainly speech. If you don't restrict anything at all, your platform will drown in a constant deluge of spam. So on that basis alone there's something you must remove if you want to have any kind of conversations happening, and therefore you can't be an actual absolutist.

Second, there's illegal and just icky content. Posting pictures of poo isn't illegal. It can be "speech" after a fashion. That will also quickly result in people leaving.

Third, a free-for-all is only tolerable to a small segment of the population. HN for instance only works because it's moderated and curated. You can have something like r/worldpolitics which embodies the "no moderation" ideal. It's a subreddit where the moderators do the absolute minimum Reddit requires. Meaning it's mostly porn. And what's the point in having more than one of those? They're all more or less the same.


The problem is the algorithms that surface people's posts to those not friends with or following them. That's what turns a basic and true "social media" into a hyper-competitive arena which encourages a race to the bottom, to make the cheapest content possible to grab eyeballs.

"Social media" should not be a hyper-concentrated collection of advertising targets. That single aspect has probably caused 99% of "social medial ills.


Quippy, but off the cuff:

- I don’t go to my present town square(s) socially because it is full of a-social behavior. Same reason to avoid certain bars or clubs, prefer certain parks, or why some are wary of public transit.

- I don’t feel a right to decide the vibe of how a business curates its space. My bakery, coffee shop, local library, etc. all curate a space with an opinion. I don’t feel I have standing to assert that my preferences should dominate their choices.

As an aside, businesses are also an extension of the people; the best ones just tend not to be mode-collapsed.


In case you haven't noticed, the people running the government now shouldn't be trusted to run a McDonald's franchise, much less decide what speech is allowed and what isn't. Ditto for the voters who elected said government.


> The number of people using social media makes it the town square of the present. We should treat it as such.

Then try yelling racial slurs in your local town square and see how that goes for ya.

> I don't understand this position. Why do people want private companies to decide what's allowed and what isn't? Shouldn't lawmakers, and by extension the people (at least in democratic countries) decide what speech is allowed and what isn't?

Why shouldn't I be allowed to police what can be said on my private property? If you are in my home yelling racist stuff, I will ask you to leave, and make you leave if you refuse. I don't see why I shouldn't be able to do the same on my platform. If you want to yell racial slurs, you can go to Truth Social or whatever.


In the case where the platform is a monopoly then yes, the government should play a role. Small forums should be left alone.

