How to Survive Your Project's First 100k Lines (verdagon.dev)
214 points by swah on May 4, 2023 | 81 comments



Regarding assertions:

The sections "If you think you're using enough assertions, you're wrong" and "Bonus: Sanity Checking" can be considered a subset of "Design by Contract"[0].

Regarding:

  However, comments can become out-of-date, and you might
  not be lucky enough to stumble across the right comment
  before you embark on a refactoring adventure.
Not if comments are treated as first-class citizens, just as important as the code changes that would otherwise make them outdated. Better still is to ensure unit tests are "executable documentation"[1].
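Rust's doc tests are one concrete form of this: the example lives in the comment and `cargo test` compiles and runs it, so it can't silently rot. A toy sketch (crate and function names made up):

    /// Returns the sum of the even numbers in `xs`.
    ///
    /// ```
    /// assert_eq!(mylib::sum_even(&[1, 2, 3, 4]), 6);
    /// ```
    pub fn sum_even(xs: &[i64]) -> i64 {
        xs.iter().filter(|&&x| x % 2 == 0).sum()
    }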

Regarding "Prefer end-to-end tests":

In my experience, this leads to unit test atrophy, degrading system verification to rely solely on integration tests ("end-to-end tests"). This has often resulted in higher coupling and always resulted in interminable build times.

0 - https://en.wikipedia.org/wiki/Design_by_contract

1 - https://softwareengineering.stackexchange.com/questions/1546...


End to end tests by definition are more loosely coupled.

They are better as executable documentation - mapping to actual user flows and able to generate docs (videos, screenshots, etc.).

They're slower, but it's easy enough to parallelize and cache.

Unit tests will often just mirror the behavior of the code. This is not useful.


I think it depends a lot on the nature of the code.

I prefer to use unit tests to probe tricky parts of the code. It's basically like a recyclable debugging session. Sometimes with an actual debugger attached, but otherwise just an assertion is fine. If I've had reason to investigate the code once there's probably a reason to keep that inspection hatch because bugs rarely come alone.

Unit tests for the sake of unit tests are kinda stupid though, and can actually border on a sort of technical debt. If no matter what you change you need to fix/disable a bunch of unit tests, failing tests become expected, and it becomes very easy to fall into just fixing the assertions or disabling the test without actually thinking about why it fails. Not even mentioning how much this sort of codebase will bog down the rate of change.

That said, it's a tricky balance. You can definitely have both too many and too few unit tests.


It does depend on the nature of the code. My rule of thumb: integration code -> integration tests work better; algorithmic/calculation code -> unit tests work better.

A debugger attached to an end to end test is probably better if you're testing integration code. With a unit test you might have to do a lot more extra work to, e.g., construct an exact replica of a request object or a database object or something else that comes "for free" when you precisely mimic the software being used in the real world.

That said, if mimicking the real world in your code is as simple as 'assert my_function("some input")' and checking the output then yea, unit tests are way quicker.


> it's easy enough to parallelize and cache

I beg to differ - that’s a whole new layer of bugs and race conditions. And I’m extremely sceptical you can speed them up enough that I can run them every couple of seconds on every micro-change.

It is very hard (perhaps impossible) to write good unit tests around code that wasn’t designed to be unit tested. This is why the advice is to write the test first, even if you then throw away half the tests.

In my tests, I’m looking for something that tells me where the problem is quickly. End-to-end tests can’t do that - they will highlight that a happy path is broken and thus the build is broken. They usually fail to cover many unhappy paths, and they usually highlight a symptom but not the root cause.


> Unit tests will often just mirror the behavior of the code. This is not useful.

Can you go into that more? I definitely agree that end-to-end tests give some really strong benefits and verification, but I'm a bit lost on the "mirror the behavior of the code" comment. I like that if I write a function and then have associated unit tests for all of the cases I'm considering (which probably won't hit everything at first), I will have confidence that that code functions as intended, and if something changes that makes that code function differently (when not intended), we'll know right away via the test suite.

End to end tests give you more "real world" testing, but they probably won't give you a good granular view of the issue. I don't see what the issue is with unit tests and your "mirror the behavior" comment, but I'm interested if you have the chance to clarify.


They're called mimetic tests: https://hitchdev.com/tropes/mimetic-tests/

For self contained algorithmic or calculation code with simple inputs and outputs, TDD with unit tests really shines. If your app is mostly lower level code that does a lot of complex "thinking", especially with simple inputs and outputs, then you probably should have mostly unit tests.

Integration/E2E testing that kind of algorithmic/calculation code leads to tests that could take 100x longer to run - unnecessarily.

However, it's very common to have very little code of this type. If an app is very integration code heavy (it sends/receives messages, does CRUD, calls APIs, sends emails, etc.) then the opposite is true - you only want to have integration / end to end tests.

Unit testing that kind of code leads inevitably to mimetic tests.

End to end tests can make it hard to get a good granular view of an issue. I agree with this. However, this can be solved by integrating high quality debugging tools into the end to end tests - automatically grabbing screenshots, logs, external API calls, etc.


> Unit tests will often just mirror the behavior of the code. This is not useful.

Then they are characterisation tests, not unit tests. Which may or may not act at the unit level.

As for whether characterisation tests are useful; they're extremely useful for, well, characterising existing code behaviour.

But they suck at detailing pre-code specification; that's what unit tests are for (at least at the unit level).

Now I'm not naive enough to believe most people adhere to this distinction appropriately. It's true that people sometimes treat X as Y in practice. But that's one thing; it's a totally different thing to complain about X because people do Y instead.


Agree a lot. Just want to say, since the article also touched on this, that parallelization of test execution can again introduce nondeterministic failures...


> End to end tests by definition are more loosely coupled.

Can you explain how?


They make no assumption as to the implementation of the thing, they test that the observable behaviour is correct and that’s it. You could refactor the whole codebase without touching your tests: the tests are decoupled from the runtime code


I view it differently. I view all my code through end to end tests and use very few mocks. If I need mocks/fakes for things that aren't external to the code (eg I/O to network services), it usually means I haven't designed my code modular enough, and I refactor. The reason I view it that way is that all fakes (of which mocks are a subset) mean you're reimplementing your real code with a reimplementation specific to your test suite, which means your test suite isn't providing coverage, and now you have two codebases to maintain hidden in one, which grows very difficult and time consuming (the velocity of any complex refactor massively slows down).

The tests themselves I view as a layered thing where each layer of the program has matching tests. So as I add a primitive, I'll add tests for that piece + how it integrates into the next layer (if there are observable changes). Sometimes primitives will cross-cut all the way into the end API/UI. In those cases there are potentially new tests I have to add at every layer to make sure it's stable all the way down.

This approach has a strong advantage aside from maintaining only one codebase: your tests are much easier to refactor in coordination with the code changing along the way in smaller ways. It also makes it easier to find and figure out bugs, because if a problem isn't visible at a layer, it means either you're missing tests immediately below that layer OR your integration in that layer is broken. As you move up the layers, it's often more complicated to debug issues at much lower levels. So if a very high level test fails, you can do a search through lower levels, writing reduced test cases that reproduce your observation. That can be a good complement to reasoning about what the failure might be from symptoms (which is often faster if you understand your codebase, but trickier problems that you can't figure out and are stuck on can be tackled through a linear search, or maybe even a bisection search if you're clever).

The problem with most tests is that they tend to capture "happy paths" you've thought about. Negative tests (which make sure error scenarios / failures behave the way you expect) are hugely important, but they have a similar characteristic. I like to use property check tests and fuzzing techniques more broadly to improve my coverage when I can (they can get expensive in the test suite, so it can sometimes be worth balancing running them on every commit vs post-merge in the background).

Still. It’s a problem. Mutation testing feels like another important tool. I’ve not had success getting Stryker (mutation testing) running on any TypeScript project I maintain which is frustrating (I got it running kind of but it seems to struggle/I haven’t figured out how to generate a proper report because I’m using the workerd version of Miniflare where the code runs out-of-process). But mutation testing as a concept seems like a fantastic way to figure out code coverage and where you might have gaps that traditional testing / fuzzing (property checking is a subset) isn’t catching. We all have blindspots and mutation testing is a far more defensible position that your test coverage is good vs traditional techniques we’re taught in school like branch and line coverage (which in my opinion are extremely poor and misleading ways to measure code coverage).


A comment with first class citizen status is an assert?



Some languages (e.g. D) have contract blocks built into the language, which are even better (or, well, different).


I've never seen an assert explain why


static_assert in C takes a second argument: a string literal message. From C23 onwards, it is optional.


Java has this too.

  assert isSorted(array, start, end) : "The input array for binary search is expected to be sorted";
That said, I think in many cases asserts tend to be fairly self-evident; they typically just list the invariants, e.g.

  assert start <= end;
  assert (end - start) % stepSize == 0;
  // some logic
  assert pos - start >= 0;
  assert pos - end < 0;
  assert (pos-start) % stepSize == 0;
  return pos;


I use it in Python all the time:

    assert x.shape == y.shape, f"{x.shape} != {y.shape}"


This should be

  # Confirm that x.shape == y.shape
  assert x.shape == y.shape, f"{x.shape} != {y.shape}"  ## x.shape was not y.shape
/s


> Use a language with good compile speeds.

This is an interesting suggestion to me. Honestly, I consider a lot of other things, like type safety and ergonomics of the language, before I think about compile speed, if I think about it at all.

However, that's not to say it's irrelevant. Perhaps I should be thinking about it more. I did notice that they didn't suggest any particular language with this comment. Compile speed also seems difficult to compare between languages, since it's hard to make a genuine apples-to-apples comparison.


I think a fast compile is essential.

I've worked on projects with slow and fast compiles, and man, it's a completely different world. The way you think about problems and approach them, the way you approach adding new code, it's just completely different.

Slow compile times makes the entire process of adding, changing, or debugging the code slower. It makes it painful, frustrating, and more confusing. It's nonlinear. Doubling the time to compile a project can quadruple the time to solve problems, in my experience.


> Doubling the time to compile a project can quadruple the time to solve problems, in my experience.

My experience has been somewhat opposite.

Long compile times lead you to put more thought into your program, roughly (obviously I'm way over generalizing here; it's actually the language itself that causes this, but I'm waving my hands and positing that there's a rough correlation between compile speeds and language formality). Fast compile times mean you're not thinking about how your problem is modeled formally and are just kinda gluing things together and seeing what happens at runtime.

While quick compile times may get you a system that kinda works more quickly than a language with slow compile times, in my experience you have to iterate much, much more before the problem is truly solved. When I use slower languages I find myself iterating fewer times before the problem is usefully solved. Not to mention I spend far less time fixing bugs.

Maybe it's all a wash, but slow languages definitely do not result in a quadratic slowdown in productivity...


I dunno. I don't think this should be down-voted into the ground. But I think it's more nuanced.

Quick compile times may in some circumstances enable a sort of mentality of trial-and-error that I think is relatively common, but I don't think is a good approach to software development.

On the other hand, I don't think slow compile times are necessary to prevent this. Slow builds arguably more often lead to frustration and context switching. That isn't good either.

I think the best option is to have fast builds but also have enough discipline to not write code by trial and error.


The trade-off here assumes a "barely working prototype" is undesirable and only the bug-free end result is the goal.

Most projects I’ve worked on though, especially in their first few 100k lines of code didn’t really know exactly what problem are they trying to solve. Having something that kinda works in front if stakeholders / customers has been invaluable in actually getting the feedback and not wasting dev cycles on features nobody actually wanted.

Though granted “going back and doing things properly” has been as hard to achieve as “have complete requirements upfront”, so what do I know :-D


We should go back to punch cards, then. We'll obviously have much higher quality software.


You joke, but bugs in your punchcard program were a royal PITA, so people spent more time making sure they were correct before going through the work of punching them out. I've heard this from a first hand source.


Quality is not lack of bugs. That's a component, maybe.


Yep, but they still had bugs, and often the minimum turn-around time was 24 hours. So it does not sound like a net positive


> I consider a lot of other things like type safety and ergonomics of the language before I think about compile speed if at all

I believe "good compile speed" is hinting at having shorter feedback loops in your workflow, which can significantly boost your productivity.

It's something you'll do often and can have hidden costs. Especially in cases where it takes half an hour to build the software.


The number of engineers I've met that don't know how to run a single test or a single test file, and instead wait minutes for thousands of tests to run, is mind-boggling to me. Like, how do you get anything done?


productivity, or perceived productivity?


productivity.


I believe it's mostly just a direct jab at C++. Almost every other language has fast or fast enough compilation speeds.


Scala can be very slow.

I’ve also worked on Java / Spring Boot projects that seem to take 10s of seconds to compile, and even longer to start up.

And TypeScript too.

I suspect the latter two have simple options which could improve things significantly, but I haven’t dug in.


Java is very much what you make of it. Spring Boot tends to pull in everything and the kitchen sink, and the atrocious build and start times are a fairly logical consequence.

Vanilla Java can be very quick to start and build, especially if you do incremental builds.

In my search engine project, which is ~50kloc, I have incremental builds that can be as fast as 1-2 seconds including running all pertinent tests[1], and the start time for the stuff that doesn't, like, populate a 12 GB hashmap in memory upon start-up is also single digit seconds.

[1] A clean rebuild is 37 seconds; 108 seconds with a full suite of tests. That's fairly tolerable IMO. I've heard horror stories of Java builds that have had compile times anywhere between 15 and 45 minutes.


Rust sends its regards.


Everyone says this, but I've yet to see it actually be a problem.


That's because it isn't, unless you have a very exceptional case or maybe you make a mega-crate. I was rocking a severely outdated processor and it was slightly annoying, but not that bad at all. And once the dependencies were built it was a non-issue. With a more modern CPU the times are minuscule.


Right that's been my experience too. The initial build of dependencies takes a bit but that's never a problem for me personally.


So does Kotlin


Swift too...


JavaScript too


Any language that uses an LLVM backend is dog slow.


C++ actually has pretty competitive compilation speeds, barring metaprogramming, up to a point.


The TypeScript/Angular project I dayjob on sometimes takes minutes to compile, depending on whatever optimization settings the last Angular update has changed/messed up. (The compilation also frequently spins out of control until it OOMs.)


Hah. We have shared build boxes at work, which are EC2s that are too small to share anyway, and if two Angular builds run on the same box at once then it's guaranteed that the machine is going to die. It's very interesting how things start to fail strangely when the machine is out of RAM.


There’s also modularization to compartmentalize recompilation, reducing the overall time to rebuild for any given change. CI can cache compiled modules.

This then kicks the can down the road to the linker, and there’s been some work in this area as well with projects like mold.


The fact is, the complexity in most valuable systems is in interacting with other, external systems.

Your system can be as type safe as it wants. It will never be type safe when tied to the systems of other companies, no matter what they say, or how they say their APIs are supposed to work (assuming you have documented APIs at all).

Being able to iterate quickly and test those connections is far more valuable to most businesses. At a certain scale, it probably flips, but at that point you're a huge company.


I'd say, use a toolchain that gives you rapid iteration to the point that you try to keep up with it, rather than the other way around. For instance a hot reloading workflow has been common for web apps for some years, but in native mobile we've only recently gained the capability via the preview canvas for SwiftUI. That makes a huge difference even though Swift is a big step down in compile speed from its predecessor.


Yeah I agree. I think what matters more is IDE speed (which includes syntax checking and partial compiles to some degree). Then comes incremental compilation, which should be fast or at least exist, and then compile times in general.

Also: compile times are not a property of just languages. It also depends how you use them. And languages with few guards and guarantees (like Go) of course compile faster - but you pay the price later.


I often spend hours writing code without ever compiling - once you are familiar with the language and how the compiler behaves, that should be the norm.

When I then compile, it's okay if it takes 10 seconds or 10 minutes.


It might be that you didn’t encounter this as a problem before.

In its earlier days, changing a single character in a typescript codebase implied a trip to the kitchen.

Now TS compilation is decently fast but intellisense became the bottleneck.


You're all using languages that need to be compiled?


This is one of the better articles (of this type of content) that I've read in a while.

Pushing implicit logic towards types, and keeping non-determinism/impurities at the edges of your program by asserting around the edges, is a massive win.

The only language that encourages this type of programming (as in, it's idiomatic across the ecosystem) whilst still keeping compile times down is OCaml.
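A tiny sketch of that style, language aside (the SortedVec newtype is made up): validate once at the edge, carry the invariant in a type, and let the pure core assume it.

    // The constructor is the only way in, so the invariant always holds.
    struct SortedVec(Vec<i64>);

    impl SortedVec {
        fn new(mut v: Vec<i64>) -> Self {
            v.sort();
            SortedVec(v)
        }
    }

    // Core logic can assume sortedness without re-checking it.
    fn median(v: &SortedVec) -> Option<i64> {
        v.0.get(v.0.len() / 2).copied()
    }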


Another tip that helped my compiler: a compiler usually has passes which are optional (optimization), or things that can be done one way or another - like a pass that can come before or after another pass, or inputs that can be processed from left to right or right to left, etc. It is quick and easy to make flags for all these variants. Then run the integration tests with different flags.

Another tip: a validation pass. If you have a (complex) data structure like an IR, create a validation pass, which becomes the specification of what exactly is legal in this data structure. This is like an assert on steroids.
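A minimal sketch of the idea, with a made-up toy IR: the pass just walks the structure and asserts every invariant the rest of the compiler relies on.

    // Toy IR: an instruction may reference results of earlier instructions.
    enum Inst {
        Const(i64),
        Add(usize, usize), // operand indices into the instruction list
    }

    // The validation pass doubles as the executable spec of the IR.
    fn validate(insts: &[Inst]) {
        for (i, inst) in insts.iter().enumerate() {
            if let Inst::Add(a, b) = inst {
                // Operands must be defined before they are used.
                assert!(*a < i && *b < i, "inst {i} uses an undefined operand");
            }
        }
    }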

And... have a bigger use case at hand. Integration tests are great, but also have a real world case that can be run.


I like the idea of writing a validation pass, but I don't want to pay a cost at runtime. This may be too much to ask, but are there any programming languages which let you write a validation pass that is enforced at compile time?


Move the invariants on the IR into the type system representing the IR. Then no validation pass involved. Requires an adequate type system.
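A sketch of that flavor, again with a made-up toy IR: if operands can only be constructed as already-computed values, "no nested expressions" needs no runtime check at all.

    // Operands can only be constants or prior results; the types make
    // a nested, unflattened expression unrepresentable.
    enum Value {
        Const(i64),
        Temp(u32), // id of a previously computed instruction
    }

    enum Inst {
        Add(Value, Value),
        Mul(Value, Value),
    }

Invariants that relate separate parts of the structure (like "Temp ids refer to earlier instructions") are much harder to encode in types, though, so in practice you usually still want a validation pass for those.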


Good advice in here. The focus on assertion-oriented programming is interesting. Erlang doesn't have assertions available in application code directly, but I think structure-enforced assignments are really similar in practice.


Yes! The reasoning behind 'let it crash' is just this: assume happy path destructuring (effectively asserts) and let the supervisor(s) decide what happens to a process that crashes.

Now you have very explicit expectations and contracts on data, yet the code is very straightforward and easy to reason about. Layering OTP on top means you can now decide how you react to one-off errors and when you're drowning in bad data.

I love it.


About the first piece of advice: if we're talking about Rust, it might be a good idea to turn a number of assertions into debug_assert, so that they don't affect performance in prod. Of course you almost always do want to fail in prod too, but sometimes you KNOW that a broken assert will HAVE TO be triggered in tests.

Another way to think about this is that there are three kinds of errors:

- recoverable (return Err)

- unrecoverable (debug assert)

- unrecoverable and tricky (assert)
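A minimal sketch of the three kinds side by side, using a hypothetical fixed-capacity stack:

    struct Stack { items: Vec<i64>, cap: usize }

    #[derive(Debug)]
    struct StackFull;

    impl Stack {
        fn push(&mut self, x: i64) -> Result<(), StackFull> {
            // Recoverable: a full stack is a normal outcome for the caller.
            if self.items.len() == self.cap {
                return Err(StackFull);
            }
            self.items.push(x);
            // Unrecoverable, but any test that pushes will exercise it:
            // debug_assert! is stripped from release builds.
            debug_assert!(self.items.len() <= self.cap);
            Ok(())
        }

        fn pop(&mut self) -> i64 {
            // Unrecoverable and tricky: worth a real assert! in prod.
            assert!(!self.items.is_empty(), "pop on empty stack");
            self.items.pop().unwrap()
        }
    }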


Assertions are often free or nearly so. On modern out-of-order processors, an assertion is a fully predictable branch whose result nothing else depends on, so the pipeline is unaffected; the test itself is often completely free because there's spare execution capacity.

If the assert tests something expensive, I will usually switch to a debug assert.


I'm close to reaching 100k on the current greenfield project I'm working on. We have the right tools in place: fast build times, CI/CD, conventions, documentation, code reviews. The most challenging thing I've found is to keep my teammates on the right path (we are 2 seniors and 1 mid level dev). The constant ignoring of the rules and going over the same code review comments drives me nuts.


> Constant ignoring of the rules

I don't know your situation, but I'd be interested to know if these are agreed upon rules that people just get lazy about (really common, I've been there) or what. If it's forgetfulness or it's tedious to follow, I'd consider looking into setting up tooling to enforce the rules a bit better. Just throwing things out there, I have no idea what the situation looks like.


> If it's forgetfulness or it's tedious to follow, I'd consider looking into setting up tooling to enforce the rules a bit better.

It might also make sense to question "The Rules" from time to time. Sometimes you have rules in place that have grown old and are really tedious to follow. If it becomes a chore to adhere to the rules in a project, it's fair to question the rules and drill down to the issue. Doesn't work for every situation, though - just an idea.


Perhaps you can use linting to enforce some (or all) of the rules instead of having the same discussion in code reviews.


There is some very good advice in here for when you reach medium size. I found myself nodding along, saying of course, happy to see someone write all this up so well, good reference...

Until the fast turnaround time item came around, and I knew exactly why my current work was bothering me. I guess sometimes you need to be whacked around the ears with a checklist of stuff you should know before you realize something's off.


I'd love to do this in Python, but unfortunately we can't.

assert is supposed to be cheap in production because you remove them there.

In Python, this is done by passing the "-O" flag to the interpreter.

Unfortunately, the community never got the memo:

- if you put it in a lib, nobody is going to use "-O" in prod, and they will think your lib is very slow

- if you use it in prod, some of your 3rd party deps may very well be using "assert" to make required production sanity checks. So using "-O" would suddenly break your system.


> I'd love to do this in Python, but unfortunately we can't.

> assert is supposed to be cheap in production because you remove them there.

well... Python is so slow - are you really going to notice the overhead of assert calls?

(Not being negative, I really like Python)


Of course.

Python is a great glue language, so you do most of the expensive calls outside of Python. However, the things you typically put in an assert are usually pure Python that you call repeatedly, putting you in the worst of both worlds.


I'd add one more tip that I think far more software developers should follow: Add unit-level fuzz testing throughout your projects. Fuzzy bois are like assertions on steroids.

With large projects you often get modules which have an API boundary, complex internals and clear rules for what correct / incorrect look like. For example, data structures or some complex algorithms. (A-star, or whatever).

Every time I have a system like this, I'm now in the habit of writing 3 pieces of code:

1. A function that checks the internal invariants are true. Eg, in a Vec, the allocated length should be >= the current length. In a sorted tree, if you iterate through the items, they're sorted. And children are always >= the internal nodes (or whatever the rules are for your tree). During development, I wrap my state mutators in check() calls. This means I know instantly if one of my mutating functions has broken something. (This is a godsend for debugging.)

2. A function which randomly exercises the code, in a loop. Eg, if you're writing a hash table, write a function which creates a hash table and randomly inserts and deletes items in a loop for a while. If you've implemented a search algorithm, generate random data and run searches on it. Most complex algorithms and data structures have simple ways to tell if the return value of a query is correct. So check everything. For example, a sorted tree should contain the same items in the same order as a sorted list. It's just faster. So if you're writing a sorted tree, have your randomizer also maintain a sorted list and then periodically check that the sorted list contains the same items in the same order as your tree. If you're writing A-star, check that an inefficient flood fill search returns the same result. Your randomizer should always be explicitly seeded so when it finds problems you can easily and deterministically reproduce them.

3. A test which calls the randomizer over and over again, and checks all the invariants are correct. When this can run overnight with optimizations enabled, your code is probably ok. There are a bunch of delicate performance balances to strike here - it's easy to spend too much CPU time checking your invariants. If you do that, you won't find rare bugs because your test won't run enough times. I often end up with something like this:

    loop (ideally on all cores) {
        generate random seed
        initialize a new Foo
        for i in 0..100 {
            randomly make foo more complicated
            (at first check invariants here)
        }
        (then later move invariants here)
    }
Every piece of a large program should be tested like this. And if you can, test your whole program like this too. (Doable for most libraries, databases, compilers, etc. This is much harder for graphics engines or UI code.)

I've been doing this for years and I can't remember a single time I set something like this up and didn't find bugs. I'm constantly humbled by how effective fuzzy bois are.

This sounds complex, but code like this will usually be much smaller and easier to maintain than a thorough unit testing suite.
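Concretely, here's a minimal self-contained version of that loop (toy insert_sorted as the code under test, a sort-everything Vec as the oracle; assumes the rand crate):

    use rand::{rngs::StdRng, Rng, SeedableRng};

    // Toy code under test: keep a Vec sorted via binary-search insertion.
    fn insert_sorted(v: &mut Vec<i64>, x: i64) {
        let pos = v.binary_search(&x).unwrap_or_else(|p| p);
        v.insert(pos, x);
    }

    // Invariant check: the vector really is sorted.
    fn check(v: &[i64]) {
        assert!(v.windows(2).all(|w| w[0] <= w[1]), "not sorted: {v:?}");
    }

    #[test]
    fn fuzz_insert_sorted() {
        for seed in 0..1000u64 {
            // Explicit seed: every failure is deterministically reproducible.
            let mut rng = StdRng::seed_from_u64(seed);
            let mut v = Vec::new();
            let mut oracle = Vec::new(); // dumb trusted model: sort after every insert
            for _ in 0..100 {
                let x: i64 = rng.gen_range(0..1000);
                insert_sorted(&mut v, x);
                oracle.push(x);
                oracle.sort();
                check(&v);
                assert_eq!(v, oracle);
            }
        }
    }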

Here's an example from a rope (complex string) library I maintain. The library lets you insert or delete characters in a string at arbitrary locations. The randomizer loop is here[1]. I make Rope and a String, then in a loop make random changes and then call check() to make sure the contents match and all the internal invariants hold.

[1] https://github.com/josephg/jumprope-rs/blob/ae2a3f3c2bc7fc1f...

When I first ran this test, it found a handful of bugs in my code. I also ran this same code on a few rust rope libraries in cargo. About half of them fail this test.


Regarding numbers 2 and 3, I believe you are describing "property-based testing"[0]. A Scala version of this is ScalaCheck and can be found here[1].

There appears to be at least one Rust library which claims to provide same, but I am not a Rust developer so cannot recommend any for fitness of purpose.

0 - https://hypothesis.works/articles/what-is-property-based-tes...

1 - https://github.com/typelevel/scalacheck/blob/main/doc/UserGu...
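For the curious: proptest is probably the most commonly cited Rust option. A minimal sketch of the "compare against a trusted dumb implementation" style described above, with a made-up function under test:

    use proptest::prelude::*;

    // Hypothetical function under test.
    fn my_sort(mut v: Vec<i32>) -> Vec<i32> {
        v.sort();
        v
    }

    proptest! {
        #[test]
        fn matches_reference_sort(input in prop::collection::vec(any::<i32>(), 0..100)) {
            let mut expected = input.clone();
            expected.sort();
            prop_assert_eq!(my_sort(input), expected);
        }
    }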


Strong agree!

For JavaScript, I suggest folks check out fast-check [0] and this introduction to property-based testing that uses fast-check [1].

This is broadly useful, but one specific place I've found it helpful was to check redux reducers against generated lists of actions to find unchecked edge cases and data assumptions.

[0] https://github.com/dubzzz/fast-check [1] https://medium.com/criteo-engineering/introduction-to-proper...


Agree with the above, will add that proper fuzzing is worth it quite often also.

If your ecosystem has tooling for structured fuzzing (or it's easy for you to fuzz from a raw series of bytes), fuzzers are relatively low effort to add and very powerful.


Sounds really similar to Property based testing [1]. (Rule-based testing)

[1] https://unzip.dev/0x009-property-based-testing/


Has anyone used Vale or have any thoughts on it?

(That's my main curiosity after reading the post and clicking around. It looks neat but I'm just hearing of it.)


https://vale.dev/ for anybody else who might not have heard of it


I'm happy to see more love for assertions.

I worked on a 15K loc Python project. We didn't have tests because it was hard to isolate.

We started to put some asserts here and there on parts of the code we weren't sure about.

After a few months, we simply put asserts _everywhere_, and it quickly became a weird but efficient way to cover our backs, and it helped make refactoring fun.

With assert, you can start to think about whether a particular branch is relevant or should be placed on the caller side.

We consider an AssertionError to be a developer error, and that makes a real difference in investigation time compared to the other errors that could be raised.


Determinism is possible if your system's inputs are entirely deterministic. If your system's inputs are deterministic and your initial state is deterministic, you don't even have to worry about state management or side effects (no need for functional programming). This is possible to achieve with certain fork-resistant blockchain systems. You can also achieve determinism across multiple blockchain systems with different consensus mechanisms so long as they are both fork-resistant.


> Dependency injection (the pattern, not the kind of framework)

> Encapsulation

> Polymorphism

These were just thrown in, but they all make the code less readable and less performant if they are not really needed.

In my experience while having separation of concerns is critical for software development, the only way to find the right interface for that separation is trial and error.

Iterating on the interface between layers is the most important part of software development.


Those first two principles help promote simpler and more stable APIs, which is a must for large projects with many teams. If you iterate on the interface between large enough layers, it ends up anywhere between "very costly migration" and "breaking change".

In a smaller project though, then yeah, it's better to just refactor away one's woes. Like you said, if it's not really needed, they can have their drawbacks.



