ldlework's comments

Unironically, yes. That's exactly how anarcho-capitalists see it. To them, anarchy just means "no coercion/state"; it doesn't mean a lack of structure, delegation, or coordination.


This is quite cool.


Thanks! :)


Created this short visual narrative with Dall-E in the style of H.R. Giger.

Human figures live their lives in a surreal dystopian world, until they find a hole leading to an even stranger one.

The video was made with Remotion (a React video framework), which was pretty cool. If anything, check that out.


All you've done is push the burden of constructing the dependency to whoever is calling and passing that dependency to __init__.

Consistently applied, all construction gets pushed to the entry-point of the program. Congratulations, you've just discovered the so-called "composition root".

Now that all construction is taking place at once, the order matters as you can't pass a dependency to its dependent until the dependency has been constructed. But it may have its own dependencies. So now there is a topological sorting problem.

Turns out computers are really good at topological sorting. So, someone made the computer do it, and we call that a dependency injection container. Tada.
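
A minimal sketch of that idea in Python (toy code, not any particular library): the container reads each constructor's type hints and resolves them recursively, so the topological ordering falls out of the recursion for free.

    import inspect

    class Container:
        """Toy DI container: builds objects by recursively resolving
        constructor type hints. Illustration only."""

        def __init__(self):
            self._instances = {}

        def resolve(self, cls):
            if cls in self._instances:
                return self._instances[cls]
            # Build every annotated constructor parameter before the class itself.
            params = inspect.signature(cls.__init__).parameters
            kwargs = {
                name: self.resolve(p.annotation)
                for name, p in params.items()
                if name != "self" and p.annotation is not inspect.Parameter.empty
            }
            self._instances[cls] = cls(**kwargs)
            return self._instances[cls]

    class Database: ...

    class Repository:
        def __init__(self, db: Database):
            self.db = db

    class Service:
        def __init__(self, repo: Repository):
            self.repo = repo

    service = Container().resolve(Service)  # builds Database, then Repository, then Service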


> you can't pass a dependency to its dependent until the dependency has been constructed. But it may have its own dependencies. So now there is a topological sorting problem.

I guess I just don't see the problem.

That solves itself naturally through normal programming. Some function takes A as an argument? Well, I obviously make that A first. A requires a B to set up? Well, obviously I make B first.

I don't need to "sort" anything, it just follows naturally from the types and constraints of the API.
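
In code (hypothetical A and B), that "natural ordering" is just the entry point constructing things in the only order that type-checks:

    class B: ...

    class A:
        def __init__(self, b: B):
            self.b = b

    def main() -> None:
        b = B()   # B has no dependencies, so it is built first
        a = A(b)  # A needs a B, so it comes second; the "sort" is just reading order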


Best part is you haven't introduced new constructs or DSLs or XML configuration. It's just plain old Python functions calling other functions.


Yes you're right, it works really really well.


Of course the order matters - but you know what? It's good that this is explicit in the source code and even checked by the compiler.

This way, I can go to the project's root and read, step by step, how the application is architected and built up, and which dependencies are core and which are not.

With dependency injection containers, everything is shuffled around and I'm now at the mercy of a library to tell me what I want to know.


If you have so many top-level components, or change your dependency graph so often, that the sorting becomes something you want to automate, you might want to reconsider your architecture.


> Consistently applied, all construction gets pushed to the entry-point of the program. Congratulations, you've just discovered the so-called "composition root".

I never knew this had an explicit name.

> Turns out computers are really good at topological sorting. So, someone made the computer do it, and we call that a dependency injection container. Tada.

What exactly is a "dependency injection container"? I've searched the internet for the definition, but I've only gotten more confused by all the PHP and C#. You mention topological sorting of dependencies - does that assume a hierarchical object model or can it be used with basically anything?

EDIT:

From what I've read, it appears that a Dependency Injection Container is just an object that:

1) for each dependency type, has a `GetDependency: Type -> Object` method

2) lazily creates dependencies as required by the dependents and by the dependencies' own dependencies.

Is this accurate?
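
Roughly, that two-part description could be sketched in Python like this (toy code with hypothetical names, not any real library's API):

    class DIContainer:
        """1) maps a type to a factory; 2) creates instances lazily
        and caches them once built. Illustration only."""

        def __init__(self):
            self._factories = {}  # type -> callable building an instance
            self._instances = {}  # type -> cached instance

        def register(self, abstract, factory):
            self._factories[abstract] = factory

        def get(self, abstract):
            if abstract not in self._instances:  # lazy: built on first request
                self._instances[abstract] = self._factories[abstract](self)
            return self._instances[abstract]

    class Config: ...

    class Client:
        def __init__(self, config: Config):
            self.config = config

    c = DIContainer()
    c.register(Config, lambda cont: Config())
    c.register(Client, lambda cont: Client(cont.get(Config)))
    client = c.get(Client)  # Config is only created here, on demand

Nothing in this toy version assumes a hierarchical object model; anything a factory can build can be registered.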


Beautiful article by Mark Seemann on composition root [1]. Another one on when to use DI container [2].

[1] https://blog.ploeh.dk/2011/07/28/CompositionRoot/

[2] https://blog.ploeh.dk/2012/11/06/WhentouseaDIContainer/


And usually configurable, e.g. "for the interface Reader, use the FileReader implementation".
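
For instance (hypothetical classes), the choice of which implementation satisfies Reader can live entirely in configuration:

    class Reader:  # the interface the application codes against
        def read(self) -> str: ...

    class FileReader(Reader):
        def __init__(self, path: str):
            self.path = path

        def read(self) -> str:
            with open(self.path) as f:
                return f.read()

    class StubReader(Reader):  # e.g. swapped in for tests
        def read(self) -> str:
            return "stub data"

    def make_reader(env: str) -> Reader:
        # Configuration decides the binding; callers only ever see Reader.
        return FileReader("input.txt") if env == "prod" else StubReader()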


All you are doing is pushing the problem to runtime.


To any single integration test, to be precise. So your build fails a few seconds later if something is wrong.
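
For example, a single smoke test that just builds the whole object graph (compose_app here is a hypothetical stand-in for whatever function acts as your composition root) turns a wiring mistake into a failing build:

    # test_wiring.py -- hypothetical smoke test
    from myapp.main import compose_app  # assumed entry point / composition root

    def test_object_graph_builds():
        # A missing or mis-ordered dependency raises here, failing the build.
        app = compose_app()
        assert app is not None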


It is at heart quite simple. But being simple doesn't mean it has few benefits. Off the top of my head, things that DI enables include:

* swapping out a class for an equivalent becomes a configuration concern.

* whether a type is a singleton or a transient becomes a concern separate from that class's implementation, and it can be changed in the app's startup config code (see the sketch below).

There can be drawbacks too, if done badly, but I don't think that the argument that "this doesn't do very much" is entirely relevant to the pros and cons.
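
A toy Python sketch of that second point (hypothetical names): whether Cache is shared or rebuilt each time is decided at registration, not inside Cache itself.

    class Cache: ...

    class Registry:
        """Toy illustration: lifetime is a registration choice, not a class property."""

        def __init__(self):
            self._singletons = {}
            self._factories = {}

        def add_singleton(self, cls):
            self._singletons[cls] = cls()  # one shared instance

        def add_transient(self, cls):
            self._factories[cls] = cls     # a fresh instance per get()

        def get(self, cls):
            if cls in self._singletons:
                return self._singletons[cls]
            return self._factories[cls]()

    registry = Registry()
    registry.add_singleton(Cache)  # change this one line to add_transient(Cache)
    assert registry.get(Cache) is registry.get(Cache)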


While some configurability is desirable, being explicit is even more so.

I wonder why Zope died out. It has everything, it's a completely sound design, it makes all components interchangeable, and it's very pretty when done properly.

It died out because it's verbose and gets hairy quickly in the real world where not everyone is on the same page, so it never ends up being used "properly".

Go for simpler and you have a higher chance of everybody getting the intent on their first reading of code.

It's going to be hard to get people to not do DI "properly" if you simply ask them to pass dependencies in (though I am sure they still could, e.g. by passing classes instead of instances).


DI containers don't stop circular dependencies. It's not a massive advantage.


Yes, but modern DI (like Dagger for Java) can detect cyclic dependencies at compile time and break the build.


Curiously, I've actually seen people use @Lazy in Spring projects to allow for circular dependencies, to deal with situations where the services or their dependencies aren't structured like a neat tree with leaves, but rather as an interdependent graph with cycles.

Honestly, in practice it worked and wasn't too bad to work with, which was interesting to behold. Everyone talks about how circular dependencies are bad for a variety of reasons (trying to print or process data and ending up with endless loops comes to mind), but there that system was, chugging along without a care in the world.


I know, but just thinking through what you're doing and building a program for right now, rather than building for a future that may never happen, is probably going to deliver more success. I like constructor-based dependency injection personally. It's simple, and I don't really see a stack of value in obsessing about the possibility of code reuse.


I can detect a cyclic reference before I even finish writing the code, what's the point of letting the compiler figure it out?


> I can detect a cyclic reference before I even finish writing the code, what's the point of letting the compiler figure it out?

The compiler generally has better attention to detail and the ability to deal with larger object graphs than the typical human.


Sure, but how are you going to write code that uses a class with a circular dependency?


Yes, they can and do. The DI container in .NET Core does this upfront and will throw an exception if it detects circular references.
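
The same kind of upfront check is easy to sketch in Python (a toy resolver, not the .NET implementation): track the chain currently being resolved and fail loudly when a type shows up twice on it.

    def check_cycles(graph, node, path=()):
        """graph maps a type name to the names it depends on.
        Raises if the dependency chain loops back on itself."""
        if node in path:
            raise RuntimeError("circular dependency: " + " -> ".join(path + (node,)))
        for dep in graph.get(node, ()):
            check_cycles(graph, dep, path + (node,))

    deps = {"OrderService": ["CustomerService"],
            "CustomerService": ["OrderService"]}  # a cycle
    check_cycles(deps, "OrderService")  # raises before anything is constructed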


When this was first being presented at the Hackathon where it was created, I joined a few seconds late due to some camera trouble. My first thoughts were "Wow, they really went above and beyond, so tightly choreographing their presentation to their slides." Then it slowly dawned on me what was happening. Funny stuff.



I'm still having this issue.


If the code is implementing against interfaces, why would late binding mess up your IDE experience at all?


My heart sinks whenever I need to spend ten minutes trying to work out which of the three different implementors is actually going to be called on a particular code path. For about a week in December, my work was solely spelunking to track down which implementations of an interface in C# were dead code, and so on.


Right. Often there are 2 types of interfaces in a project. The first are "natural" interfaces, that you have put some design into and are meant to be reusable. Things like Streams or Collections. When you write a function that uses one of these, you really are expecting to be able to use any implementation. If you want to go to the implementation of Stream.Read, obviously the IDE isn't going to be able to do it, it's abstract and there are any number of implementations.

You get the other kind of interface when you want to loosely-couple your code, and so you define interfaces for many classes. Often, there is only a single class that implements the interface in your project, though there may be mock implementations in your test code. Even though there is a single "real" implementation, the IDE can't/won't jump to that implementation in the same way it won't in the first case. This is frustrating though, because it would have worked if you hadn't extracted the interface for improved testability.


Is there any solution to this?


It seems like an IDE could know if there is a single implementation and could go to it, especially if the interface is not visible outside the project ("internal" in C#). In practice, the interface is in the same file as the primary implementation, so it's not as bad as it seems. More of an annoyance.


JetBrains' Rider, as well as their ReSharper for Visual Studio, has a navigate-to-implementation feature whose shortcut key I use all the time.

The fact that our solution has an interface for just about everything, for testing reasons, doesn't slow me down even in the slightest when I'm looking through the code.


I've seen people learning about dependency inversion from a book called "Clean Architecture", then proceeding to apply it to every bit of code they write, to make it "clean". It makes code difficult to trace by reading alone. Indirection may be cheap, but it adds up.


In my opinion, you probably don't have good reasons.


> I have a hard time seeing anything new here.

> Just .. [a] new perspective...

okie dokie


tomato-tomato.

