Hacker News: trommlp's comments

I don't know all the reasons individual developers would want a more complicated piece of software. Might be to show off, yes, or to learn a new technology, to the detriment of the developers who come after and the Ops people who will have to maintain it. Or they genuinely believe complicated is better.

But the one thing that ultimately adds a lot of unneeded complexity to basically every software project at my work is requirements. The kind of requirement that nobody thought of when the project kicked off. The kind of requirement that, a few months later, everyone realizes nobody actually needed, and that you possibly remove after a while. But in the meantime this feature has influenced your entire data model, or worse.

This is the point where your previously fun work turns into a slog for the duration of said project. Well at least we can still have fun with our hobby projects!


When I look back, I see my "overengineering everything" era as a phase, a lesson to be learnt by living it. I overengineered a couple of things massively because I didn't know what lay ahead, and that helped me, but then I learnt that it's harmful as the "default mode of programming".

As I came to understand how performance works and what maintainability and minimalism mean (as general concepts), the code I write became minimal and simpler.

Currently I'm working in a mode which could be called "overgrow and extend". I start with the simplest code which satisfies the design and my requirements, then refactor any part which cracks under the design so it satisfies the requirements without being hacky. This philosophy works like a charm for me. Coupled with my "needless comments", getting back into context and extending/maintaining the code is easier than ever.

I even started to finish and deploy my personal projects. It's that effective for me.

Now I'm learning live documentation, which lets me write the documentation for the things I'm working on as I go. When the work is finished, I also have workable documentation as a result.


You are of course right that developers have to learn to keep things simple and that this skill comes from overengineering stuff. Maybe I was overly negative in my original comment. As long as it's just a "phase", the long-term benefits of going through that phase should far outweigh the short-term costs to your org.

Good on you for having success with your personal projects!


I think you mention a very valid point that is worth reiterating: Even when you enter territory where HTMX gets cumbersome for some reason, you can still use custom JS or some other library to work around it.

I admit that I initially started to use HTMX to avoid JS, but I am now more comfortable than ever falling back to a few lines of JavaScript in the few cases where HTMX does not feel like a great fit for the problem at hand.

Another great side effect (that you also mentioned) is how much cleaner the project is structured now. But I also realize that this might not apply to big projects.


> I admit that I initially started to use HTMX to avoid JS, but I am now more comfortable than ever falling back to a few lines of JavaScript in the few cases where HTMX does not feel like a great fit for the problem at hand.

Many people don't realize that great engineering happens under constraints. When you're faced with an empty project and the full power of JavaScript, you have 5,000,000 ways to do something and the chance to make a series of wrong choices that back you into a corner is high. By contrast, when you're working within a system that constrains the available choices, the possible design paths are considerably fewer and so the system becomes more understandable and maintainable, which makes for a very straightforward, comfortable dev experience.


The same goes for great design. Of course, figuring out the constraints is the trick with design, since there aren't the same hardline guardrails that programming languages have; I suppose platforms like Figma attempt to create relatively rigid rails to follow.


"The absence of limitations is the enemy of art."


I wholeheartedly agree with your last sentence that it seems overengineered, but one has to assume that there are use cases that warrant such a complex auth scheme. I cannot speak to that, as I have never implemented systems at massive scale.

However, one thing that puts me off is that it is slowly becoming the default way of implementing authentication and authorisation at work, where our internal (web) services are used by (at best) tens of people on a given day. Besides, the "old" way of auth integrated much better into our existing system landscape, while the "new" way requires (in our case) a Keycloak server.

Again, all that would be fine if the use case warranted the complexity, but in my employer's case it does not.


A session cookie from app1.domain.com isn’t readable by app2.domain.com. So with plain cookie auth, you have to login to every app individually. Is there a simpler way around this than OIDC?
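As a side note on the scoping described here: a cookie set without a `Domain` attribute is host-only, while `Domain=domain.com` makes it visible to every subdomain. For apps that merely share a parent domain that is a cruder workaround, though it only gives you a shared cookie, not real SSO or delegated MFA. A sketch (domain names hypothetical):

```javascript
// Host-only cookie: sent back only to app1.domain.com.
const hostOnly = "session=abc123; Path=/; HttpOnly; Secure";

// Domain-scoped cookie: sent to domain.com and all its subdomains,
// so app2.domain.com receives it as well.
const shared = "session=abc123; Domain=domain.com; Path=/; HttpOnly; Secure";

// Tiny helper: read the Domain attribute out of a Set-Cookie value;
// null means the cookie is host-only.
function cookieDomain(setCookie) {
  const m = setCookie.match(/(?:^|;\s*)Domain=([^;]+)/i);
  return m ? m[1].trim() : null;
}
```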

And what if you want to gate your services to only those users who have an org-issued yubikey? With OIDC you can delegate the device check to a single host (your IdP) and if apps speak OIDC they’ll be protected. That means MFA SSO!

Is OIDC the wrong tool for this job?


I assume from what you are describing that OIDC is probably just the right tool for this job.

In my problem domain (think internal apps that serve many different purposes with little overlap and a diverse set of users), logging in individually to every single app simply is not an issue, albeit a little annoying.

That being said we do use a single authentication backend across all apps, just not one that is capable of OIDC and is thus a lot more limited in what can be achieved with it.


I agree to some extent on it being a social problem. But I am not sure that developers are to blame outright. What seems to be unpreparedness or incompetence might just be the result of desperately trying to produce something from incomplete information.

That is, sometimes the people who request the product are just as bad at communicating what the result should be, and it might even change halfway through the process. It's hardly surprising that the result is "horrid".

So the root of the "social problem" could be anywhere really.

That being said there are definitely cases (as in your last paragraph) where you cannot seem to agree with other developers because your understanding of what makes a developer is just so radically different and you keep talking past each other. I would be surprised if that was not the case in every other conceivable profession, though.


I really enjoy using a microframework and multiple small libraries to build a project instead of a huge, opinionated framework, for the reasons you mentioned above. Gluing different interfaces together within my own "business logic" can be arduous sometimes, but the flexibility usually pays off when some new unforeseen requirement comes around the corner.

That being said, I have settled on this simple setup when it comes to web applications. However, it's quite interesting how I seem to forget the insight I should have gained from all this framework back-and-forth when I enter a new problem space and inevitably choose the complex, batteries-included framework.

This is why I think there is a use-case for every kind of framework out there. The big ones take the load from your shoulders while you are still learning about the problem. And if you so choose and the circumstances allow it, you can replace it later with some framework that is simpler at its core, but possibly easier to maintain, because you are able to reason about every aspect that made it difficult at the beginning.


I think calling it the Hall of Fame/Shame is unjustified.

I certainly agree when we talk about hardware products, because having very short support periods for those just seems wasteful.

But it gets difficult when entering software, because how fast users would like the software to evolve probably differs from user to user. I can think of software that I want to be boring and to keep working forever the way it does now, without changing (except for security fixes). For that software, support periods are long and consist only of patch releases. And then there is software where I eagerly wait for new features to be added and would like the maintainers/the company to focus their attention on it if possible, taking resources from the maintenance side of things and moving them to the feature factory. For that software, support periods may be short, and major/minor release bumps happen frequently.

Having one of these products in the Hall of Fame while the other rots in the Hall of Shame does not seem to add any informational value.


In the case of commercial products it is especially frustrating if you need to log in to some customer portal to find such basic product information. And even if you create an account and log in, there is no guarantee you will find it. I can relate to a certain extent, as some companies might not even know when they plan to end support for a product, but even that is information worth sharing with their customers.


Fortigate is one such vendor we already cover; such information deserves to be free, not locked behind logins or paywalls.


Thank you for creating this.

We keep track of EOL dates in our internal wiki with links to various websites. Would be nice to reduce the scatter and just link to this site!

Would you accept Pull Requests that add server hardware EOS and EOL dates? They are especially hard to find and it would be great to have them in there as well!

Edit: I should have taken a look at the open issues first. It seems there is a tracking issue for network equipment, so I guess adding server hardware is certainly not off the table (https://github.com/endoflife-date/endoflife.date/issues/1387).


I agree. I use server-side rendering heavily in my projects at work and thus largely avoid custom JavaScript (edit: the reason is laziness, not that I outright dislike JavaScript). But if I want to enable some very basic user interactivity that does not alter "state" in any way, I will always go for a few lines of JS that, e.g., toggle the visibility of some items when a button is clicked.
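For concreteness, the sort of "few lines of JS" meant here might look like the following sketch (the element ids are made up for illustration):

```javascript
// Flip the `hidden` property on anything that has one,
// including real DOM elements.
function toggleHidden(el) {
  el.hidden = !el.hidden;
  return el.hidden;
}

// Browser wiring (assumes a button and a panel with these ids exist):
// document.getElementById("toggle-btn").addEventListener("click", () => {
//   toggleHidden(document.getElementById("details"));
// });
```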

So (as usual) the key is to find the balance that works for you instead of pursuing one way (only server-side rendering or only SPA Javascript framework).

For the time being this approach works very well for me.


I would like to mention maud in this context:

https://github.com/lambda-fairy/maud

It is refreshingly different from other Rust templating libraries. It uses a proc-macro that compiles your templates into Rust code at build time. I also happen to use it in conjunction with HTMX, and it works very well for me (at least in small projects). You will still have to learn a little bit of "special" syntax though, which you are looking to avoid if I understand you correctly.
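For anyone curious, a minimal sketch of what a maud template looks like (the function name and markup are hypothetical; assumes the `maud` crate as a dependency):

```rust
use maud::{html, Markup};

// A tiny component: the html! proc-macro turns this markup into
// Rust code that builds the HTML string, with (name) spliced in
// and HTML-escaped automatically.
fn greeting(name: &str) -> Markup {
    html! {
        p class="greeting" { "Hello, " (name) "!" }
    }
}

fn main() {
    // Should render: <p class="greeting">Hello, world!</p>
    println!("{}", greeting("world").into_string());
}
```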


While it looks nice, I'd prefer plain HTML rather than learning yet another DSL; the reason is that migrating between template engines is easier that way.

