Splitting software into multiple pieces and shipping those pieces (dependencies) independently is sometimes a good idea, but it has its limits. Maybe the practice should be reserved for dependencies that are very stable and used by many packages (libc, etc.). The hard-line policy Debian enforces here obviously is not working. Happy to see other distros solve this better. This might become a real problem for Debian in the future.
It’s easy to guess what happened: they developed an IPv4-only network stack and baked the limitations and constraints of IPv4 into it: private addresses are mandatory, public addresses are scarce, and NAT is required.
Then they got told to “do the needful” and make IPv6 happen, so they did… by weaving IPv6 support through the tangled briar patch of their codebase. They wove it through the NAT, the tiny public address blocks, and the mandatory private address spaces on virtual networks.
The result is IPv4 with a sticker on it with a hand-written label that says “IPv6”.
Data (state, context, whatever) is more important to structure than code. Make sure your programming language lets you structure data in the way that feels natural to you. The code comes later, and its main function is to massage the data. At least this is how I usually think about it, but everyone's mind works differently... :-)
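A minimal sketch of that data-first mindset, in Python (the `Order` type and field names are invented for illustration):

```python
from dataclasses import dataclass

# Structure the data first: the shape of the state is the real design decision.
@dataclass
class Order:
    item: str
    quantity: int
    unit_price: float

# The code comes later, and mostly just massages that data.
def total(orders: list[Order]) -> float:
    return sum(o.quantity * o.unit_price for o in orders)

orders = [Order("widget", 3, 2.50), Order("gadget", 1, 10.00)]
print(total(orders))  # 17.5
```

The point is only the ordering of concerns: get the record shapes right, and the functions over them stay short and obvious.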
I loved Directory Opus so much on the Amiga that I eventually wrote not one but three variants of it for Linux, albeit much simpler ones. The latest one I still use and maintain, and you're welcome to try it at https://github.com/suncore/dflynav
Here are a few observations (after long experience with, and involvement in research around, technical debt):
1) It is impossible to avoid gathering technical debt. The code will deteriorate in one way or another. You need to prepare to fix it since you can't avoid it.
2) It is so extremely difficult to make a correct "risk assessment" of technical debt that you should avoid doing so at all. You will just end up arguing all day about the merits of clean code/architecture vs. feature growth. Instead, keep two backlogs and reserve a set share of resources for each, e.g. 20% for reducing technical debt and 80% for product features and other development.
3) The "cost" of reducing technical debt is actually negative. That's the whole point of working on it: to increase development velocity.
Code doesn't deteriorate... Like we're talking about a banana growing spots or whatever...
Code gets plastered over with features and abstraction layers, but that's an active process that we are complicit in doing.
More germane for this discussion, technical debt was originally defined as a positive thing that you want to go get... It is the mismatch between our domain model and how our users think about the domain. It is positive because “enough with all of the planning and interviewing and requirements gathering and careful architecting, can I just build something and have my users criticize it and do four or five drafts until I get it right?” ... The debt is the drafts before you get it right; the interest is the constant translation between the language of your users and the language that the system is expressed in. You have a “contracts” table that contains things that are not contracts, because every purchase foreign-keys to a contract; but your users have since wanted to know what to do with purchases that are not associated with any contract: where do those go? And now every query that aggregates over contracts needs to exclude the non-contract contracts. That is part of the interest you are paying.
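To make that "interest" concrete, here is a hypothetical sketch in Python (the table layout and the `is_real_contract` flag are invented, not from any real schema):

```python
# Hypothetical "contracts" table: it also holds placeholder rows, because
# every purchase must foreign-key to *something* that looks like a contract.
contracts = [
    {"id": 1, "value": 1000, "is_real_contract": True},
    {"id": 2, "value": 500, "is_real_contract": True},
    {"id": 3, "value": 0, "is_real_contract": False},  # non-contract "contract"
]

# The interest payment: every aggregate over contracts must remember,
# forever, to exclude the non-contract contracts.
total_value = sum(c["value"] for c in contracts if c["is_real_contract"])
print(total_value)  # 1500
```

Every new query over this table pays the same small tax, which is exactly the "interest" the metaphor describes.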
But at some point it started to mean that we had to upgrade our dependencies, that's tech debt, or this kludge that I threw in, that's tech debt, or the fact that we never worked out a shared library between the front end and the back end and so any Python code in utils.py needs to be manually kept up to date with a file utils.js in a separate Git repo so that we are sure that we can do these things both on the front end and the back end. More broadly, anything that we no longer care for is tech debt. And that's where you really get this idea that it is deteriorating, that's more a measure of our own patience deteriorating, especially as we never seem to have time for the refactors we want.
Perversely, the cause of not being able to refactor often has nothing to do with tech debt and cannot be solved by treating it as such. It's multifaceted, so it emerges in different ways at different places, but usually it's an incentive problem. At some places the only incentives are for feature development. At other places every developer is working on a different thing, rather than the team prioritizing the one thing the business needs, delivering it, and letting developers use the slack time in this equation to improve the product however they see fit... In yet others it is because the team lead resists any suggestion that the framework being used is too heavyweight for the problem being solved, and keeping those very clean abstraction layers forces people to rewrite the same basic thing in five different unrelated places, because it has to bubble up from the data layer into the service layer into the business layer into the controller layer into the API into the consumer layer into the app state layer into the frontend model layer...
> Code doesn't deteriorate... Like we're talking about a banana growing spots or whatever...
It kind of does -- if you leave a codebase alone for a long time, and you come back to it later to upgrade a lot of dependencies (sometimes making a multi-version jump), it's a lot harder than it would have been to keep them updated as new versions were released.
It would have been a lot worse if that log4j CVE had been in a library with a lot more transitive dependencies, or one that made breaking changes between versions, like Jersey.
One advantage of monorepos with shared dependencies is that even the parts of the code that don't need to be touched very often will still get the latest dependency updates. If those codebases are in standalone repos, they just sit there, and then one day a simple attempt to upgrade a dependency turns into hours of work.
So, it's not technically "rotting," but it's definitely the case that leaving code to sit creates more technical debt later on -- even if that code was perfectly good the last time anyone worked on it.
> Another, more serious pitfall is the failure to consolidate. Although immature code may work fine and be completely acceptable to the customer, excess quantities will make a program unmasterable, leading to extreme specialization of programmers and finally an inflexible product. Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.
Technical debt is only "positive" in the sense that it may permit shipping something now or earlier, but it accumulates and becomes something that can slow the project team down to a crawl, or worse totally stall forward progress. In the end, unless your project can be thrown away, it's a negative.
By the same token if you ignore all of the good parts about having a credit card, “in the end, unless your credit card can be thrown away, credit card debt is a negative.” There's a valid perspective for this but it's not a great one.
It's not just that it permits something now or earlier, but in shipping something now or earlier it can also increase the quality of what you are shipping. Ward saw it as the center of his preferred style of coordinated software development, XP. Sort of the old Daoist idea that the wheel needs the hole at its center, the negative enables the positive.
Original sources by Ward Cunningham are not too hard to come by...
> I became interested in the way metaphors influence how we think, after reading George Lakoff and Mark Johnson's Metaphors We Live By. An important idea is that we reason by analogy with the metaphors that have entered our language.
> I coined the debt metaphor to explain the refactoring that we were doing on the WyCash product. This was an early product done in DigiTalk Smalltalk, and it was important to me that we accumulate the learnings we did about the application over time by modifying the program to look as if we had known what we were doing all along, and to look as if it had been easy to do in Smalltalk.
> The explanation I gave to my boss, and this was financial software, was a financial analogy I called the debt metaphor. And that said that, if we fail to make our program align with what we then understood to be the proper way to think, uh, about our financial objects—then we were going to continually stumble over that disagreement: and that would slow us down, which is like paying interest on a loan! With borrowed money, you can do something sooner than you might otherwise, but then until you pay back that money, you'll be paying interest.
> I, uh, I thought borrowing money was a good idea, I thought that rushing software out the door to get some experience with it was a good idea. But that of course you would eventually go back, and as you learn things about that software, you would repay that loan by refactoring the program to reflect your experience, as you acquired it.
> I think that there were plenty of cases where people would “rush software out the door,” and then learn things, but never put that learning back into the program. And that, by analogy, was borrowing money thinking that you never had to pay it back. Of course, if you do that, say with your credit card, eventually all your income goes to interest, and your purchasing power goes to zero. By the same token, if you develop a program for a long period of time by only adding features and never reorganizing it to reflect your understanding of those features, then eventually that program simply does not contain any understanding, and all efforts to work on it take longer and longer. In other words the interest is total—you'll make zero progress!
> A lot of bloggers at least have explained the debt metaphor and uh, confused it, I think, with the idea that you could write code poorly with the intention of doing a good job later... and thinking that that was the primary source of debt. I'm never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem, even if that understanding is partial.
> You know, if you want to be able to go into debt that way, by developing software that you don't completely understand, you're wise to make that software reflect your understanding as best you can: so that when it does come time to refactor it's clear what you were thinking when you wrote it, making it easier to refactor it into what your current thinking is now. In other words, the whole debt metaphor, or let's say the ability to pay back debt and make the debt metaphor work for your advantage, depends upon you writing code that is clean enough to be able to refactor as you come to understand your problem. I think that's a good methodology, it's at the heart of extreme programming... The debt metaphor is one of many explanations of why extreme programming works.
Your specific codebase may not, but the frameworks, OS, and hardware it's running on evolve and change over time. Software practices change over time as we find more effective ways (hopefully) to do things. If you expect to let code sit, come back to it in 10 years, and find everything hunky-dory, I've got a bridge to sell you.
> It is the mismatch between our domain model and how our users think about the domain.
...but the business domain continues to evolve. A new client comes in, another software system is onboarded, the organization is restructured, new regulations are enacted that require compliance. I think the idea that technical debt only exists because we didn't "perfectly match" the domain and our model at the start is a bit short-sighted.
> But at some point it started to mean that we had to upgrade our dependencies, that's tech debt, or this kludge that I threw in, that's tech debt
The broadest definition of technical debt that I've seen boils down to technical challenges that impede developer progress but are of no direct concern to the business. One of the reasons I like it is that it frames a purely technical issue (i.e. how the debt got there and how it needs to be resolved) in terms of something that the business wants (rapid development). While there are a multitude of challenges in resolving technical debt, a significant one is getting the business to allocate time for it. To do so, there needs to be some value for the business, not just for the developers.
>More broadly, anything that we no longer care for is tech debt. And that's where you really get this idea that it is deteriorating, that's more a measure of our own patience deteriorating, especially as we never seem to have time for the refactors we want.
If you're defining technical debt as "anything you don't like", then you don't have a real understanding of your technical debt. One of the key pieces of the broad definition above is that the debt "impedes developer progress". Old codebases have plenty of things that modern developers don't like, but they work and don't impact the application overall. The code is old, but we barely touch it, so its age or style is of little consequence. If you don't know which technical issues are causing significant problems for your team or your codebase, then there's little justification for "paying them off". And if you can sell the business on it anyway, you tend to end up with long refactoring projects that provide little value beyond inflating developer egos.
> The code will deteriorate in one way or another. You need to prepare to fix it since you can't avoid it.
Lehman's laws of software evolution 1 and 2:
"A system must be continually adapted or it becomes progressively less satisfactory. As a system evolves, its complexity increases unless work is done to maintain or reduce it."
This is great, especially 3) which implicitly makes a business case for reducing tech debt (useful for communicating with the rest of the org), and also helps steer us toward debt that actually matters.
Ok, I'm biased since I was a kid during the 80s, but two things strike me:
1. I still love the design language of the Walkman with brushed colored metal. Why can't we have that today?
2. It strikes me that a lot more thought and engineering went into these products than into today's. Most things today are just cheap all over, with pointless design that serves no function.
I was a bit too young, got into music around the time of mp3s, but the Walkman always looked insanely cool. It's one of those rare things that makes you sort of regret technological progress.
It's kind of funny that in one post, one wants "brushed colored metals" and then complains about the modern, pointless design without function.
I'm not sure what general line of products you're talking about, but devices like phones and tablets surely have orders of magnitude more "thought and engineering" in them. The fact that you don't like the tradeoffs they provide doesn't mean they weren't engineered and considered.
I know what line of product he's talking about. He's talking about phones that look like seashells and luxury soaps, suited for handbags and walks with a French bulldog.
And on the other hand, just because someone thought about a design doesn't make it good. (To the degree that one believes design can be good/bad and it's not all subjective)
My introduction into the Walkman was the WM-F5, one of the yellow sport models with rubber buttons. That was pretty iconic to me, and the other versions all looked pretty flimsy in comparison.
I absolutely loved Sony's "Sports" models. I guess it was just a little extra peace of mind for me; it's not like I really abused my stuff. But it was nice to know (or at least think!) that it would take accidental abuse.
I wish more companies would do something like that: produce a "Sport" model of their regular things... take the regular model, make it a little more rugged, charge a little more.
I think a truly rugged iPhone or flagship Android device would sell in a major way. Clearly it's something a lot of people want: look at all the people that wrap their phones in big fugly rugged cases.
I've read rumors over the years (from Daring Fireball, etc) that Apple has been toying around with a rugged "Explorer" (surely a working title + reference to the Rolex Explorer) variation of the Apple Watch for quite a few years. I hope they pull the trigger, I hope it sells, and I hope it starts a bit of a trend.
Apple would be in a unique position to pull something like this off with their MacBooks, too. Traditionally, "rugged" laptops have made major performance compromises. After all, performance = heat = airflow requirements, so you can't have a sealed laptop with top-of-the-line performance, unless you stick some giant cooling fins on it or something. At least traditionally.
But Apple's M1 chips show that performance doesn't have to be compromised for a fully sealed design.
VR goggles are too anti-social on the local scale. Have kids? A dog? A spouse? It's hard to see them accepting that you shield yourself off completely for long periods of time. You won't even see them coming when they start to grow tired of it... This is a fundamental flaw.
Then the metaverse... What is the use case? Why is it fun, or why is it useful? It might be cool for a while, but so are lots of other MMOs out there...
I can only speak for myself. I used to play Second Life when I was a teenager. A lot. It was less of a game, with no grinding and no objectives, and more of a chat room with a built-in 3D modeler and scripting language. Everyone I played with ended up as either a programmer or a 3D modeler of some stripe.
And I see the same sorta energy in people playing VR chat today.
What I think you miss as the use case here is people who aren't quite happy with their identity, or who are and still choose a different, more extreme representation. The difference between being yourself and being a fox, for example.
Plenty of people in the world today are lonely, and online connection is what fills the void. They don't have kids, dogs, or spouses. They're a sixteen year old in the middle of nowhere, or a socially anxious programmer living in a shoe box.
Been there and done that. Ended up as a game dev with zero social skills when it came to relationships. I was deeply unhappy about that situation.
It wasn't until years later I (somehow) managed to form a relationship with someone - who is now my wife and mother of my child - that I'm far, far happier. From my experience, VR and Second Life et al are not the answer. Getting out there and experiencing life reaps far more benefits.
> aren't quite happy with their identity,
I found the best thing to do is find like-minded people IRL.
I've been in parks and restaurants and seen whole families all on their phones. Last weekend I walked my dog around the park path, and was literally the _only_ person there who was not walking with their phone held in front of them.
The Black Mirror / Surrogates dystopia is already here, and most people can't get enough of it.
I think the main difference (benefit?) is that Nim has the convenience of a garbage collected language while retaining good and predictable performance (e.g. via ARC/ORC and thread-local GC). Rust has no GC so requires more from the programmer. Personally, I'd rather use a GC language to have less complexity in the code even though I have to spend some more time learning to use the GC correctly.
Nim's ARC is pretty different from the black-box GCs you have to poke with a stick, like Go's or the JVM's. It's pretty deterministic where the compiler inserts the additional calls, and you can use --expandArc or look at the generated C.
As a single anecdote: I've studied Rust before, though it's been a while, and I'm pretty familiar with C++ move semantics. I decided to read the Rust book again while also learning Nim simultaneously, and I had an entt wrapper and graphics rendering working in Nim, without GC, before I finished the borrow-checking chapter of the Rust book.
It really depends on what you're doing. If you're developing an OS or web browser or game engine, you're going to have to think about memory management very carefully in any event, and dealing with Rust's lifetimes is just formalizing the work you'd be doing anyway. But most applications aren't optimized that heavily, and as long as you don't have a memory leak or an algorithmic pessimization you don't have to worry about performance in most cases, and a garbage collector saves you a lot of work.
People do game engines in Nim. ReelValley was even a commercial effort. (This is only to supplement the parent comment, not contradict.)
Your mileage may vary, but several times I've tried the same single-threaded thing in Rust, C++, and Nim, and the Nim version came out faster (e.g., in [1] the final Nim version took 5.0 ms vs. 27 ms for C++ and 42 ms for Rust), without much effort put into any of them, and it is likely the most readable to someone brand new to the language. Writing generic data structures and algorithms in Nim is also a true breeze/pleasure. Anyway, they are all "fast and maybe-safe by default" and all respond similarly to optimization care. There is no obvious performance disadvantage (and compile times are much better in Nim, yielding scripting-language-like code iteration).
Rust's scheme is only one way to do automatic memory management, and it is regarded as being very restrictive. The scheme Nim implements (ORC), which is innovative as well, is more permissive and unobtrusive. I hope it becomes the default in the next year.