The software industry is going through the “disposable plastic” crisis (lwn.net)
360 points by ColinWright on Aug 20, 2020 | 285 comments



People blame developers but it's all driven by a product mentality that favors rapid iterations and technical debt to run business experiments on customers. Slow-and-steady, carefully written software isn't tolerated within many product orgs these days.


I really hate working in the industry in its current state. I am used to writing robust, clear, well-tested, and easy to maintain systems.

The job right now seems to be just stitching AWS services together, and spending the rest of your time debugging and putting out fires.


The lie we tell ourselves is that the quality of code matters to non-engineers. It seems it doesn't.

The most uncomfortable truth of our field is that there is no floor for how bad code can be, yet still make people billions of dollars.

Because that's the outcome everyone else is seeking - making money. They don't care how good the code is. They care about whether it's making money or not.


Well, you're not saying it right. When a non-programmer (and even some programmers) hear "bad code", they start thinking in terms of aesthetics. In fact, "bad code" is sometimes just called "ugly code", making the analogy worse. Fast, responsive code is probably "ugly" to some people, but beauty is in the eye of the beholder. Maintainable code is so verbose as to be considered "bad" to some on first glance, but proves its worth when the requirements change. Rather than talk about code quality, we should be harping on product quality, which can only be achieved through consistent software (dare I say?) engineering practices - which, if followed consistently and unapologetically, will accidentally lead to "good" code.


Haha the non-engineers at my place don’t care at all about code quality.

The non-engineers also decided to write their own internal tool, with no review and zero attention to quality. It’s been outputting wrong results that will cost them a few million at least.


Reminds me of the job where we'd have huge slowdowns in our massive SQL Cluster because some non-engineers in "R&D" "had" to run unoptimized, barely legible Access queries against it.

We eventually had to create a replica of Production specifically for such nonsense.

Well, then we moved a lot more of the business logic they "helped develop" onto a distributed platform, closer to "real time" for the application, because it was very dynamic in nature and couldn't be denormalized back into the DB while remaining performant. So I had to build a new system for some of their ad hoc queries and teach them how to use it.

What eventually galled me was realizing the "R&D" person I was "training" on the system always wore a different three-piece suit every day and, at least in fashion signals, was probably making an order of magnitude higher salary than me for slower, wronger answers than any system I worked on, but was probably great at throwing it (very slowly) into very fancy graphs in Excel and useless salesmanship.

There were a lot of reasons I already wasn't long for that job at that point, but that was a lasting lesson that quality and performance won't matter to certain styles of upper management.


I have totally accepted the fact that quality of code does not matter to stakeholders. That actually makes sense. No one got an infusion of VC money because they used Node and the LATEST libraries. They don't care what's under the hood.

What bothers me is that robustness and maintainability of code no longer seems to matter to engineers.

Believe me, the pressure right now to deliver working code no matter what is much less than, say, in the early 2000s, when web development was very new and few people knew what they were doing.


this so rings true it hurts

just imagine if automobiles, aerospace, or construction were run like that... (hint: they were in the early days)


You try to explain the importance of testing and good code quality to your higher-ups, only to be met with "but we need this to be out in X weeks!".

And then when a production bug is revealed, they then angrily ask "why are these errors occurring?"


I've seen the reverse happen at some companies.

Managers getting taken advantage of by allowing people to do anything in the name of 'technical debt' or 'code quality', resulting in more bugs, lots of needless refactoring, tons of layers of abstraction, and tests that actually do close to nothing.

The result is that they're then surprised it takes a long time to develop new features, while competitors do it faster and eat their lunch.

There needs to be some sort of balance: companies need to make money to allow for proper code to be developed, and the latter is also valuable as long as it makes sense financially as well.


> allowing people to do anything in the name of 'technical debt' or 'code quality', resulting in more bugs, lots of needless refactoring, tons of layers of abstraction, and tests that actually do close to nothing.

This heavily points to a dysfunctional development team, with the fault lying primarily on tech leads who make more and more of a mess the more they code.

> companies need to make money to allow for proper code to be developed, and the latter is also valuable as long as it makes sense financially as well.

This one is merely about putting fewer resources into development. I don't think it fixes the dysfunctional team above, it just avoids the issues. There is nothing that would make developers write useful tests, nothing that would change tech leads, nothing that would prevent unmaintainable code from appearing again.

Had the constantly refactoring team ended up with very good code at too high a price, then cutting resources would likely lead to a better trade-off. But that is not their case; their case is that they don't know how to write good code.


> This heavily points to a dysfunctional development team, with the fault lying primarily on tech leads who make more and more of a mess the more they code.

You are correct.

> I don't think it fixes the dysfunctional team above, it just avoids the issues.

Sometimes avoiding the issues is good. My point is that some developers cry wolf too often; focusing them on solving problems and shipping features may yield better results.


Or when you tell them that if you have to get it out in X weeks, you can't guarantee it against breaking bugs.

Later "WE CAN"T PUT THIS IN FRONT OF A CUSTOMER"


I have PTSD just reading this.


I have always wondered if there can be an AWS for employees.

X1 dev, X1 designer, e2 manager, etc. You can autoscale your company through jira integration.

You will save on health insurance, benefits, unproductive time, salaries, etc. Only AWS will need to cover that, but those employees can work and scale across hundreds of companies monthly.

You can have your own college and production line for people out of school so you can grade them.


"dev", "designer", "manager", etc. aren't interchangeable machines in the same way that compute instances are; they're human beings! Would you personally work for this "AWS for employees", with no job security, with no consistent working relationships with other people, with no knowledge of why you are doing the things you are tasked besides "the algorithm said I must do it, so I do it"? I know I wouldn't.

Fortunately, this is a horrible idea that has essentially zero chance of becoming reality, as things like stable work relationships and a sense of purpose boost productivity, making employees more productive per unit labor cost than a Mechanical Turk for making more Mechanical Turks.


Isn't it basically an outsourcing outfit?


Yeah, but "outsourcing outfit" won't get you billions in VC funding the way "AWS for employees" will.


This sounds like the story Manna [1] by Marshall Brain.

In the first half of the story, employees are increasingly micro-managed by a centralized software program which becomes more sophisticated and ubiquitous over time until humanity becomes fungible.

[1] https://marshallbrain.com/manna1.htm


> The job right now seems to be just stitching AWS services together, and spending the rest of your time debugging and putting out fires.

This is exactly what our engineers do, too! I am kept motivated by what we are able to do with what we've stitched together. But yes, knowing it is a bunch of plastic garbage underneath is... depressing, almost.


>I am used to writing robust, clear, well-tested, and easy to maintain systems

Maybe then go into the HPC/defence/air/space/mass-transport industries; they will love you. No BS, that's exactly what they want and need.


You’ve spelled out exactly how I feel.


Just means you gotta work higher up in the food chain. You need a new job. Build what other developers eat.


If you build what other developers eat, doesn't that mean working _lower_ in the food chain?


The plankton shall inherit the ocean.


Real food chains are cyclic, a bit like in https://en.wikipedia.org/wiki/On_Ilkla_Moor_Baht_%27at

Bad metaphorical usage of the term "food chain" is an example of "office speak". Personally, I don't much like it.


I think both are correct.


Great, another fad library to use. That's definitely what we need.


> Build what other developers eat.

I am interested in this topic, can you elaborate? Or any thoughts from anyone?


I think OP is referring to things like working on cloud infrastructure, systems software like web servers and reverse proxies, programming languages, or foundational libraries and frameworks—all of which are consumed by application developers.


Or have my job, switching periodically between all that stuff and the end application. Make sure the tools you make are actually useful for the problem they purport to solve!

We just refuse to use vendor-lock-in cloud crap as a matter of principle. Regular server processes (though in Haskell), regular machines (though on NixOS), PostgreSQL, no problem.


Must be nice. I'm surrounded by zombies that are either RDD or truly naive. Most of the ecosystem this org works with really doesn't make sense for "cloud." Most of it should be internal infrastructure for an entire host of reasons, aside from occasionally using EC2 instances for special-case bursts of computing demand. Instead there's an obsession with AWS with no solid rationale.


Sorry to hear that! It's a real shame when people contort themselves to give Bezos more money.


How about AWS Lightsail instead of EC2?


> robust, clear, well-tested, and easy to maintain systems

I wonder if that's how the implementors of software that actually lasted for decades would describe their work or motivation.

Does this describe Unix, for example? In particular the old AT&T version, not GNU (or even modern BSD) code.


Research from 1995 [1] indicates that robustness was not a strength of AT&T UNIX.

"The reliability of the basic utilities from GNU and Linux were noticeably better than those of the commercial systems."

[0] ftp://ftp.cs.wisc.edu/paradyn/technical_papers/fuzz.pdf

[1] ftp://ftp.cs.wisc.edu/paradyn/technical_papers/fuzz-revisited.pdf
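The methodology in those papers is disarmingly simple: pipe random garbage into a utility and count how often it crashes or hangs. A rough sketch of the idea (hypothetical, not the papers' actual harness; assumes a Unix-like system and uses cat as a stand-in for the utility under test):

    use std::io::Write;
    use std::process::{Command, Stdio};

    // Tiny xorshift PRNG so the sketch needs no external crates.
    fn next(state: &mut u64) -> u64 {
        *state ^= *state << 13;
        *state ^= *state >> 7;
        *state ^= *state << 17;
        *state
    }

    fn main() {
        let mut seed = 0x9E3779B97F4A7C15u64;
        for run in 0..100 {
            // Random bytes to feed the utility on stdin.
            let input: Vec<u8> = (0..4096).map(|_| (next(&mut seed) & 0xFF) as u8).collect();

            let mut child = Command::new("cat") // substitute the utility under test
                .stdin(Stdio::piped())
                .stdout(Stdio::null())
                .stderr(Stdio::null())
                .spawn()
                .expect("failed to start utility");

            child.stdin.take().unwrap().write_all(&input).ok();
            let status = child.wait().expect("failed to wait on utility");

            // On Unix, an exit code of None means the process was killed by a
            // signal (e.g. SIGSEGV) -- the kind of crash the fuzz papers counted.
            if status.code().is_none() {
                eprintln!("run {}: utility crashed", run);
            }
        }
    }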


Many of the design philosophies laid out by smart people like Eric Raymond are very well documented:

https://medium.com/programming-philosophy/eric-raymond-s-17-...

https://en.wikipedia.org/wiki/The_Art_of_Unix_Programming


> Does this describe Unix, for example? In particular the old AT&T version, not GNU

At least to me. Having all these composable commands, it's unimaginable how many iterations all that stuff went through. I also think that Go was created in that spirit, with robustness/conciseness being more important than anything. Although it seems the language and its philosophy are adapting to more common ways of doing things.
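What makes those commands composable is the filter contract: read from stdin, write to stdout, do one thing, and stay quiet otherwise. A toy illustration of that shape (hypothetical example, sketched in Rust):

    use std::io::{self, BufRead, Write};

    // A classic Unix-style filter: lines in, transformed lines out. Because it
    // only touches stdin/stdout, it slots into any pipeline, e.g.
    //   cat access.log | ./upcase | sort | uniq -c
    fn main() -> io::Result<()> {
        let stdin = io::stdin();
        let stdout = io::stdout();
        let mut out = stdout.lock();

        for line in stdin.lock().lines() {
            writeln!(out, "{}", line?.to_uppercase())?;
        }
        Ok(())
    }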


it's funny because this looks like a badly organized workplace

in a group of skilled people it can still happen

is it due to bad leadership?


> it's funny because this looks like a badly organized workplace

You probably should get out of the HN bubble more often; a badly organized workplace is the norm, not the exception.

(I hope it doesn't come as condescending since it's not my intention)


It seriously came across as condescending, since I'm quite removed from the HN bubble. At least you warned me, thanks.

My comment is not about the norm but about the nature of the work structure. See, I've mostly been working in average (non-IT) places. I can understand average salesmen and clerks having trouble organizing, dealing with computer issues and bad software, and cracking under managers, because they don't have Masters degrees in abstract algebra applied to relational databases. So I assumed that in an IT company the skills would influence the structure. I guess it boils down to low-level politics as usual.


Sorry, English isn't my mother tongue, so I didn't see a way to make it short and not condescending, hence the warning.

May I interest you in one of my earlier comments to show you how dysfunctional a company can be (to put things in perspective: my boss was a software engineer before becoming the boss, our company makes something like €25 million per year, and the certificate should cost a few hundred euros): https://news.ycombinator.com/item?id=24220464

Dysfunctional indeed.


yeah, it seems like one of the most common issues... greedy upper hierarchy causing pain everywhere below

#revolution


Governance, and the lack of a dictator with a sane "technical debt to income" ratio. Unless you're NASA, you're not going to get there with a smart committee. You need a single person that can make the right technical decision 80% of the time and the right business decision 120% of the time.


Definitely the case, people don't realize how much a weak link in the chain can break the whole structure, and just how much influence the command structure can have. I've worked places where all my peers were very smart and well-qualified and still ended up outputting what boils down to smart trash because a middle manager somewhere refused to allow them to structure things well. Conway's Law isn't dependent on the group's collective IQ.

A little sensitive on this due to a recent experience where I was hired and given "ownership" only to have my very modest proposals for changes shot down if not just entirely ignored. Some "ownership"!


The result is clear, obvious code anyone can step into. It's never optimized or tuned much for performance, because you know some green college grad might be the person who inherits it, and you'll be under some other project's deadline and unable to help them figure out anything optimal, so it's left simple and obvious.


I don't think many people spend much time on performance anymore in creation stages. Optimizing performance nowadays is purely reactive, I find.


Spend time working on any high compute application, and you'll spend cycles on it at every stage of the development and maintenance cycle.


Slow and steady is not useful if it's too little, too late. I deal with so many broken websites and products, and so much customer support, that it is amazing how slow organizations are at reacting to customer needs. Sometimes it takes six months before a simple bug fix finally shows up in the product. We are living in a fast-changing world, and one needs to weigh getting things out now against being perfect. The thing that matters is not eventually releasing perfect software, but rather whether you can iterate fast enough to get there eventually.


It takes 6 months to get those things out the door because teams are so busy dealing with fires and other technical debt that has built up from all the previous “do it dirty now and we’ll fix it later” features. Eventually you're buried in keeping all the previous mistakes afloat and just can't move forward on anything.


My experience matches this. I was there when all the “move fast and break things, we’ll fix it later” became bugs and bad situations that suck all the energy from adding anything. Several years later, nothing can get fixed, it’s all fighting the fires. And it was all done following fashion, there was no real need.


Meanwhile, you've got management demanding more features and getting upset when you say "we can add it, but we need time to fix the mistakes of the past"

"No, there's no time for that, MVP, get something out! We'll fix it later!"


Investors don't mind bugs as long as the money keeps rolling. Customer experience is a far second.


Along the same lines, project owners/managers don't mind bugs as long as their directors are happy. Directors don't mind bugs as long as Sr Directors are happy, and the chain goes on...

Truth is, nobody cares about the customer...

In the software industry, the cost of a bug is mostly nothing compared to other industries like construction. Products are not always life-critical. One person with a laptop can build you a product. In this context, engineering itself is secondary. Decisions are made on hype, not on reasoning.


Which would be acceptable if they accepted the blame when customers stop buying and the money stops rolling.

But no, obviously not. Obviously it's bad engineering then.


Higher churn leads to more features being crammed in which leads to more bugs and increased technical debt.


The self-imposed complexity of brittle AWS microservices/lambdas strung together is definitely the fault of developers. There are a handful of companies that need (and can afford!) architecture like that (maybe; Uber is backpedaling on it), but the vast majority of the world is shooting themselves in the foot, and that is on developers and resume-driven development. Nothing to do with faster iterations, although that is probably how it is sold to management. The developer should know better.


Resume-driven development is a consequence of poor hiring practices that discriminate between candidates on language/framework hype instead of hiring based on experience, know-how, and potential.

Our company is looking for an experienced dev with Node and AWS experience. What's that, you have 15 years of industry experience but it's all on-prem Java/PHP? Too bad. NEXT!


Spot on. I always used to get a certain company local to me flagged up by recruiters when I was job hunting. Every time they rejected my resume without interview because it did not have "Spring" on it.

Years later I worked at this place as a 3rd party contractor. What little bit of Spring I needed to know I picked up in the first few days. Their codebase and methodology was very poor and they were drowning because they only had one or two experienced people... but all the people they had had "Spring" on their resumes.


I think it's also a consequence of developers just getting bored. I don't care so much about my resume, but I'm always eager to try new things.


In the micromanaged world of agile, ticket velocity is more important than any other metric. At least everywhere I've worked.

Open source is the only place I regularly see high quality code. There the devs are allowed to love their code like pets not cattle.


I think this is an oversimplification. Software teams are forever trying to find a balance between getting value to customers and getting the code, tests, architecture, docs, etc. right.

If you have nothing to sell you’re done. If your legacy cruft becomes overwhelming you’re done, but likely via a slow death.

Good teams are always struggling with this balance and there is no way to be right, but hopefully you can find that balance and be good enough.


I very much agree.

I get that “done is better than perfect”, but it feels like we've swung so hard in that direction that we seem to be largely neglecting stepping back and building the right tool for the job, in all but the largest places that have the luxury of enough resources to assign a big enough team in the background to do it.


There seems to be this version of “done is better than perfect” thinking where people think that aiming for quality in any form will get in the way of getting things done. Where this gets especially pathological is when people assume that the best way to get things done is by explicitly decreasing quality, the underlying flawed assumption being that decreasing quality can be directly traded for a shorter schedule.


Go spend some time with PLC/automation engineers and you'll find the same thing.

Safety stuff? Done the right way with the proper attention.

Rest of the production line stuff? Hack on hacks in order to get things online ASAP. At least that's my impression from the outside view I've seen.


It really is the worst. Machines custom-built thirty years ago with documentation that reflects about 25% of the current configuration. Processes that rely on five lines of code and run great for six hours before completely breaking for no good reason. Windows XP boxes that've been on continuously for five years because the moment they shut down a boiler fails that requires a 24-hour restart process. I'm incredibly glad I got out of that industry.


A 24-hr restart plan sounds like a properly documented quality management process for a large boiler? Disconnecting the "Windows XP box" is removing the control system, correct? Not really the same thing, is it? There's no reason the XP Embedded hardware platform could not have been engineered with high availability in mind? It's not as if there are no commercial-off-the-shelf options here.

The company that chooses to let the boss's son, who built custom PCs in high school, design the critical process control systems is, I'd say, outside the scope of the argument here.


It was literally an e-machines desktop.


I'll just add that at my current place, with a clustered Windows 2008 back-end (with AD and DFS) and distributed flash-based, fan-less Windows 7 Embedded clients, we've had zero downtime in the 8 years or so since it was implemented. End nodes provide serial data from the DFS shares to the machines. I'll grant you that is certainly easier than redundant real-time control issues, a little bit. But it's all about planning for sure, on the software, hardware, and IT infrastructure end of it.


Haha. Been there. Seen that. I do feel for you though.


The uptime on that machine is longer than the time between new versions of windows. This is an example where software isn't like disposable plastic.


I had a discussion with one of our devs just yesterday about this. He had the exact opposite notion: that some devs are too focused on getting things perfect and don't care about the value they deliver to the business. I guess it's crucial to get the right balance.


It's both, and then some. Product folks surely do love to get new features out. Developers also love it, because writing new code is far more fun than improving reliability, performance, security, or operational load for old code. In 30+ years I've worked in all sorts of environments, but accumulating technical debt has been one of the few constants. Everyone tries to say the right things in meetings. Maybe tech-debt reduction even makes a brief cameo in early planning sessions, but good luck getting it into the final plan. Even if that happens, guaranteed that it will show up as a miss at the end of the cycle, because it's always the very last thing on everyone's list and everyone's list is always too long. A dozen companies, maybe twice that many projects, twice again as many architects/managers/whatever, always the same story.

As for "and then some" that's the part that's on senior management and HR, for setting up review/promotion structures that actively incentivize sloppy engineering. When all of the rewards are for New Shiny, some people will actively sabotage efforts to spend time any other way. Nobody wants to delay their new-feature effort to deal with new test failures or cleaned-up APIs. Even those who have enough of a conscience to to the right thing get kneecapped at every turn.


> Slow-and-steady, carefully written software isn't tolerated within many product orgs these days.

Rapid iteration is driven by customers not actually knowing what they need. There are places where "slow and steady" development would work, but very often you'd just find out, over a much longer period, that what the customer needs isn't what they've asked for. You'd end up throwing out a lot of the perfectly crafted artisanal code anyway, just as you would in the rapid dev version. The main difference is that the customer would have run out of money by then, and failed, and you'd need a new job.


I would have guessed the cause is related to HR recruiting and ladder climbing: the latest hotness being needed to get a higher-paying job elsewhere, as opposed to just going with what works.

Disregard for technical debt is certainly a force.


I have been asking my product managers for time so that I can update libraries and whatnot. Nope.

All these MBAs and non-MBAs and SAFe and Agile managers, they are all talk but don't allot time for the issues that lead to better code quality.


Don’t ask, just put it in the next estimate.


>>Slow-and-steady, carefully written software isn't tolerated within many product orgs these days.

The problem is that this works only for large clients and conservative problem domains (such as accounting) where things change at a glacial pace. Back in the day, only such clients could afford enterprise/custom software, so using a waterfall methodology to develop that software worked well enough. Took five years to deliver the final product? No problem because the set of requirements you collected five years ago and based your software on probably still apply at the time of customer sign-off.

Things have changed. Software has gotten much cheaper to develop, which means it has become way more affordable, and the types of businesses who need that kind of software are in heavy competition with each other, and their problems and needs change at breakneck speed. So your best bet is to have your own software development practices match that speed.


There are lots of laws and regulations around building to prevent e.g. bridges falling down.

So just remove them. Now buildings have gotten much cheaper to build, which means they have become way more affordable, and the types of businesses who need that kind of building are in heavy competition with each other and their problems and needs change at breakneck speed.

Failing that, keep the regulations, and apply more to software. The "needs" of business will change, and competition will move. "Slow-and-steady" needn't be glacial, and there's a lot between that and "breakneck speed".

Plus there is a difference between the time it takes to deliver and the frequency of requirement collection - your example implies you get requirements/sign-off exactly once before final delivery (5 years later). If we contrast "final" delivery with intermediate releases - why are the requirements not re-evaluated at each release? You can still have slow, steady development with lots of feedback - there just has to be an understanding that software delivered for the long term should not be interfered with by short-term concerns.


Are we talking about tools or enterprise software? Slow and steady, carefully written software never existed for the latter; it was always a mess of moving requirements. I'm not sure which is worse: the previous grandiose UML-specified software that failed to be implemented after many years and millions spent, or the current state of affairs, where at least software projects move on.


This is still squarely on the developers. Slow and steady might be your preference and the best way to make some software or parts of software but that’s not universally true. A lot of creative domains really benefit from fast iteration times to experiment.

It might not be your ideal environment to be making products but it is our job to square the circle otherwise we’re not going to have jobs. For a concrete example look at the way game engine development happens. By and large engine development is slow and steady. Whilst making a game is something that massively benefits from blazing iteration speeds. Having that split between technology and product means you can do both.

Of course we’re also dealing with messy reality so things are faster or slower than each team would like at times.


> it's all driven by a product mentality that favors rapid iterations and technical debt to run business experiments on customers.

Which is in turn driven by a venture capital industry that cares only about massive growth and nothing else.


It's common outside the startup world, too. I think it's more to do with a disinclination to touch anything "legacy". New and shiny good, otherwise bad.


“Legacy” both works and delivers value. It also doesn't get a manager the political visibility that making something new does. Optimizing the costs of existing business lines is essentially a cost-center career track in itself, which is why IT people get screwed a lot while those with 90%+ of the same skills in a software development shop will get a bump in pay.


Yup. We have an in-house MDM solution at work that is mostly composed of stored procedures, SSIS and small C# desktop apps. It could do with some improvements, but overall it works like a charm.

But we're rewriting it to use Snowflake and Kafka. With no real use case for the rewrite.


There are two kinds of legacy, though.

The proven kind you're talking about, and the tech debt.


Depends what you mean by legacy.

If you mean well-written C or Java code that is well tested and clean, then I'd prefer that to a modern JavaScript application without tests.

Legacy to me means an application that has turned into a mess with no tests. This can be a recent application or an old one.


Exactly. BLAS isn't legacy to me, but that web server you wrote in an Excel macro is.


Feedback from our HR department indicates that we struggle to retain younger employees in large part because they don't want to use legacy systems.


I think this is a huge problem, but as tempting as it is to blame the situation on "the youths" and their lack of respect for their elders or legacy code, a huge part of this is the fault of our industry and the businesses themselves.

The first thing is that I think people who don't want to work on legacy systems are making a pretty rational decision based on the state of the industry these days. We recruit people heavily based on whatever they've done most recently, and for a lot of people even a short stint on a legacy system can be a career-ending dead-end. In very competitive markets it might not be quite as bad, but for a large swath of the market (for instance in the Midwestern US, where I'm at), a single job with a legacy technology can make you nigh unemployable doing anything else. The problem is compounded by the fact that a lot of "recent legacy" technologies simply don't pay as well as either very legacy technologies (where you might be expected to have 20+ years of experience) or hotter new technologies.

Next, I think that we need to consider that, frankly, a lot of legacy code and technology sucks. New tools might have the same, or more, underlying fundamental issues, but in a lot of cases newer technology has a thicker layer of ergonomics on top, and the technologies themselves haven't ripened enough for the code smells to be noticeable, so from a day-to-day job enjoyment standpoint that somewhat lower paying job that is going to pidgeon hole you into the same boring BigCo work for the rest of your career is also going to come with a side dish of increased frustration. There's also a particular place in hell for certain types of legacy systems that really embraced the fads of their time- e.g. inheritance astronaut OOP hellscape Java applications from the late 90's, or rube-goldbergian metaprogramming funhouse ruby projects from the 2010's (I suspect the particular flavor of grotesque FP-look-alike cargo cult nonsense non-FP languages are trying to shoehorn onto poorly typed highly mutable environments is the fad that will give next decade's legacy system maintainers night sweats, but it's always hard to tell when you're in the middle of it).

Finally, and I think this is the biggest factor of all, a lot of companies with big legacy systems are digging their own graves because it's not just that they have legacy systems that need to be maintained, but they are so traumatized by the difficulty of keeping those systems up and running that they have calcified beyond any ability to experiment and try anything better, so it's not just that there are legacy systems to be maintained, all new work ends up getting shoehorned into the same legacy languages, legacy frameworks, and legacy business processes, because the fact that they never managed to migrate off their 1970's COBOL applications means that everything written today has to be designed to last 40 years using 20 year old technology. If engineers who were expected to maintain the legacy systems had some leeway to also experiment with newer technologies, and to modernize some parts of the system (with the acknowledgement that not all experiments would be a success), then it might be a lot easier to keep those engineers around.


To be clear, I'm not talking about retaining engineers. I mean that we struggle to retain end-users using legacy POS systems for example, and ERP systems with terminal or old WinForms interfaces.


Also, if it's maintaining a big legacy system, you have to ask: why did the previous dev leave?

The answer might be: because it got so difficult to maintain, it was no longer worth it. And if the guy with experience of the system found it no longer worth it, you, with little experience, are unlikely to find it worth it. Beware the system abandoned by the long-term maintainer and then maintained after that by a string of short-term new hires. At best, the corp keeps increasing pay for every new hire in the hope they stay.

> had some leeway to also experiment

The idea behind the original "long-term professional" was a little autonomy for this kind of thing. Sadly, most corps refuse to classify developers as anything more important than cost centres, commodity resources, etc. Increasingly, they are contract hires or easily replaced temps (Accenture etc.), plus there's stuff like Agile abstractions, non-silo-ing, and overstrict business time-management / task "sign-off"; and the trend of offering new hires better terms during negotiation rather than improved terms to existing hires to keep them happy, thereby encouraging no more than 2-3 years at any company to best improve your salary.


I think you hit the nail on the head explaining the dynamic at play here.


Yes, a hundred times yes.

And it's not just young people. In the past I worked at a university where most teachers were unable (or refused) to use the 30-year-old COBOL system. Those were people from their late twenties to their late sixties, and they all hated it.

We needed extra staff just to type grades in the computer, since it was a long and onerous process, and that extra staff was very hard to retain.

The solution was redoing it as a web app, and after that every teacher happily started doing it themselves.



Which recently had an interesting discussion on HN: https://news.ycombinator.com/item?id=21208947


It's common outside the software world too.

I work on aircraft design. We have people supporting legacy aircraft that were designed up to 50 years ago. I (and a lot of engineers) actively avoid those positions, and try to stick to programs that are less than 10-20 years old.

It's old technology, you don't get investment to improve anything, you don't get to work with modern tools (go back to working with scanned drawings instead of CAD data for example)…


Something like four years ago I wrote a command-line tool in Rust to interact with a service I wrote (not in Rust). Then I left that company for a new job.

At a recent happy hour, I learned that despite nobody there having much of a clue about Rust, they were able to clone the repo, build it, and run it based on a short wiki page I left behind. I thought that was really cool, and a great validation of the care the Rust project takes around dependency management and backwards compatibility.


One of the more attractive things about Rust for me is that I can easily write portable code. I distribute Windows binaries for a certain Rust project and they've kept working despite the fact I haven't tested on Windows in years.

I'm not sure I would even know how to build any of my C/C++ projects on Windows.


The most important layer in dependency management, I've found, is always in the systems code and what resources it consumes. If you get a handle on that, the rest is addressed with a well-targeted interface to the systems code, which the application uses uniformly. This interface can even take the form of a transpiled language.

And to the degree that Rust is effective, it lets a greater portion of the systems code be managed by a common interface. It remains as painful as most when it comes to binding external C code.

(I do think Zig is really promising in this regard by aiming to absorb C more comprehensively.)
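A minimal sketch of what such a "well-targeted interface to the systems code" can look like (hypothetical names; the point is only that application code depends on the narrow trait, never on the platform behind it):

    use std::{fs, io, path::PathBuf};

    // The narrow, uniform interface the application is allowed to depend on.
    // Everything platform- or vendor-specific hides behind it.
    trait BlobStore {
        fn read(&self, key: &str) -> io::Result<Vec<u8>>;
        fn write(&self, key: &str, data: &[u8]) -> io::Result<()>;
    }

    // One concrete implementation: plain local files. A cloud-backed or
    // in-memory implementation would slot in without touching callers.
    struct LocalStore {
        root: PathBuf,
    }

    impl BlobStore for LocalStore {
        fn read(&self, key: &str) -> io::Result<Vec<u8>> {
            fs::read(self.root.join(key))
        }
        fn write(&self, key: &str, data: &[u8]) -> io::Result<()> {
            fs::write(self.root.join(key), data)
        }
    }

    // Application code is written against the trait only.
    fn copy_blob(store: &dyn BlobStore, from: &str, to: &str) -> io::Result<()> {
        let data = store.read(from)?;
        store.write(to, &data)
    }

    fn main() -> io::Result<()> {
        let store = LocalStore { root: PathBuf::from(".") };
        store.write("example.txt", b"hello")?;
        copy_blob(&store, "example.txt", "example-copy.txt")
    }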


> they've kept working despite the fact I haven't tested on Windows in years.

How do you know? :)

Cross-compilation, avoiding the need to build on many different systems, is great - but I’d still want to test on all those systems.


As a Windows user, I see the other side of this, and it is usually true. It is easy to run in CI, maybe that’s part of it.

I’m not sure exactly how it’s the case, but I’ve been thinking about it a lot lately; I had a convo on HN last week about this...


It’s not hard to cross-compile for Windows with MinGW. As long as you’re not using something like GTK.


It's not just building, it's also stuff like path handling (think of the wrapper everyone writes around _wfopen), argv expansion, etc.

Also, the fact that I could compile it on Linux doesn't help me document how people on Windows are supposed to compile it.
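For what it's worth, this is a lot of what the Rust standard library buys you on Windows: paths and arguments come through as OS strings rather than assumed-UTF-8 byte strings, and std::fs calls the wide-character Win32 APIs internally, so the usual hand-rolled _wfopen wrapper isn't needed. A rough sketch (hypothetical file name):

    use std::{env, fs, path::PathBuf};

    fn main() -> std::io::Result<()> {
        // Arguments arrive as OS strings, so a name like "données.txt" survives
        // on Windows (where the OS hands the program UTF-16) and on Unix alike.
        let path: PathBuf = env::args_os()
            .nth(1)
            .map(PathBuf::from)
            .unwrap_or_else(|| PathBuf::from("données.txt")); // hypothetical default

        // std::fs converts the path to the platform's native form internally
        // (the wide "W" calls on Windows), so there's no _wfopen wrapper to write.
        let contents = fs::read(&path)?;
        println!("{} bytes read from {}", contents.len(), path.display());
        Ok(())
    }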


I find it interesting, regardless of language, that this was seen as out of the ordinary for your company in this day and age. Tells me a lot.


Oh yeah? What knowledge have you walked away from my context-free anecdote newly in possession of?


That it's out of the ordinary for someone to be able to just pick up a repo/service/project at your company and just go with maybe only a few sentences in a wiki to bootstrap them.

I'm a big fan of git clone, make setup, make style workflow where there is no other work. That this was a wow factor for your coworkers, again, tells me a lot.


Well, this actually seems to be pretty common at smaller companies (that they don't prioritize portability and out-of-the-box usability for internal tools and services), and this wasn't even nominally a software company. I did a lot of work there to try and improve these problems, mostly having nothing to do with Rust, but it was a thankless job.

My observation working at (much) larger, software-oriented companies is that things are better mostly because they have a lot of people working on tools like build systems and commit-blocking linters. Individual engineers on large teams, when incentivized toward pure velocity, still seem to ignore code quality and stewardship to the greatest degree they can get away with.

edit: These are meant as generalizations. Obviously some of us care, despite the industry's best attempts to beat it out of us.


You know CMake exists because "make && make install" by itself isn't usually that easy; anything beyond a small CLI has a lot of dependencies and breaks with minor version changes?


I do know that. I don't know why you have to configure anything. For configurable projects we make projects that contain the specific configs and wrap the common CMake. For instance, "windows phone" wrapping the base app.

This also makes me ponder how you can do CI (builds) if you have to manually configure things. Or do you bake all that into the build config?


That's funny, I also have a Rust CLI I left behind at my previous company which interacts with a service not in Rust. I just checked and it seems like it's still going.


Sounds like both of us found the same excuse to write some Rust at work. ;)


I started a new project in Go this year and I have the same hope for the future. So far Go has been pretty good in that regard.


This article does not inspire confidence in the matter (in particular the part covering the breaking change introduced in the time package back when one could not pin a package version), but one would expect things to improve with time: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...


Why do you have that hope? Did you copy your dependencies (I assume you have) into a vendor directory you include in your project? If not, what happens if someone decides to remove the code of your dependency from e.g. GitHub?


It should still be on GOPROXY, but yeah, if (very) long-term usability is important and the project isn't being maintained then vendoring is better.


I am reminded of the profession "programmer–archaeologist" in Vinge's A Deepness in the Sky. In roughly 2010 I reminded my grand-boss that a great deal of the important business processes we had consisted of largely uncommented C and various mishmashes of shell scripts.

It's true, there's something about all of the frameworks and the fads and such which causes so many projects to need, well, continual attendance to the dependencies. One proposed platform at a previous job bothered me for reasons I could not explain, so I mapped out the major dependent technologies, pointed out that we had nobody who had even a passing familiarity with them, and also pointed out how quickly some of them were moving.

There's something to be said for keeping your tower of dependencies short. It's a tradeoff, to be sure.


> There's something to be said for keeping your tower of dependencies short.

Absolutely, but you still need to present an interface. Essentially, which interface you present (REST, command line, language-specific library, VM/container, etc.) has in most organizations become a political question, to which implementation details are largely held hostage.


I absolutely love the vision of software technology in the Fire/Deepness books. The "zones of thought" concept dovetails nicely and lets him expand on those ideas, too. Highly recommended reading (for that and a bunch more reasons).


I have put off reading this book for so long. Now I have the reason to pick it up!


Those books are so awesome, the zones of thought really stuck with me, I use that metaphor almost weekly when building mental models. Also the idea of a code archeologist really got me to dig into the book Working Effectively With Legacy Code, and now I'm actually pretty good at it and really enjoy it.


Perhaps more like historical anthropologist, if we're honest.


The book is from 1999, but I kinda feel like we're already there - turns out that mere decades are quite sufficient.

"Pham Nuwen spent years learning to program/explore. Programming went back to the beginning of time. It was a little like the midden out back of his father’s castle. Where the creek had worn that away, ten meters down, there were the crumpled hulks of machines—flying machines, the peasants said — from the great days of Canberra’s original colonial era. But the castle midden was clean and fresh compared to what lay within the Reprise’s local net. There were programs here that had been written five thousand years ago, before Humankind ever left Earth. The wonder of it — the horror of it, Sura said — was that unlike the useless wrecks of Canberra’s past, these programs still worked! And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system.

Take the Traders’ method of timekeeping. The frame corrections were incredibly complex — and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely… the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.

So behind all the top-level interfaces was layer under layer of support. Some of that software had been designed for wildly different situations. Every so often, the inconsistencies caused fatal accidents. Despite the romance of spaceflight, the most common accidents were simply caused by ancient, misused programs finally getting their revenge.

“We should rewrite it all,” said Pham.

“It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.

“It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what — even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”

Sura gave up on her debugging for the moment. “The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy ...”"


I've found that the further back in time you go, the harder it is to run programs without encountering odd bugs, until you hit a certain inflection point where the systems are so old that they're relatively simple to emulate, and for popular systems we often have high-quality emulators that are capable of running the software reasonably well. For example, try running Diablo 2 (released in 2000!) on modern Windows, or modern macOS: it's not straightforward or bug-free, in my experience. It's reasonably good on Wine, ironically — because the Wine developers have built a really good emulation layer.

I don't think it's (application) developer fads; how would that even make sense? Either the OS broke userland, or it didn't; if it didn't, then the same binary should run just as well today as it did then. If it did break userland, that's not the fault of application or framework developers.


I agree to a point, but there are harsh realities to this.

To name just a few off the top of my head, where there is no choice in whether the userland changes or not:

- Cipher suites and crypto protocols changing often end up "cutting off" older applications because they used an embedded SSL library that can't do TLS 1.2. This is required because vulnerabilities are discovered in older protocols and they must be turned off.

- Changes in the CPU instruction sets. For example, it is not possible to run 16-bit Windows binaries in a 64-bit host OS, but there are no 32-bit supported versions of Windows. Even if the majority of the code is 32-bit, a tiny 16-bit component can be a showstopper. I had to support a large (20K concurrent user!) environment that ran a 16-bit DOS application. On Windows 2008 32-bit. In the year 2015. Really.

- I've seen apps that just can't handle high performance CPUs or multiple hardware cores, simply because they were written in an 100MHz single-core era. I've seen all sorts of timing issues, deadlocks, etc. For example, one particular release of World of Warcraft simply couldn't patch the game on 4-core CPUs, because one task would race ahead, time out, and then deadlock the downloader utility. That persisted for about a year, because Blizzard thought it was "okay" to support only 99% of their userbase that had 2-core processors that were common at the time.

- If your app has any external dependencies of any kind, its days are numbered. License servers. Update checkers. Content providers. Cloud services of any kind. Whatever. It's going to be turned off eventually, and the app will stop dead.


> there are no 32-bit supported versions of Windows

Windows 10 has a supported 32 bit version

EDIT: It support 16 bit apps too: https://www.groovypost.com/howto/enable-16-bit-application-s...


But there's no server version with 32-bit support, and this application required server components not in the desktop edition.


Lack of 16-bit DOS support in Win64 is a choice itself. It's not technically impossible, and there are third-party implementations of a 64-bit NTVDM (e.g. http://www.columbia.edu/~em36/ntvdmx64.html).


> Either the OS broke userland, or it didn't; if it didn't, then the same binary should run just as well today as it did then.

I think you're oversimplifying things here. There are many reasons why programs ran on the old OS but no longer work on the new one that are not "new OS broke userland". Some of these boil down to inaccurate or short-term assumptions by the developer. Not exactly userland, but this story [1] about why old Windows versions crash on fast processors is an especially entertaining read.

The sibling comment by jiggawatts provides good examples.

[1] http://www.os2museum.com/wp/those-win9x-crashes-on-fast-mach...


One of my favorite quotes from The Prestige (2006):

Alfred Borden : I had a similar trick in my act, and, uh... I used a double.

Gerald Root: Mm, I see, very good.

Borden: Well, it was, and then it went bad. What I didn't count on was that, when I incorporated this bloke into my act, he had complete power over me.

Root : Complete power, you say?

------------

I think about this quote often when I think of dependencies.


As an aside, I have seen magicians arguing that the teleportation effect presented in The Prestige would be terrible to do on stage, because there is only one logical explanation (you used a double) and thus no mystery.

Paradoxically, the fact that it is the only explanation is the thing that makes it a great effect in The Prestige...


There are old games that don't need emulation to run fine on modern OSes. Age of Wonders (1999) would be one example.

For old software that's not games, it's actually more common than not, because the OS really didn't break the userland contract all that much - what it broke is various undocumented assumptions that software was relying on. Games tended to be the kind of software that did it the most, usually in the name of performance (and sometimes also DRM).

Although if you unwind to before Windows XP, you also have to consider the Win9x/NT divide. Most games of that era were written for Win9x, and often relied on the liberties offered by the very lax process and memory management model. The transition from 9x to NT - which for most consumers happened via XP - was definitely a big userland breaking change for anything that didn't care about NT compatibility before.

(That's why the sweet spot for back-compat is circa 2005 - XP was already well-established, so all new software was written with it in mind - but technologies used were still of the older variety that doesn't drastically change every two years.)


Problems linked to this:

- Developers wanting the last shinny programming language instead of mastering one.

- Developers wanting the last shinny framework instead of sticking to one and learning it inside out.

- Developers wanting to start projects from scratch instead of learning to refactor. The new code is full of tech debt half a year later, and some developers already want to move on to some new code, because if you don't know how to refactor and evolve architecture, your new shinny product only stays clean for a few months.

- Developers that care more, in the hiring process, about whether you know the latest version of the framework/platform/library than about whether you understand algorithms, data structures, and other basic stuff. They think that functional programming is something new that started 2 years ago, because it is the first time they heard about it.

In my experience, business pushing for new features is only half the problem; approaching development as fashion instead of engineering is the other half.


Very much agree (friendly tip it's 'shiny' with just one 'n' btw)


   > Developers wanting the last shinny programming
   > language instead of mastering one.
Well, how about "developer wanting it all to be Java(Type)Script"?


The rich irony is that the problem is probably due to our reluctance to throw away old code and rewrite better software with improved domain understanding. But that requires thoughtful and disciplined stewardship. Instead we recklessly patch leaky abstractions on top of each other because we're too lazy to work from scratch and are focused on “shipping” instead (cue: Titanic).

We are so afraid of reinventing the wheel and actually having to understand something that we will happily refactor out a car wheel and use it for a bicycle (hey, “code reuse”!) and play the charade of “best practices”.

Software is one context where we don't have to worry about “disposing of” waste; we ought to make much better use of that.


Actually I feel it is the opposite of that. Existing code can be battle tested and reliable, with lots of unforeseen bugs encountered and fixed.

But those bugfixes made the old code difficult to work with, and rather than take the time to understand it, new code is developed instead with a whole new set of bugs (many having been solved previously). Working with old code is unexciting and unsexy anyway, and it is unlikely to advance your career. And so the cycle continues.


The contrast between the two perspectives depends on the relative emphasis between human understanding of the domain knowledge and accumulated domain knowledge in pre-existing code.

Biased by the domains I have worked in, I feel that we undervalue the human ability to figure things out and create good solutions, and overvalue code that is available for “free” even though it addresses a problem only tangentially related to what one is doing. My comment pertains mostly to such situations. In situations where the opposite is true (the domain is mostly static, the old code is a good fit, and we don't trust human intelligence to figure things out and solve problems), “stable” code is certainly valuable.


I agree with both sides to a degree. The tendency to use monster truck wheels on a shopping cart or vice-versa (because reinventing the wheel is considered evil) makes software a mess, but so does throwing out years of debugging subtle/rare conditions.

Code reuse however also tends to add tons of bloat and potentially bugs caused by things that aren't even relevant to the current usage. Not to mention all the added complexity of having to add another layer over to translate the current terms into the terms the re-used code understands and then override what it does and translate the output.

There are some cases where it would be preferable to preserve the embedded domain knowledge, but in many cases the cost of 'reinventing the wheel' would be less than adapting an unfit wheel.


That's fair. I come from a scientific computing background, where old code is sometimes treasured, even though there is increasing pressure to "modernize".

But the older code often has several decades worth of testing across many use cases, which is hard to beat.


"You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

When Joe Armstrong said this he was talking about object-oriented programming, but it applies to dependency hell as well.


Pretty much any Electron app you download is probably tied to a backend and will fail if their servers go offline. It is kinda sad. Electron apps are usable only as long as the company really does keep iterating forever.

Is the real problem that we're overemphasizing UI and React and UI frameworks so much that we're now just really focused on building really pretty apps that have little or no long term usefulness? At this point it feels like full Companies are just Apps and those companies make something that fills exactly one need, like renting a broom from your neighbor. And it's really pretty and they had the best lawyers in the world thinking about all the potential legal nightmares of neighbors renting brooms that it really is a great app, should you ever need to rent your neighbor's broom. Or you can walk over to your neighbor and borrow it.


Wait, what??

Electron apps bundle their own browser runtime and load the app locally. (That's part of the reason they're such resource hogs.) In principle at least they work offline. GitHub's desktop app, for example, works fine on a disconnected machine. You obviously cannot push or pull, but you can do local git operations with the GUI.


I think GP was complaining about "most apps that are built with Electron", not Electron itself.

The complaint applies equally well to most mobile apps today.


Electron apps contain an entire browser. How anyone ever thought this was a good idea blows my mind.


I actually think it would be a great idea iff they would use a webview, which every OS provides nowadays. Apparently I'm not the first person who's thought of this.

https://blog.stevensanderson.com/2019/11/01/exploring-lighte...
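
(Just to illustrate the idea, and not what the linked post uses: in Python, the third-party pywebview package wraps the OS-provided webview, so a rough sketch of the approach might look like this.)

    # Rough sketch: reuse the OS webview instead of shipping a whole browser.
    # Assumes the third-party "pywebview" package is installed (pip install pywebview).
    import webview

    webview.create_window("My app", "https://example.com")  # hypothetical app URL
    webview.start()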


That strategy of "web applications on the desktop" was actually pioneered over 20 years ago by Microsoft:

https://en.wikipedia.org/wiki/HTML_Application

Even in the Windows world, they were never all that popular.


HTAs were not cross-platform; Electron is. Furthermore HTML5 and JS are much more capable now than they were prior to Windows Vista.

That's why Electron is popular and HTA wasn't. But Electron is still the worst possible example of the resource inefficiency of static linking.


Great article. I am not keen on relying on Edge for Windows, but that solution is a hell of a lot better than Electron.


From a size standpoint, an Electron app can be more precisely described as a browser that contains exactly one web app, which it always runs.


How closely can we link the rise of SaaS (and corresponding disposable frameworks) to an effort to find a stable revenue model in a world which had problematic levels of software piracy? I remember when I was a teenager, you might be looked down upon if you obtained your copy of Windows XP legally.

There are a lot of posts like this that bemoan the current state of [insert social ill] but don't care to explore what structural forces contributed to this state.


Idk, it’s gonna be all anecdotes but, when I was a teenager I pirated software heavily too, but then when I finished high school I paid for an academic license for Adobe Creative Suite 4 Master Collection for my own personal use for example, as well as a few other pieces of software prior to that. And mind you, there was no cloud stuff that forced me to buy it, I just chose to. And I’ve bought a bunch of other software too since then. As well as many games both before and after.

I think what made me stop pirating software altogether, and instead either pay for it or use FOSS alternatives, was in part the realization that it would be a bit silly to want to get into software development and get paid for my own software while continuing to pirate other people's. So that is the main reason I stopped pirating software, I think. That and the fact that I learned about a lot of FOSS when I started at the university.


Actually a very salient point.

Still, I don't like the reality because it obscures the bad parts of where we are.

I've seen a slew of software licensing models in my travels. I hate where we have wound up.

Some of my more-appreciated ones:

- Not-Crappy-Hardware-Lock:
  - When I was in the RF/HFC drafting industry, a very entrenched program was used for most signal-level calculations. That company's solution involved a parallel or USB device that took a vendor-provided Dallas one-wire key. You paid X$ for the software and got updates up to X date with that key. They did have a way to 'upgrade', but I don't remember what that involved as we never did it.
  - The system seemed fair. I'm guessing it was a fair PITA to hack (remember, in this industry you have access to a decent number of people with good signal analysis equipment), but it also had a fair cost.
  - It was nice that "who can use it" was just a matter of tossing a key around.

- Usage + Audit:
  - Same industry. Market forces led us to also have a product from a vendor that competed with the biggest CAD vendor out there. This company essentially had a multi-stage setup: the systems running the software would check in with our locally installed license server running on a box, and that license server would periodically check in with the vendor's home-base server (uploading our usage data and downloading some response, I'd assume). If there were any issues with our usage, the vendor would call us and ask what was up.
  - If your machine was not going to be able to check in to the license server (i.e. field use; this was before mobile hotspots were cheap, and not too many corps had proper VPN setups), you could 'check out' a license and generate a time-expiring key.
  - They were WAY WAY more than fair. Our usage was balanced out based on some algorithm on their end. I don't know quite how it worked, but for example: if you had 3 licenses, didn't use any on Monday or Tuesday, used 2 on Wednesday, but then had 4 people simultaneously using the software on Thursday/Friday, they didn't care.
  - When somebody who may have been me left the app up for a month with a checked-out license on a laptop, while we were in crunch mode and others were using the application, we eventually did get a phone call. But even during that conversation they discussed the amount of logged time, and it didn't result in any penalties or fees.
  - We got updates as long as we paid a fairly low maintenance fee. If we stopped paying, we could only use up to a certain version.
  - Seriously, maintainers of the DGN format, you guys were amazing about licensing and just great to deal with.

- Per-seat licenses:
  - Arguably the most fair; you know what you're getting.
  - Can be done either with a central registration server or a reporting server.
  - Possibly more balanced toward the vendor than the above, and definitely a lower maintenance burden.

- JetBrains-style licensing:
  - Similar to Usage + Audit, but only the usage part. Works well; I dig it.
  - But seriously, what is up with the awkward 2-factor setup?

- SaaS:
  - How would you like to be charged today?
  - Vendor lock-in.
  - Change friction.
  - You don't own the hardware.
  - Offloading of liability.

Unfortunately, SaaS wins out because it's the best way to make money, especially when one considers that last point about liability. But when you consider 'base' charges versus 'overage' charges, as an industry we might be diving head-first into the same malarkey we have complained about in places like this, on forums, and even in usenet groups: forced minimum pricing and bundling of services.

What's ironic here (for me, anyway) is that cloud computing used to be different; you used to just be able to buy a hosted server. Somehow we jumped from that straight to the current model instead of the more consumer-friendly concept of burstable VMs.


SaaS is also the easiest to maintain, because at any one time there's (usually) only one version of the software running on one OS and hardware version. And the vendor is running it, so access to logs and core dumps is effortless. It's a big problem getting logs, etc. from customers running on-prem software.


I'm very confused. What does it matter, for running old software, whether a framework was a fad or not? And what does a "brittle" dependency even mean in the context of preserving software?

If you're trying to preserve software, you've got two options.

One, you can preserve the binary and run it with emulation. For webapps, this means preserving a deployable image. Your faddish/brittle dependencies are 100% irrelevant, because your image includes them already, right?

Or two, you can preserve the source code. Which for any project is going to involve a mess of complicated tooling that dates to when the project used to be compiled. But if you're intending to preserve something for the long term, you should preserve the dependencies too -- e.g. node modules or whatever. Or else you're gonna have to track down the packages used on a certain historical date, just like you might have to track down some random specific version of a Borland compiler that ran on a certain version of an OS.
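
(To make that concrete in Python terms, since the same idea applies outside the node world: here is a minimal, hypothetical sketch of recording the exact dependency versions next to the source so the environment can be reconstructed later; a real project would use a proper lockfile instead.)

    # Minimal sketch: snapshot the exact version of every installed distribution
    # into a file kept alongside the source, for later reconstruction.
    from importlib.metadata import distributions

    with open("pinned-versions.txt", "w") as f:
        for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
            f.write(f"{dist.metadata['Name']}=={dist.version}\n")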

I don't really see what's all that different.


License servers and downloadable content. Roughly 2005 is about the time when these started to become commonplace, because developers began to assume always-on Internet even for applications that didn't have obvious network dependencies. So even if you have installation media, that media may not have everything needed, and it will likely not have patches for significant bugs that were discovered after it was pressed or the installer was downloaded. I think that's why the author picked 2005 as the cutoff point: prior to then, if you had the installation media / package, you could usually install and run it offline. Likewise, any subsequent updates could usually be downloaded 'outside' of the application and applied manually.


My favourite iOS game from when I was a kid is no longer actively developed and as a result it was removed from the app store because it doesn't work properly on new devices.

It's perfectly understandable why this happens, but it's a shame there is no way at all to play this game other than finding an old iPad that still has it installed, whereas old Windows games still work fine.


Unrelated...

“My favourite iOS game from when I was a kid”

I’m 40 and maybe this is the first time I feel really old on HN. Cheers!


I'm 32 and even I felt old reading that. :D


I'm just hoping he's referring to an easteregg in a cisco router


I figured someone would think that when I wrote it. For context, I'm 21. The iPad was released when I was in primary school and I had the first one.


This is exactly what was going through my mind when I read it too.. how awful


I have a tablet I'm scared to mess with because it has a copy of the Android port of Frozen Synapse, and the game has since disappeared from the Play store.

I paid real money for that game, to boot.


There are apps that let you backup other apps installed on your device.


I love Frozen Synapse. Always interested in the idea of simultaneous turn resolution.


That's the problem of DRMed stores, not a problem of older games per se. For instance, GOG specialize in restoring old games, as long as they are DRM-free. Apple simply never cared about preservation use case.

Don't support DRM with your money if you care about this.


It's not just DRM in the way. Even if I had a copy of the app I wouldn't be able to use it, because no device I have could run it. It's the fact that Apple doesn't give two shits about backwards compatibility. Which is not entirely a bad thing, but it means games will all eventually be unplayable.


> It's not just DRM in the way. Even if I had a copy of the app I wouldn't be able to use it, because no device I have could run it.

Depends on the game. For example, a lot of old Windows games run on Linux in Wine pretty well, despite not working even on Windows anymore.


I feel like this has happened to an entire category of games, broadly defined as "Games you can pay for and play".

Those don't exist anymore. The only games that get released now are "Free games that you pay to make less annoying, repeatedly".

There was a point around iOS8 where studios started scrambling to modify their games from the first category into the 2nd. So previously complete and balanced games like Civilization Revolution started getting new wonders and leaders that you could pay for (that the AI gets for free). Fortunately in that case, they released a completely separate Civ Rev 2 that was built with that crap from the ground up so they didn't need to finish ruining the first one.

But they certainly aren't about to release compatibility updates for those old games, so I keep an old iPad Air with a cracked screen around and carefully navigate the upgrade nag screen every single day to keep it safely on iOS8 so that the last handful of good mobile games remain playable.

One dead battery or mis-tap on the upgrade screen, and they'll be gone forever.


As a retro gaming fan, I can almost guarantee you that there will be (if there isn't already) a way to play those things.

I too miss iOS games that I can't play anymore, like (most memorably) the Texas Hold 'Em game Apple itself made!


I bet a lot of iOS games are gonna slip through the cracks. Take game companies' general lack of giving a damn about archiving their source, multiply that by the games being distributed entirely digitally, and add the limited space on phones/tablets/etc., which means people tend to delete old games that don't get updated.


Also, the reason we have such complete archives of old console games is largely piracy. Back when those games were new, nobody was copying them and building emulators in order to archive them; they wanted to distribute them, and an archive was the byproduct. Most iOS games are so cheap that piracy is not a concern, and there are so many of them that your favourite one probably doesn't have a huge fanbase behind it.


As a rebellious young nerd, at some point I realized that my piracy was more akin to/aligned with collecting/preserving content as I consumed just a fraction of it but still derived joy from the acquisition and storage.

Of course, once I got a job as a developer I paid for everything (too much, really... my unplayed Steam collection keeps growing!)

I still love emulation tech, though


I do not share your optimism. The main post says that you can do this for code up until ~2005, but not since then, and I would tend to agree. Also, many of the games require access to servers, which are no longer running, and even if you could get access to the server code, you wouldn't legally be allowed to run one yourself!



HOLY CRAP, WHAT?!?!


You're basically saying that preservation is easy if the original developers took preservation into account. The article is making the point that developers and ecosystems increasingly don't consider preservation ahead of time.

> One, you can preserve the binary and run it with emulation. For webapps, this means preserving a deployable image. Your faddish/brittle dependencies are 100% irrelevant, because your image includes them already, right?

And what if the software's build artifacts aren't binaries or deployable images? Many projects (most, I suspect) don't output either.

> Or two, you can preserve the source code. Which for any project is going to involve a mess of complicated tooling that dates to when the project used to be compiled.

Well sure, but some projects have more mess than others. That's kind of the point of the OP.

> But if you're intending to preserve something for the long term, you should preserve the dependencies too -- e.g. node modules or whatever.

And if you made the decision to preserve the software long after it was initially developed, what then? Most people explicitly don't put their node_modules folder in version control.

> I don't really see what's all that different.

What's arguably different is that many of the currently popular ecosystems don't really value long-term maintainability, stability, or preservation to the same extent as the popular ecosystems of yesteryear did. Projects in these ecosystems can end up broken on a much shorter timeline than in other ecosystems. Project breakage also has a way of causing cascading problems.

I'm not sure why you're confused; you seem to have a good grasp of the solutions to these problems. Perhaps you're unaware that it's common practice not to implement those solutions ahead of time, and that is exactly what the OP is complaining about?


It's not just the source code, it's also the tools to edit/build it, the directory structure, the operating environment (OS, network shares, servers) and more.

Example: You can still buy a copy of Visual Basic 6 on eBay. Does your laptop even have an optical drive to install it? Do you have the patches that were released for it?


If I wanted to preserve software, I would just upload it to the Debian archive. Then it and all of its dependencies down to the bootloader will be preserved in both source form and binaries for multiple architectures for as long as the Debian snapshot archive is around. Also, Software Heritage would archive the source code of it.

https://snapshot.debian.org/ https://softwareheritage.org/


In the cloud era we can't preserve all software.


Some people question my decision to use only a handful of dependencies in a project; usually it's "You could do it so much faster if you used X, so why not?"

The problem is that every dependency is a possible source of failure. For most languages, you can do a whole lot if the standard library is well written: Python 3.6+ is particularly good for this, as is PHP if you don't mind API strangeness, whereas Go without modules is less convenient, and outliers like node.js are basically unusable.

Also, frequently you don't need the popular "everyone uses it" libraries. For Python, requests isn't needed anymore, as the built-in urllib is quite good now and lacks a lot of "smart" functionality that causes strange breakage. pytest has more rough edges than the built-in unittest. The only things I use regularly that legitimately add functionality are pyyaml and tox (only when testing!).
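
For instance, a plain GET that returns JSON needs nothing outside the standard library (a rough sketch, with the URL made up and error handling omitted):

    import json
    from urllib.request import Request, urlopen

    # Fetch and decode a JSON response using only the standard library.
    req = Request("https://api.example.com/items",  # hypothetical endpoint
                  headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    print(data)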


Funny that you should call node.js an outlier when it's all the rage these days. Node does, however, go to show that short-term convenience trumps reliability and long-term stability by a wide margin.


I wonder if the comment was intended to be mainly about the newer mobile OSes. I don't think I'd have too much trouble finding a way to run 10-year-old Windows or Linux desktop apps, and it would be easier than running Mac apps from the 1990s, but I didn't even get the opportunity to save copies of the iOS binaries I used 10 years ago. Even if I did, I doubt I'd be able to do anything with them.


They're talking about web apps, and mostly JavaScript.


Doesn't make much sense then. Your browser will run those.


I have seen web apps that simply can't be deployed anymore because their Ruby version ships a RubyGems version so old it can't use TLS 1.2. It's probably a good thing and will force upgrades eventually, but old web apps can't always be used.


mod_perl 1. Oy.

Never say you can't rewrite a project. I've done it too many times.


heh


I sure hope not. At my last gig we kept a suite of jQuery, Backbone, ko, and/or Angular1 apps happily chugging away with almost no extra support burden.


How did you keep your developers' careers chugging away? Did "oh, we only used jQuery and Angular1" go over well in the interview?


It goes over great with talented people who are tired of relearning the same things over and over again. The ones that need to "chug" every 2 years can go somewhere else.


Someone has to keep the legacy code running. But seriously though, talented people are attracted to career progression, exciting projects, and money. Those three things do not all coalesce on Angular1/jQuery apps anymore. It just does not happen.


Right, the apps we're actively working on are React+TypeScript+GraphQL. The Angular1 and older apps are the ones that require minimal investment.


If you’d never developed an app/system for more than a few years, I would be thinking of hiring you as a junior engineer.

If you were willingly involved with rewriting your app/system every few years in the latest trendy language or framework, I probably wouldn’t be thinking of hiring you.


Funny, I upgraded from Ubuntu 18 to 20 last night. Ruby Sass wouldn't work anymore. Why? Well, just read what the maintainer said...

"When Natalie and Hampton first created Sass in 2006, Ruby was the language at the cutting edge of web development, the basis of their already-successful Haml templating language, and the language they used most in their day-to-day work."

The page continues...

"Since then, Node.js has become ubiquitous for frontend tooling while Ruby has faded into the background."

https://sass-lang.com/ruby-sass

So I installed the nodejs version. And in a few years I'll move over to the next fad....


Ruby, Python, and to some extent Node have always had problematic dependencies on system libraries.

I can't count the number of apps in these languages where I've had to fiddle with downloading, updating, and compiling libraries.

Java and C# are great about this. Download it, run some command, and you're good to go 99% of the time.


14 years is a long run


Fair point.


consider not using ubuntu


What would you suggest as a replacement?


arch linux, artix linux


And, more importantly, why?


Umm... why? It's worked well for me for over 10 years now.


I worked on my previous project for 22 years (yes, 22, not a typo). It changed a lot over the years, but there was still code recognizable from the early days. At my new job I work on projects that are designed as minimum viable products. Basically throwaway.


As commercial software developers, our job is to write profitable code.

Not bug-free, elegant, perfect code. Not buggy, unusable, dependency-riddled unmaintainable code either.

It's an art getting that balance right. Not everyone does.


Profitable, yes, but considering what time scale? Profitable for six months? Profitable for two years? Five years? Ten to twenty?


Ideally from day 1. The sooner you have the option of profitability, whether the company decides to take the profit or reinvest it, the better your chances of not wasting a ton of money. Even if the actual tech takes a year or two to develop, having a paying customer from day 1 improves your odds.


Depends on the business.

For a bootstrapping startup, yeah, day 1. It has to be. Otherwise you don't eat.

For an Enterprise business with 6 or 7 figure revenues, a lot longer. Possibly decades. If it gets political, never ;)


I also feel old fashioned imperative programming is much easier to debug than modern event-driven frameworks.

Technically unit tests should prove a system is modular and testable but I just keep seeing huge dumpster fires.

I'm not sure if I'm just old or it really is a problem.


When I started out of college as a SWE at HBO in 2009, there was a ton of code written in like 1999-2002 still in use. It was cool to see well-written and documented code live on in real perpetuity. Also, most of the engineers who wrote that code were still there! I think things have both changed a lot in the last ten years (power of iteration), and also things in a stable, profitable company are just a lot different than in a startup.


I mostly use Windows. Most software written after 2005 just works.

Not everything, e.g. I have a flatbed scanner in the attic which requires me to fire up a WinXP virtual machine (I use it a couple of times a year), because there are no drivers for Win10. Many low-level OS-related utilities are broken as well.

However, the majority of software works. For lulz, I once tried to run the pinball binary from Windows NT4; it worked fine on modern Win10.


I don't know... Can I run Netscape Navigator on Ubuntu 20.04? How about Gtk 1.2 programs or old Qt3 programs? Can I run Opera 12.x?


It's easily doable by running a virtual machine. If you have a compatible OS image, it will run out of the box. Now try to do that with an app that depends on one of the cutting-edge development stacks of, say, 2011... Good luck telling me how you can run it as easily as I can tell you how to run the last version of Netscape, for instance: just run a Win98 VM and voila, you're set.


Cleaning up unicorn poop has been an excellent business. Nice to see it will continue for a while.


My buddy has built an entire career hopping from late-stage startup to late-stage startup and refactoring rails into java


>> and refactoring rails into java

Oooh planning ahead for round 2.


And then the next person refactors the Java back into Rails, and the cycle continues.


I think some technologies, especially the ones related to the web, cause this fatigue more than others.

It's probably because web devs build and switch stacks at a higher velocity. I'm in my mid-twenties and have seen 4 monumental changes in frontend development alone.


For an interesting vision of how software could be constructed to give users more agency, ownership, and longevity without ditching the benefits of SaaS, see Kleppmann et al.'s "Local-First Software: You Own Your Data, in Spite of the Cloud" [1]. From a technical standpoint, anyway; what they don't really get into is how subscriptions are just too damn attractive as a business model. If anyone has interesting approaches on that front, I'd love to learn more!

[1] https://www.inkandswitch.com/local-first.html


Submitted and discussed at length a few weeks ago:

https://news.ycombinator.com/item?id=24027663


One comment states:

>Given full blueprints, you can reasonably rebuild any machine from 19-th century from scratch, no matter how complicated. Even if you start from raw materials.

Which is totally untrue; it wasn't the blueprint that mattered but the craft knowledge, and that is often forgotten. We even had to relearn many details of how Apollo landed on the moon because of lost knowledge, and that program was pretty well documented.

Just one of many examples:

https://www.sciencealert.com/why-2-000-year-old-roman-concre...


It's because software design is guided by individualized, balkanized, competitive, hierarchical human incentives rather than public ones. One might define modern computing as the extraneous branching and churning of what amounts to a formal, ever-evolving definition of all human interaction and knowledge. To make that definition properly abstract and infinitely useful, branches must be deleted and unified over time. Branches are useful under uncertainty, in areas where discoveries are still being made; they become impediments in settled areas, and Babel-tower formation begins, increasing brittleness. It's the human-knowledge equivalent of Keynesian hole-filling, but the price one pays for this is software duplication, increasing computational complexity, and increasingly less interoperable systems for like things. Sometimes lightning strikes and we have to build all the rooms again, but that happens from the bottom of the tower up. I think the role of the State should be to unify these conditions over time, so as not to create extraneous noise, confusion, human churn and suffering, and material resource waste.


People are using lots of external dependencies (which expire over time) for a reason. They allow a single software developer to build a social network of very high quality, or hardware that would have taken a whole department to build 20 years ago. The pace is so fast that the system vendors can’t catch up providing all this functionality in their native frameworks, which IMO is the primary problem here.


https://github.com/nixos/nixpkgs/ Everything I care about, fully reproducible.

I don't mean to "hawk my wares" in what feels like many threads, but seriously, this is a solved problem, despite the continued mess-making of people that don't use it.


2013-2019 saw a lot of "rediscovering the wheel", especially with regard to programming languages and frameworks (read: JS).


I blame the consumers. They're being lazy and shortsighted. Computer programs where half of the program runs on some remote server and the graphical frontend is a page running in a web browser are plain stupid for every reason except "easy to update" and "easy to control".

I absolutely refuse to use "web applications" for anything productive. I run my computer programs on my computer, and if they don't work without an Internet connection, I'm not using them either.

My Nintendo Wii has never been online; it's only been used with physical media, so I know it will continue to work as long as the hardware does, and when that no longer works, the software on those DVDs will still work. The same goes for my PlayStation 1, 2 and 3, and my Xbox and Xbox 360.


How can the blame be on the consumer? They don't even have a clue what a frontend is. Just look at Adobe's customers to see whether they are happy with the cloud for their apps.


I blame the consumer because we all vote with our money. Sure, technical proficiency in mainstream users has probably fallen somewhat since the 80s, but people should care about the huge difference between buying a product for their computer and merely renting access to a product. They should understand that they're giving up the choice to stay with a version that works for them and their computer, one that won't force them to upgrade hardware or other software to keep going.


Just wait until all these SPAs stop working without constant maintenance.

Data: Always readable, assuming it sticks to some discernible format.

Data generated by code: Inherits both disadvantages; the changing of data formats AND the ephemeral nature of old code.


I expect that somewhere, someday, there will be a company that focuses on buying failing SaaS products just to have their code as an asset, a quick start toward a real market solution.

Kind of like buying a bankrupt generator company as the basis for starting a car company. You need a liquid-fuel engine design anyway...

(Edit: don't get pedantic about the analogy's shortcomings.)


Perhaps the issue is that it is far cheaper and easier to start from scratch and create a small app these days.

As we innovate exponentially faster and have better tools and languages, it is far easier to tear the building down and start over.

Also, I don’t know of a single app I use regularly that is older than a few years.


> I don’t know of a single app I use regularly that is older than a few years.

What did you use to visit this website? Check your email? Open a spreadsheet last? If you're a software developer, what do you edit code in? Your operating system?

It's hard to believe you're using apps less than a few years old for any of those, much less all of them; or even apps that are older in name but have been through a Big Rewrite in the past few years.


Firefox - July 2020

Mutt - July 2020

Gnumeric - August 2020

Vim - December 2019

OS - Ubuntu, April 2020

They might not have had big rewrites, but they are still under active development (I actually run slightly older versions, but nothing older than 5 years)

The code I use which isn't under active development is stuff I wrote years ago, and it's showing its age; one app uses Flash to deliver video, for example. The last update was a specific view for a BlackBerry, which gives an indication of its age. That view didn't support video, as we didn't have 3G in the fleet.


Your web browser is all I can think of.


I don’t see why this is a problem, I actually think it’s a feature. Do you really want to be using 20 year old software? I don’t. Furthermore, not being able to run old code has never impacted me in a meaningful way and I’m part of the small minority of people who write software for a living. I recon if you go ask pretty much anyone out on the street about this “problem”, they wouldn’t even care.

I think it is fair to say that unchanging software systems can be harmful. The unemployment benefits system in the United States is a good example. They built those systems in the 70s and never touched them again. Now that the requirements have changed due to the Covid-induced economic collapse and the systems are overloaded, there isn't a single person who has a damn idea how to fix them. If there were people iterating on and improving the system this entire time, things would be much easier to fix, or maybe there wouldn't have been a problem at all.

Data is forever, not code. Code should be fickle. Changing code is progress


> Do you really want to be using 20 year old software?

Yes. When a piece of software is working the way I want, I don't want to change it. I want to keep it indefinitely. Aren't there games from 20 years ago you'd like to play again?

When a developer makes improvements, of course I want those updates. Keep me up to date automatically. But let me opt-out. Let me revert. Here's my dream: I'd be able to rewind my whole operating system back 10, 20, 30 years and be able to run and use everything from back then, exactly how it was.


Data in older programs was in legacy formats that often could only be read by that program (and sometimes only specific versions of it). One might assume that since we're using a lot more plaintext formats, our formats and encodings today will last forever, but I wouldn't be too sure of that. Better than it used to be, but it would be hubris to assume that we've hit the pinnacle of perfection of the best possible formats.

And yes, I really want to be using 20 year old software. I have software dating back to some originally written in the 1960s, updated in the 1970s, ported to the PC during the 1980s-early 1990s, which still runs fine today. Some of the programs require an emulator or virtual machine, but they still run fine. With a fair amount of confidence, I can modify and recompile the source code and it will still run. (I did this recently to add JSON output to a program written in 1989-1990.)

The problem is dependencies. Originally they were compiled in or included. But then people got all wonky about code re-use and shared libraries led us into DLL hell. Then we had things like OCX and VBX components that were shared and then vanished. Much of that is patched-over now to avoid some of those problems, and for things from that era, a VM typically runs them.

But more recently dependencies are much greater, and changing much faster, and much more often rely on network connections to specific endpoints which may not be there tomorrow. You can fairly readily pack some DLLs into a VM and run stuff from there consistently, but you can't pack a working snapshot of the entire internet into it.

That's aside from how much more complex the build systems and their dependencies have become. I know for a fact that I could still easily build and run code I wrote >15 years ago today, but not code that I wrote 10 years ago. Not all of the things necessary to build it are still available.

Admittedly systems like unemployment that are in daily use for business purposes probably should be updated. But if I want to play the original Fallout, released in 1997, I should still be able to. And in fact I can. Newer games might not last 5 years. If I want to go back and look at/adjust some vector images I made in the 1990s-2000s, I can do that in a VM, using the source files in the software used at the time, and generate new output files. With today's software, I won't be able to in the future.


Changing code may be progress. Replacing working code with code that works less well is definitely not progress, though.

The problem is that working old code often has far more corner cases and bug fixes in it than you think. It's easy to "replace" it with something that is buzzword-compliant and that uses the new shiny, but that doesn't cover all the cases and doesn't have all the needed bug fixes. That's not a step forward.


> It's easy to "replace" it with something that is buzzword-compliant and that uses the new shiny, but that doesn't cover all the cases and doesn't have all the needed bug fixes. That's not a step forward.

This is a false equivalence, IMO. If you build a replacement system that is buggy and sucks, you did a bad job; it has nothing to do with the tools you are using. This seems more like a criticism of management to me.


> If you build a replacement system that is buggy and sucks, you did a bad job

Or you just didn't know about all the edge cases, niche ways of using it, and the regulation on page 475 of some arcane law code that requires that something be so.


> Or you just didn't know about all the edge cases, niche ways of using it

Sure but this goes both ways. Developers cannot know all future requirements when writing the software. SaaS models largely alleviated this problem because people are continuously working behind the scenes adding new features and fixing bugs


What you say is true. But it's really hard. It's almost impossible to get all the corner cases and only-documented-in-the-code requirements. It's, yes, a criticism of management, but it's not just that they managed the process badly. It's that they chose to go down that path at all.

It is also true that old code gets harder and harder to modify, and requirements do change, so you have to modify the old code. But you're not going to make good choices on whether or when to rewrite if you mis-estimate how hard the rewrite is to get right.


> What you say is true. But it's really hard. It's almost impossible to get all the corner cases and only-documented-in-the-code requirements.

See my other comment about continuous improvement, iteration and SaaS. We can’t expect people to get things right the first time, or ever for that matter.


Code doesn't rust or corrode or accumulate bugs as it gets older. Obviously things can change around it (i.e. dependencies), and that can lead to some mandatory changes, but there is plenty of software that is old, much older than 20 years, that is "done" (i.e. has all the features it needs), is still being built and run on modern computers, and works fantastically. My own software goes back 15+ years now, and over the years I've ported it from 32-bit builds to 64-bit builds and from Qt4 to Qt5. Other than that there have been very few new features or changes in the past years, since it has all the functionality that is needed. And it works great. What's wrong with that?

Changing code (or UI or UX or anything really) just for the sake of changing it is just nonsense busywork to me.


A bit oversimplified in my opinion. Sure, SaaS only works as long as it is online, but with reproducible builds and versioned dependencies most software can still be built for a very long time. So I think for OSS the argument is not really valid.


I really, really doubt I can run most 1980s software on a modern machine without effort.


https://archive.org/details/historicalsoftware

https://archive.org/details/softwarelibrary

You literally click a button in your browser and emulate right there. Have fun.


Folks I work with support some clipper apps written in the late 80s.

It's getting harder now, with 64-bit Windows not supporting 16-bit apps; they are living on a “legacy” Linux VM tenant in Azure now, and will probably continue until around 2030. Possibly longer, as they are running on emulated DOS now!


DOSBox has been something of a godsend for getting a lot of 1980s software working again. Not so great if you also need to interface with proprietary ISA cards, but lots of software can be brought back to life.


Not that this is relevant to DOS software, which needs whole-system heavyweight emulation right from the start these days, but...

It would be interesting to have a solution which allowed you to progressively wrap software to allow it to run in just as much isolation as it needs, from the ultra-lightweight scripts that just futz with LD_PRELOAD (when most of the environment is still there) to actual Docker images (when the OS is still there) to images to run in a hypervisor (when the hardware is still there) to full-on heavyweight system emulation like SIMH, for when even the hardware has gone.

It's all very Tao:

When the userspace libraries have gone, there is LD_PRELOAD

When the filesystem hierarchy is gone, there is Docker

When the OS is gone, there is Xen

When the hardware is gone, there's SIMH. Hi, SIMH!

https://www.youtube.com/watch?v=Vkfpi2H8tOE Laurie Anderson — O Superman


Better yet, PCem. It's a lower level approach, because it emulates the actual hardware (which also means that you have to hunt down the firmware for said hardware, install DOS on it etc). But when it comes to fidelity, it's as close to real metal as it gets - and it's very close indeed. They even have 3dfx Voodoo emulation, for those early Glide-only games.

https://pcem-emulator.co.uk/status.html


> support some clipper apps

As long as you reindex often enough, your Clipper apps should last forever, in my experience.


But it's probably not too hard. Try running something that needs Flash player, or IE6.


Setting up an XP VM shouldn't be that hard.


Microsoft offers virtual machine images for that.



Unfortunately they stopped offering the XP/IE6 VM. I believe they also stopped offering the Vista and 7 VMs as well.



Any modern Linux or BSD box is full of 1980s and even 1970s software. Granted, most of it is still actively maintained, but old doesn't have to mean legacy.


The only really hard things from then to run are things that did tricky hardware-specific memory management or otherwise used hardware that is no longer available.

The only one I can think of offhand is Harpoon II Admiral's Edition, which had its own built-in memory manager that doesn't play well even with virtual machines. But that is one where they did eventually re-release an updated version that runs on modern systems.

Edit: The original release was 1989, Admiral's Edition was 1996. I don't know if they did those memory tricks in the original, but it's doubtful.


Depends on which software you're talking about; DOSBox is pretty easy to use.


I do it every day, it's quite easy actually.


I can run 80s software on a modern PC very easily.


This graph of software dependencies from various sources is fragile, but it's exactly the sort of issue IPFS was designed to solve.

If something like node.js stored all sources in IPFS, the data would remain available as long as someone still builds the software.


As long as you vendored your dependencies it all actually still should run fine. Bitrot is not a new phenomenon. But it is incredibly frustrating.


More than just your dependencies. I've unfortunately seen multiple systems where you basically would need a snapshot of the system it was last successfully built on to build it again. Ideally, that would be less common these days (people using modern CI really helps), but in practice I fear more systems are being built that way than ever before.


Most languages do this pretty well. Ruby and Rust package managers store all past versions of packages, so it's trivial to get the same setup again. It's C where the problems come from. Finding the right packages to build something is quite a task, since every distro calls them different things, ships different versions, and only keeps one version at a time.


To be fair you could track down the git repositories or tarballs and compile and install the right versions. If the author used autotools as is standard, the version number should be listed.

It's not ideal of course. I wish it were standard to build all dependencies from source and statically link a big binary like Go does.


Why 2005?


That's about the start of the Ruby on Rails hype.


I had to get a Rails project from 2008 back up and running not so long ago; it wasn't much more trouble than compiling Ruby 1.8.7 (or was it 1.9.3?) and installing some gems. Admittedly, there were only a few dependencies in the project outside Rails itself.


If you are trying to run a Rails app from 2005, your biggest issue is the trillion security holes in your app, not getting it running, which is usually not too hard.


That is absolutely true, and probably equally true for any other web application that wasn't maintained for 10-15 years. But this article is talking about the ability to run older applications, not about safety concerns about doing so. I just wanted to point out that, while Rails is far from perfect, from a software-archeological perspective there's not really much ground to pick on it. The Ruby gem ecosystem (again, not perfect) is at least pretty stable in the sense that 10+ year old libraries are readily accessible (as they should be!).


Don't forget the JS builder/bundler explosion


That's when everyone started assuming always-connected high-bandwidth internet.

Before that, given the relevant files on a disk, you could rebuild the software. And even on later systems, given a sandboxed VM or emulator, you could run it. After that, neither is guaranteed or even expected.

Unless you also have a fully functioning snapshot of the entire internet from that point in time on your disk/VM.


Surely this has a lot to do with browsers and JS frameworks always changing. Everything non-browser-related is probably fine.


A lot of desktop software has in recent years moved to assume always-on high-bandwidth internet connection and/or moved to a subscription model.


What was it like setting up CI in the 80s or 90s "without too much hassle?"


You typed the compile/build command and it compiled/built. Then you had an executable you could test and/or deploy. Or you got errors, which you resolved until it did compile.


In other words, you had one physical machine per platform, and at least one developer manually running each build on each platform. Or perhaps one developer per platform, if you've got an even bigger budget.

I'm just imagining a quasi-Luddite hacker collective that eschews automation and does their own builds directly on the physical hardware to keep themselves "in touch with the hardware." That actually sounds pretty fun and social. Plus I wonder what the effect would be on catching regressions/issues before release compared to modern CI.


"The developer guide to handling agile" handbook could help if it existed.


It's more like fossil remains from the Mesozoic than litter.


To further the analogy, blockchain is the reusable metal straw that costs 100x the energy to produce, lasts forever, but mostly gets shown off a couple of times before being thrown away anyway.


For that to be true, there would have to be no actual domestic crisis, with poor countries with lax oversight responsible for 99% of the damage, but with politicians passing laws making our lives worse at home because it makes uninformed people feel good.

I'm stocking up on plastic straws and bags while I can.



