The older and more experienced I get, the more I find myself turning into a complexity zealot, to a really extremist extent. Every time I see something introduced at any level that isn't 100% strictly necessary to the problem being solved, it triggers a kind of accumulated PTSD that is almost physically painful.
I know everyone thinks they like simplicity but my observation is that about 95% of software developers have a complexity fetish. Literally any opportunity they get to introduce something over and above what is essential to the job they are doing, they will take it. For many it's almost built in as a part of their value system (if all I do is solve the basic problem then I'm just a mediocre, mundane developer, right? I want to be a rock star who develops frameworks and invents new things that other people use). And yet the epitome of genius in my mind is finding the solution that solves the problem without introducing anything at all.
Again, I know 80% of the people reading my comments right here will nod their heads and identify with it and yet still go away and see themselves as exceptions to the rule and create another abstract class or define an optional configuration file or make their code automatically parse an extra environment variable with a regular expression or .... you get it....
We had to add something to the database the other day. Big argument. Should be one to many? many to many? what if this or that happens? what if requirements change? You know what - for the requirement we actually had it was solvable with a single integer field on an existing table. But this is a battle that has to be fought ... over and over and over ...
I had a phone interview the other day - it was set up through Hired, an online platform where recruiters ask you to interview. The recruiter used “good time to meet” to set up the interview, which I then had to accept in Google Calendar. When he called, my iPhone helpfully and automatically sent the unknown caller silently to voicemail, but I saw it ringing and tried to pick up - too late. I then called the number back - it was a Google Meet computer voice that asked me for a PIN rather than being the recruiter’s actual phone. I didn’t get any PIN and the calendar invite didn’t have one. The recruiter didn’t provide his actual phone number, only an email. I tried sending an email but didn’t get a reply. Knowing that he was using Google Meet, I tried to call him via Google Meet but there was no answer. I wasn’t too bothered by it, but I marveled at all the stupid layers of unnecessary technological complexity stacked on top of each other that had led to this situation. Is all of that really necessary to make a phone call?
I have been hiring recently and the easiest process for me is just to receive an email to our hello@dyvenia.com address. I can start building rapport and communicate easily with potential candidates. All I would need additionally is an automated assistant to store the CVs and my feedback and send reminders; it could all be done with email.
I need to go to the bathroom. The simplest thing that solves my immediate problem is to urinate in my pants. I ate a bag of chips and now I have an empty bag to dispose of. The simplest thing that solves my immediate problem is to throw it on the floor.
So it's clear that "the simplest thing that solves my immediate problem", like simply adding a new int field to the most convenient table, can compound into an awful mess. But perhaps "simple" is not the right word here.
I like Rich Hickey's talk on simple vs. easy; we're both using the wrong word according to him. "Simple" means not intertwined or tangled; well-organized. "Easy" means "close at hand" or "familiar". We both mean "easy" here.
That being said, your examples of complexity fetish do indeed sound awful. Abstract classes, optional configuration files, environment variables and regular expressions; we can agree those are awful. Those are neither easy nor simple. But the problem with them is that they're not discussions about the domain; they're truly unnecessary. Maybe that's all you really mean.
>We had to add something to the database the other day. Big argument. Should be one to many? many to many? what if this or that happens? what if requirements change? You know what - for the requirement we actually had it was solvable with a single integer field on an existing table.
Agreed about not inventing requirements, but questions about "how is this likely to change in the future?" are much closer to productive discussion. Discussions about one-to-many vs. many-to-many can also be the exact discussions software developers should be spending most of our time on (although don't get me started on the awful database designs most software has, so these discussions may be inane for that reason alone).
> I ate a bag of chips and now I have an empty bag to dispose of. The simplest thing that solves my immediate problem is to throw it on the floor.
Maybe that's the best solution for the long run, instead of designing and implementing a whole garbage disposal system from the ground up for only one piece of trash.
My problem is that a lot of software developers are trying to solve problems they don't have and never will. This consumes time and adds unnecessary complexity to their projects.
People are bad at predicting the future. Especially when the predicting is done in the five minutes before implementing something, rather than as a dedicated activity that includes interviewing users and domain experts.
I've seen this many times: a programmer is asked to solve a small and well-defined problem. Instead the programmer generalizes it and builds something more universal, with the requested feature as a special case. More often than not, nothing except this special case is ever used.
Or, working on some new project, they add a feature which looks useful in theory but ends up being rarely or never used. It may look easy to implement initially, but over the years the maintenance cost can be much higher.
People are amazing at predicting the future, and in some ways we are better at it than at remembering the past. That's because we use the same machinery to do both: we partly remember the future and predict the past. This ability breaks down with complexity and abstractness, as well as with novelty, all of which are involved in software. (I can tell you that the sun will come up tomorrow, and where I should move my hand to catch a ball, but I can't predict all of the defects my software will have - though if it involves X.509 certificates, I can tell you exactly when a particular sort of outage will occur.)
For me too this took years and years to learn.
It's a hard lesson that, it seems, can only be learned by walking the road and working on a particular piece of software for a long time; at least that's what it took for me.
I guess it's called experience to know when to design and when to just implement. Somebody wrote somewhere for example that if you're not going to need a particular piece of code in more than 3 different places, don't write a function for it.
As a newbie you would totally want to write a function for it, thus also making it harder to read the code as you would have to understand the function in order to see what it does in that context.
Also, thinking in terms of "Do I really need this feature in future use cases?" is something I don't feel you can assess without the experience of having already peeked into those future use cases - where in many cases you will never need that particular function in more than this one place.
But can you learn how to design a reusable system without first doing it in the wrong places? That's something that is hard to say, I don't know.
Could you teach somebody who wants to build complex, reusable components not to do it and just stick to simplicity? How would one then know how to build those reusable systems where you need them?
Maybe we should focus more on training both simplicity and complex design, but you can rarely do that when you are under pressure and working on real-life software.
Haha, on the other hand a lot of developers never grow out of always developing betas; their only concept of programming is developing betas and then dealing with fires all the time.
Haha, touché; I thought I had come up with completely unassailable examples of obviously bad choices but you've made a good point that a single piece of trash on the floor may occasionally be the best option. Engineering is all about tradeoffs, even in the extreme.
This is why my dream is to run a one man software business at some point. At least then when things have been stupidly over engineered I'll have only myself to blame.
I don't have many years of writing software on a team left in me. It really manages to suck all of the joy out of both the process and the result.
You have to remember that for employees or perhaps aspiring employees, solving the problem is rarely the only priority, if it is one at all.
Impressing others on the team, impressing the boss, making complex stuff for the resume, seeming like a contributor in meetings, having interesting tasks, etc.
I've definitely over-engineered things at work because I was bored. There's no reason the login form for an internal tool with six users needs to handle 10,000 requests per second.
This is important to watch for. You can seriously cramp someone's style if you actually solve problems - especially if it's something being milked for complexity theater. Solving problems strategically can be very impressive to management and destructive to goldbricking colleagues.
>Impressing others on the team, impressing the boss, making complex stuff for the resume, seeming like a contributor in meetings, having interesting tasks, etc.
As a boss, I feel like I'm VERY close to a life lesson here.
Put most simply, there is very often nothing that connects an engineer’s best interests to how much the customer likes the product, and plenty of reasons that a product might be seen as a means to an end by the engineer - most significantly the fact that they probably won’t be there in 1-2 years.
I think that once you reach a certain point, you stop believing in magic solutions to all your problems.
You’d think you find this out over the course of a single project, when every new thing you introduce requires extra pain to support in the future, but in reality it takes years of this happening many times before you internalize that.
I do think it’s a bit dependent on the organization you work for though. If you know them well, you’ll realize that adding the single integer field is not going to cut it because they’ll come back in a month asking for the 1 to many relation (if you try to clarify in advance, of course they tell you they don’t need it).
I fully agree with not building for requirements you do not immediately (next month or so) anticipate though.
That said, as you say, I’ll probably go off and add some unnecessary complexity somewhere when I go back to work tomorrow ;)
"The epitome of genius in my mind is finding the solution that solves the problem without introducing anything at all."
Agreed. Albert Einstein once said "Everything should be made as simple as possible, but no simpler." This aphorism sounds a bit silly, but I think the point is that the escalation of complexity needs to be fought with intentional focus.
That said, I think abstraction can be a useful tool for fighting complexity, even if introduced before business needs demand. In other words, developing a series of independent one-off solutions could add more complexity to your system in the long run than finding the right conceptual model that unifies them proactively.
Good abstractions don't just exist to reduce code duplication; they help people reason about software. They turn logical operations into higher-order concepts that are easy to think about. If this weren't true we would write all our programs as single, very long functions.
The right abstractions will reduce complexity not add to it. Consider the following information architecture:
Cats are warm-blooded animals that produce milk and give live birth.
Dogs are warm-blooded animals that produce milk and give live birth.
Humans are warm-blooded animals that produce milk and give live birth.
vs
Mammals are warm-blooded animals that produce milk and give live birth.
Cats are mammals.
Dogs are mammals.
Humans are mammals.
I would argue the latter is less complex even though it contains an abstraction.
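To make it concrete, here is roughly what the second version looks like in code - a hypothetical C++ sketch (none of these types come from a real codebase), where the shared facts are stated once on the abstraction instead of three times:

    #include <iostream>
    #include <string>

    // The abstraction: the shared facts live in exactly one place.
    struct Mammal {
        bool warmBlooded() const    { return true; }
        bool producesMilk() const   { return true; }
        bool givesLiveBirth() const { return true; }
        virtual std::string name() const = 0;
        virtual ~Mammal() = default;
    };

    // Each concrete case only states what is specific to it.
    struct Cat   : Mammal { std::string name() const override { return "cat"; } };
    struct Dog   : Mammal { std::string name() const override { return "dog"; } };
    struct Human : Mammal { std::string name() const override { return "human"; } };

    int main() {
        Cat cat;
        std::cout << cat.name() << " produces milk: "
                  << std::boolalpha << cat.producesMilk() << "\n";
    }

If a shared fact changes, it changes in one place rather than three.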
Also, you can't always fix things retroactively. As an IC, you don't control which efforts are funded. Not all organizations invest significant resources in cleaning up tech debt, even if it helps long-term productivity.
I think the point of the thread is that you're probably not going to need a platypus or a human or even a dog. If the requirements are that you support a cat only, then you just need a Cat class and no big abstraction taxonomy.
“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”
Apparently this is what Einstein has actually been documented as saying:
"...the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience."
It's certainly got the same vibe. I certainly don't want to begrudge Einstein credit here. If you have the dubious honour of knowing physics and German, then reading his work (e.g. [0]) will leave you in no doubt that he was a shining exemplar of this principle.
He does get a ton of stuff misattributed to him though.
Simplicity and complexity are terms we use a lot to argue that something is good or bad, but nobody agrees on what is or is not simple or complex, or how that should be evaluated.
As an example, both Clojure and Go are programming languages that present themselves as focused on simplicity, yet they are in complete disagreement with each other. They have a completely different understanding of what is complexity and how to approach simplicity.
Because of this, when I see someone saying that something is "simple" or "complex" I personally consider that they really express how familiar they are with the approach or concepts used. While we can all agree that modern software is clearly overly complex due to the massive number of moving parts and opportunity for bugs and other issues, I don't feel that we have a good framework to actually evaluate that complexity in a way that could then be discussed in a more serious, analytical way.
Yep, the danger of becoming a zealot and simplifying components too much is that it can increase the complexity of the system as a whole. It's not an additive property; you need to look at it from different angles and layers and understand how it leverages. Go's lack of generics is a common example of this. Another example: UDP is less complex than TCP - until you experience packet loss and reinvent re-transmission yourself in the application layer.
"I know everyone thinks they like simplicity but my observation is that about 95% of software developers have a complexity fetish."
You do not have to be working in the software industry to see this, either.
It's no wonder the majority of these complexity fetishists are paid either from advertising (like journalists) or from venture capital. There is not much careful thinking behind those sources of money. It's "dumb money". Further, no one in their right mind pays to license the "solutions" these "devs" are producing. That is why almost all of it is free. The stuff that is licensed for a fee is marketed to birds of a feather, complexity fetishists within organisations with the power to dictate purchasing decisions.
If the dumb money dries up, the market for the complexity fetish may suffer. Common sense may prevail. Complexity fetishists may find no one is willing to join them in their delusion of "simplicity". If the dumb money keeps flowing then disregard this comment as irrelevant and carry on with the delusion. The parent comment seems to indicate not everyone is fooled, though.
Completely agree. People almost always underestimate the true cost of adding new frameworks/libraries/configuration files to a project. And they almost never remove things that turned out to be unnecessary. And then they don’t understand why it is so hard for them to maintain the software, looking around for something else they can add to “solve the problem”. Face palm moments galore.
This is not unique to software, though. We see the same with the law. Lawmakers often underestimate or even ignore the costs imposed by a new law. And old (often known to be bad) laws are repealed more slowly than new laws are created.
Maybe it is basic human psychology (loss aversion?). People are happy to gain something (even if not very useful) and unhappy to part with it.
That would make it a framing problem:
A loss of complexity can also be seen as a gain in simplicity, freeing up time and resources to be spent elsewhere.
And about laws: I totally agree that it is a huge problem, maybe even a bigger one. Because I have never heard lawmakers talking about technical debt or legacy code. So it seems they are not even aware :)
The personal evolution is very interesting: from complexity zealots to simplicity zealots.
I suggest this has to do with the fact that at very local scale, complexity is not a problem, and individuals are therefore concerned with more of an intellectually elegant solution, irrespective of complexity.
In addition, a complicated but elegant solution speaks to our intellectual passions.
Finally, I think that we compete on those terms far more than we like to admit. Everyone wants to be the 'genius' that others envy, with all the followers, the insightful blogs etc., at least on some level.
As we grow and mature, and our perspectives and horizons broaden and we have little left to prove to others, the novelty of a solution may take a back seat to 'getting it done', and this is when we start to develop a real instinct for the cost of code and complexity. Then we realize that most of our abstractions are leaky, that tons of code is unnecessary make-work; the 'horror' of complexity starts to come into clarity ... and we kind of turn into zealots.
I'm in the simplicity zealot phase and I wonder what's next.
I suggest it's probably just the 'Sausage Phase' wherein we accept what's there, and accept the limitations of individuals and operating environments.
I wonder how we can teach new devs to move through these phases quickly.
> I'm in the simplicity zealot phase and I wonder what's next.
I can kind of sense what the next phase is. I don’t know if it will be the case for sure, but being in this frustrated phase inevitably leads to a quietness inside. When you start out the quietness is not there, so you are always in this anxious state implementing everything under the sun. But now in the aftermath of all the internal directionless energy, the somberness makes me want to make solutions. Build that simpler framework, write that simpler language, make that simple tool/product. Not because you feel the pressure, but because you are finally quiet inside to think peacefully.
It’s odd: we always complain about meetings being a distraction, yet it’s the endless pursuit to live up to the ideal of ‘being a great developer that’s worth the money’ that has always been the most noise. The turmoil manifests in complicated codebases.
> yet it’s the endless pursuit to live up to the ideal of ‘being a great developer that’s worth the money’ that has always been the most noise. The turmoil manifests in complicated codebases.
I'm going to print this. My preachings as a 28 year old have gone and fallen on deaf ears. But I will preach it till I drop dead - Complexity results from people justifying their paychecks. No more. No less.
That's really smart, but I think the 'directionless energy' is phase 1, i.e. not being aware of complexity.
Phase 2, 'anti complexity' is more one of frustration, trying to stop arbitrary things.
I wonder if the 'quiet' is just rising above the noise and accepting it is what it is, and accepting that some devs will build too much, like Wild Stallions, but that they are useful for some things, obviously.
>Again, I know 80% of the people reading my comments right here will nod their heads and identify with it and yet still go away and see themselves as exceptions to the rule
We need people like you. If I could, I would hire you and make you "unfireable". Your job would be to go around and make people do the ABSOLUTELY most simple work.
You could start by shadowing me and yelling at me every time I attend a meeting that I don't need to be in.
i don't think anyone is ever consistently correct about which pieces of complexity are the right ones to take on to make things simple. worshiping & empowering someone whose only message is that other people are inferior & crude & bad, & using that to assume they must be the right proper judge - the platinum citizen - is in my view an abominably horrific way to enforce a narrow viewset.
and it's not who i nor those engineers i respect want to work with: the fear minded. the Muse denialists, those whose bit is to tell you to reel it in.
I'm ancient by tech standards (lol), but one of the things that keeps me in analytics/data science/stats is that I've always wanted to focus on the problem, not the technology. tech is a means to an end, ideally.
But even in my field, culturally we're often under constant bombardment by people internally and externally with new abstractions, tech, brands, techniques, buzzwords, frameworks, methods, committees, collaborations.
The war is literally never-ending...
If it was JUST human nature or something, I think I could silently let it slide, but imo there's a level of maturity/seniority where you realise this attitude and phenomenon limits the kinds of problems you can actually solve.
sure, politics and self interest is everywhere, but as the article is hinting, eventually you reach a point where the complexity means your system will fold and be swept away. And genuinely complex problems or thoughts cannot be had or tackled if the foundations of your intellectual or tech tower aren't simple, strong or sturdy enough.
A lot of time would be saved and promotions earned if we just did the dumb simple thing up front. Later on when everything is on fire someone can migrate to the robust complicated version and get a nice performance review out of it, win-win.
I believe you're equating simple and fundamental, which is risky, because it's not explicit for whom the code is supposed to be simple (reader or writer).
Writing good code for a reader is about abstracting to the fundamental level of atomic logic. But this requires quite a lot of practice and experience, not to mention discipline.
If the code is _simple_ from the writer's point of view, it's probably filled with tacit assumptions (future breakage) and lazy decisions made to get to the point where the thing just works.
Good code requires insight and planning. _My code_ is written from the top of my head to solve a specific issue. I'm an amateur in programming with a long enough academic education to know in detail how I suck :)
If asked I'd probably say I write simple code. But I don't necessarily mean well structured and stable, just that it was straightforward for me at the time of writing it (+ n days).
I agree. But sometimes the obvious and simple solution is not the simplest one. Taking your example about the database: if your table is over 70 million rows and you are using MySQL < 5.6, trust me: it’s simpler to add an additional table than to run a migration on such a big table, which will lock it in production.
It is an unsolvable problem. Arguably the reason pre-NT Windows and pre-OS X Mac OS were inadequate is that they were far too simple. Rewriting those systems and basing them on server-like architectures was absolutely necessary to achieve some level of robustness.
In my experience no amount of zealotry will produce better software. Abstractions are good for things you really need, but you never know how the product will evolve and what you will really need...
There is one more level where you hate everything, but then embrace loving the people that make shit.
I bought a new truck, and after putting 100 miles on it the transmission started acting strange. I take it in, and they don't know what is wrong with it. So, they are going to replace the entire transmission next week.
Now, I could say "oh, why did I buy that brand" or be like "lolz, these people are n00bs". Instead, I realize that they are making something at a massive scale and bad things will happen, so I'll accept the inconvenience and move on with my life. Shit happens, and that's life.
I've hated software for a long time because I value precision and reliability, but these are exceptionally hard to scale organizationally. Suppose you fix reliability today, well tomorrow's feature could just fuck it up.
Now, I still hate software, but I come at with a mindset that software is an artifact of a culture which requires nurturing and love.
> I take it in, and they don't know what is wrong with it.
Modern vehicles are disturbingly complex so I'm not surprised, and a lot is being done in software. For your sake I hope the replacement solves the problem, but I would also not be surprised if it was ultimately a software problem elsewhere.
(As an aside, you may enjoy the Precision Transmission channel on YouTube, although that focuses on the mechanical side of things.)
The melancholy is like that of living in a burgeoning third world city. It’s like, if we just focus on this and that, roads and water, keep corruption down, we’d have one of the best cities in the world - as good any other. Our people are good, but sometimes they can’t make sense of traffic laws and crime.
It’s the only city I know, and I often wonder if we just took care of it, where we’d be.
There are thousands, if not tens of thousands, of cities like this around the developing world. I'd wager the majority of the world's population lives in almost-there-but-if-only cities. Certainly all of India, and a fair chunk of China.
"I take it in, and they don't know what is wrong with it. So, they are going to replace the entire transmission next week."
Isn't the nightmare scenario, though, the one where there is so much complexity that they can never fix the transmission - because the car is so f'ed that it will continue to break itself, and really you should abandon it and buy a new, simpler car?
We can see this happening today, actually - in Afghanistan the organizational complexity has evolved to the point that the Afghan military doesn't actually exist, so a third-rate Taliban military can make an utter mockery of what is/was normally considered to be the strongest military in the whole world...
It does happen in software too; almost every bit of software that's "old and decrepit" was once shiny and new, simple, easy. They all have some glaring issues which only get worse over time, and so it has to be rewritten or replaced. A famous example would be Windows Vista - a rewrite of an OS from the XP days which was so staggeringly bad it took gutting most of it with Windows 7 to get back on track. Arguably, Microsoft never recovered - they went on to fail at mobile, and their strategy to "get back on track" in the cloud is in large part about using a stack they didn't invent (networked Linux systems)...
Another way to look at it: software today commands such complexity that it almost requires mega-corp resources to develop. A Google or an Amazon can pilot a product and make it worthwhile/work, but the tools do not allow smaller competitors to compete well, and a lot of smaller software shops are in danger of going under. I'm not certain that this is completely good or bad, but I certainly disagree, to use another metaphor, that an international business cartel is required in order to produce a tasty, let alone healthy, burger, such as from McDonald's, when I know for a fact that ground beef with a bit of spice mixed in makes for a perfectly delicious (cheaper, healthier) alternative.
Your truck is probably similarly overpriced trash/junk.
Often software is too simple. I just recently got my first "smart lights" (my mistake). I'm using Apple Home to control them. I put 3 bulbs in a 3-bulb standing lamp. Home has no way for me to designate those as a single lamp. To Home they are 3 lamps. The solution is supposed to be to use normal bulbs and a smart switch. But that precludes being able to set the colors of the bulbs. In other words, they thought it was simple but it's not.
It gets worse. At first I thought this would be cool, but now I realize if I have a house guest, they won't be able to turn on the lights if I left them off. Arguably, if the power is cut and comes back on, maybe the lights should default to full white? At least then, in an emergency they just work. But again, designing lights that do that would be more complexity, not less.
I can even add another wrinkle. My Siri is set to Japanese. So, if I had a HomePod (which I don't), my English-speaking house guests would not be able to ask for the lights to be turned on. More complexity, because the world is complex.
This reminds me of a talk by Venkat Subramaniam, called "Don't Walk Away from Complexity, Run", here's a recording of it: https://youtu.be/4MEKu2TcEHM?t=343
In it he talked about inherent complexity and accidental complexity, the former being a part of the domain that you're working in, while the latter is introduced accidentally and isn't actually necessary to address the problem at hand.
Often people talk about managing and decreasing the accidental complexity, but what if instead you could reduce the inherent complexity by just trying to do less? For example, instead of trying to do advanced font rendering for almost every single language and writing style known to man, why not just limit yourself to ASCII, left to right typing and monospaced bitmap fonts in certain devices? I'd argue that in many cases, such as embedded devices, that's all that's actually worth doing, though seeking out ways to not drown yourself in complexity due to having limited resources to get stuff done applies elsewhere as well.
As someone else pointed out, a light switch would suffice nicely, instead of trying to write software to handle every possible use case for smart lights. Alternatively, just have an API for them and allow the people to write whatever code that they want themselves, instead of having a closed, non-extensible solution.
That said, i agree that oftentimes it's really easy to misunderstand how difficult something is, partly because you don't really care about many of the edge cases, such as right-to-left typing or handling kanji rendering etc.
>For example, instead of trying to do advanced font rendering for almost every single language and writing style known to man, why not just limit yourself to ASCII, left to right typing and monospaced bitmap fonts in certain devices?
That works wonderfully if you never deal with people's names and have no interest in making your device available in France, Germany (65 million and 80 million people), or Japan (100+ million). That is leaving a lot of money on the table.
> That works wonderfully if you never deal with people's names and have no interest in making your device available in France, Germany (65 million and 80 million people), or Japan (100+ million). That is leaving a lot of money on the table.
That is a valid point, but i'm thinking more along the lines of smart IoT devices and other embedded settings and there indeed you'll find that localizing your device is sometimes unprofitable and doesn't provide much value at all.
Consider why almost all of the popular programming languages use keywords in English: because it has largely become the lingua franca of the industry. On a similar note, i'd argue that making your smart thermostat output kanji or text in German could actually be more problematic, because anyone who'd end up with the device without speaking those languages and without knowing how to change the language (provided you even can change it, instead of shipping regional SKUs) would find it useless, whereas someone from the aforementioned cultures is more likely to know at least some English.
To that end, i believe that it's possible to make devices available in France, Germany, Japan and elsewhere even if you don't localize them in the local languages there. Of course, your market penetration might be lower than when going the extra mile, though you should really consider whether it's worth it.
Furthermore, it's not like the situation with names would be utterly hopeless - romanization of names is definitely possible, and limiting yourself to just ASCII actually makes your code simpler in many cases, which has historically been one of the reasons various other systems became confused by Unicode and broke in disappointing ways.
Of course, i'm speaking from my subjective experience - living in a country with a few million inhabitants, i'm always using software that's in English. When it's localized in my country's official language, i look for ways to change it back into English, because the localized terms sound weird, the translations are often either badly made, or just feel uncomfortable. This is especially true in software tools like GIMP or Inkscape, because using a non-standard language ensures that you won't be able to browse help forums or the documentation, because almost always not all of it will be translated, or the menu items and nested paths will sound so different that it will be impossible to find what you're looking for. The same applies to hardware in my eyes.
That said, there's probably a world of difference between a thermostat and a smartphone.
> For example, instead of trying to do advanced font rendering for almost every single language and writing style known to man, why not just limit yourself to ASCII, left to right typing and monospaced bitmap fonts in certain devices? I'd argue that in many cases, such as embedded devices, that's all that's actually worth doing
I'm a firmware dev, so I'll tell you how this goes. Over the next 5-15 years project requirements will change, feature creep will set in, and you'll eventually end up with a full blown font renderer. Except it will be utter garbage, impossible to maintain, and it will infest the entire codebase making it incredibly difficult to replace.
You shouldn't go for the most general solution rather you should select the solution that fits the current and near-term requirements. You had best know what the general solution is though, otherwise you're doomed to reinvent the wheel poorly.
Why should our advanced computers not be capable of handling the most basics of communication with every person on Earth? Should we also drop accessibility?
It is an easy trap to fall into. Sometimes more is actually more.
> Why should our advanced computers not be capable of handling the most basics of communication with every person on Earth?
I'd argue that ASCII in English is probably one of the more basic forms of communication with people that you can implement, due to how widely supported the character set is and how many people know English; no other pairing will get you as close to having your device be usable by a global audience as that one. Of course, that doesn't answer your question, so let me try again.
Because there are devices out there which simply don't need to do this. Not every computer out there is an advanced one. Not every computer out there will interface with every person on Earth. A lot of complexity would be introduced for no gain in many situations, therefore it makes sense to choose the simplest option.
For contrast, allow me to show you GNU Unifont, which attempts to encapsulate all of the Unicode BMP; the font weighs about 12 megabytes, which is more memory than some devices even have: http://unifoundry.com/unifont/index.html
From a different angle - it would be nice if all devices were able to do this, but there are so many different languages, types of writing and characters, that supporting all of them will be too troublesome, since the underlying programming languages, the ecosystems around them and libraries won't support them out of the box either.
Not supporting those at the lowest levels of abstraction means that you'll be trying to tack the support on later, like what moment.js attempts to do with the date and time functionality within JavaScript. And that's just a web application example, whereas in actuality these problems are probably most pronounced in embedded devices.
> Should we also drop accessibility?
This does feel like a bit of a strawman, but i'd like to suggest that many radios, thermostats and a variety of embedded/IoT devices don't have a lot of consideration given to accessibility in the first place. I can't recall a time when i could control any of those devices with my voice (apart from integrations with smart systems to control the home, though currently those are a rarity; maybe things will improve in 20 years).
I actually recall a radio that attempted to support multiple languages on an LED display (the kind that typically shows numbers by lighting up segments, much like this: http://www.picmicrolab.com/wp-content/uploads/2017/06/Alphan... ). Let me tell you that their attempt at supporting Russian wasn't legible, and as a consequence i simply couldn't figure out how to switch back to English, which was at least a little bit more readable in comparison. I fear the day when someone attempts to encode kanji in a similarly limited environment, with results that actually make the device less accessible.
I might be horribly misdirected, but that's my answer to you - we shouldn't attempt to do things that aren't feasible with our current technologies, since language support is currently rotten to the core in many of them. Given that this support isn't available out of the box, you'd be pretty hard pressed to support multiple languages in your little Arduino project with a LED display, especially because attempting to do so would make you miss out on actually making it do what you want it to.
Maybe some day i'll just be able to put this in my codebase:
printOnScreen(translate("Device not connected!", getCurrentLanguage()))
And have simple translations made at compile time into all of the languages that the little project could feasibly support (with localization files, which can be edited and then transferred over to the device alongside executables) - but until something like that becomes an actual reality in all forms of computing, it's probably not worth it.
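Purely as an illustration of the shape that could take, here is a hand-rolled C++ sketch; every name and string below is made up, and the table stands in for what a build step would generate from those localization files:

    #include <cstdio>
    #include <cstring>

    enum Lang { LANG_EN, LANG_DE, LANG_JA, LANG_COUNT };

    struct Entry {
        const char* key;
        const char* text[LANG_COUNT];
    };

    // Imagined output of a build-time step over localization files
    // (ASCII-only placeholder strings, in the spirit of the thread).
    static const Entry kStrings[] = {
        { "device_not_connected",
          { "Device not connected!",
            "Geraet nicht verbunden!",
            "Debaisu ga setsuzoku sarete imasen" } },
    };

    static Lang getCurrentLanguage() { return LANG_EN; }

    static const char* translate(const char* key, Lang lang) {
        for (const Entry& e : kStrings)
            if (std::strcmp(e.key, key) == 0) return e.text[lang];
        return key;  // fall back to the untranslated key
    }

    int main() {
        std::puts(translate("device_not_connected", getCurrentLanguage()));
    }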
Domains in which the above comment doesn't hold entirely true:
- desktop computing (even if translations are often poor and OS support for localization can be lacking)
- web development (most large frameworks support plugins for localization, yet loading the localization from files outside of build time compiled ones still is not widely supported)
- mobile computing (given that OSes like Android are pretty decent in this regard)
But it doesn’t replicate the actual features people want from smart lights, which are:
1. Turn them off and on from my phone, or voice assistant thing.
2. Changing the color and brightness of the lights.
3. Doing 1 and 2 automatically.
Having lights change from white to yellow automatically based on the time is something I don’t want to give up now.
It’s the switch that’s the real issue. If your switch was a panel with an on and off button that controlled the smart lights and your lights were plugged into a normal outlet there would be no problem. Consumer tech is trying to retrofit in an environment that hasn’t caught up to commercial lighting systems.
For the record, the above light problems are all solved within Philips's Hue - although they also needed many years and user complaints to iron out such cases.
Even more relevant today. It seems like most software by both large corporations, and open-source projects, is bloated and poorly-designed - from both UI and performance perspectives. Few programs respond as quickly as they should, given project requirements and hardware.
I think this is one of the reasons I'm enjoying embedded programming so much - I can make things as responsive and clear as I'd like, with no OS layer to build upon. I've been doing it in Rust, and among the Rust embedded OSS community there's already a tendency towards complexity-by-default. Examples are execution frameworks adopted without regard to whether they fit a given problem, and over-use of traits, which makes code significantly more complex with questionable benefit.
Challenge: Come up with a list of websites that score above 95 on Google PageSpeed Insights. HN is one. The article linked here is another. Generally, programming-language home pages do well. What about the startup sites under `Show HN`?
What I can’t stand is even just operating systems alone aren’t responsive. We can render complex 3D worlds at 120FPS but somehow opening a freaking menu causes noticeable lag.
To be fair to the operating systems out there, GUIs are part of the program, not part of the operating system. If a menu is unresponsive it is far more likely that the writers of the software messed up and put some heavy lifting on their event loop or misused the GUI framework that they choose rather than any fault of the OS.
That having been said I do agree that it is disappointing how sluggish some apparently-straightforward applications have gotten despite the hardware performance increases over the years.
Take a Windows 10 PC that just booted without loading any 3rd-party stuff. Wait, say, 60 seconds so it should really be ready, hit the Windows key, and there is a noticeable delay. I get that not everything needs to be instantaneous, but even basic OS features like browsing files on a local SSD have all kinds of little pauses.
Mac, Linux, iOS and Android all have the same sorts of issues.
Echoing this. The mainstream compositors all seem to be plenty fast even on very old hardware provided that you have GPU acceleration. Ranger is written in Python of all things and spits out colored text (which has quite a bit of overhead) yet I hardly perceive any latency if I'm running in a decent VTE.
In Android I don't recall the various stock AOSP components having any performance issues, just the custom carrier and vendor stuff.
With Linux as of late, once heavy I/O is happening it doesn't matter what program is running because I can't even move the mouse cursor over to click anything.
Most GUI driven OSes at least endorsed a framework/toolkit even if it's not exactly part of the OS. Some of those make it very hard to do things quickly. I don't think you can make a fast Android app using the endorsed toolkit. It simply takes too long for the widget constructors to run.
Rendering a 3D world doesn't require checking for "suggested apps" (aka ads) and/or sending telemetry on every frame so some oxygen wasters can justify their careers. Opening the start menu on Windows 10 does.
Video games load everything ahead of time and poll user input every frame. Desktop environments tend to be event-driven and load stuff as needed. This is a legacy mindset IMHO. Gnome is getting triple buffering in the compositor. Why not just do the compositing every frame as was intended under Wayland? Because reasons.
> Challenge: Come up with a list of websites that score above 95 on Google PageSpeed Insights. HN is one. The article linked here is another. Generally, programming-language home pages do well. What about the startup sites under `Show HN`?
My open-source project's home page[1] is currently scoring (for me) 98% on the Google Page Speed insights desktop tab[2]. Given that my project is a canvas library and that page is running 4 large/complex canvas demos, I think the Page Speed result says more about Google's test suite than it does about my page. For instance I know that Lighthouse does not include canvas elements when calculating the Largest Contentful Paint metric which my page should, by rights, score poorly on.
Hi - this gets into "what is the nature of embedded devices". I'm referring to ones that don't use an OS, and are programmed to do exactly what they need to accomplish, perhaps with real-time requirements.
> The amount of complexity I'm willing to tolerate is proportional to the size of the problem being solved.
First, most of the time you don't even understand the problem well enough to know its size.
And second, complexity quite naturally increases with (at least) the square of the problem size because not only are there more things, each of them can potentially interact with every single other one. With great skill and effort you can reduce that exponent below 2, but actually reaching 1 is pretty much impossible.
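For a rough sense of the arithmetic behind that claim: among n components, the number of potential pairwise interactions alone is

    \binom{n}{2} = \frac{n(n-1)}{2} = \Theta(n^2),

and that is before counting interactions involving three or more components at once.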
Thus software is going to remain intolerable to Ryan for the foreseeable future.
This argument always strikes me as a cop out. Software is bad due to historical accidents, cultural factors, and short term economic incentives, not irreducible mathematical complexity.
Depends how you look at it. Reconciling input from a hundred different stakeholders all of whom have a slightly different mental model and worldview can definitely lead to mathematical complexity. Is it irreducible? It's sort of a philosophical question as you could keep hammering away at them to build a shared understanding and pare things away at the edges, and you could spend infinite time at this, but in the end you might reach a point where stakeholders would prefer it to fail than to make another concession.
Not quite sure. Nobody says that rocket science is unnecessarily complex, and that it should be solved with simple math.
I am waiting for the application of the proposed solutions for software complexity. I have been for a long time. Nobody really came up with anything, even though many have complained about it.
So far I haven't even seen anyone able to define what complexity is.
We just have code smells (like the code makes your tummy hurt?) and we've got "best practices" with zero explanation where they came from or why they're supposed to work.
I think this is the sort of thing that enters the collective consciousness and then gets generalized, but code smells were originally pretty specific anti-patterns that had specific fixes. All from this book, if I recall correctly.
Cool. I've noticed that general program complexity seems to be a topic that everyone acknowledges, but nobody tries to frame objectively. Except for a few people at the fringes.
I'm not sure that I understand and/or agree with this link. However, I've written and talked about this topic in the past and the response that I've gotten is usually some variation of, "Cool, but I'm not sure that I get it."
It feels kind of like the blind men describing the elephant. People are grasping at something that's probably real, but it's so big that it's going to take a long time before we finally manage to pin down anything concrete. The effort has to start somewhere though.
I'm not sure any attempt I've seen covers everything, but I think it's interesting that people keep trying to describe something that looks suspiciously similar.
Software complexity is how long it takes an engineer to fully understand the form and function of a piece of code, and all the ways that code interacts with the rest of the universe.
This definition means that complexity is completely dependent on the individual. And also probably on how distracted they are and how much sleep they had and how much coffee they've consumed.
I can't accept a definition that makes code complexity go up if the engineer looking at it had to stay up late with a sick toddler. Or that makes a bunch of spaghetti obfuscated code go down because the engineer who obfuscated it put in this one neat trick that only they know about.
Also "the ways that code interacts with the rest of the universe" pretty much pegs all software at a complexity of infinity (got to take into account those gamma rays flipping bits).
Not particularly useful beyond selling an Excel spreadsheet to management that's going to give an "accurate estimation" because this time it's going to be different.
> This definition means that complexity is completely dependent on the individual. ... I can't accept ...
Isn't this exactly how it works though? Many things look hopelessly complex until you gain additional knowledge about adjacent concepts.
Consider how an elegant piece of Haskell code looks to an outsider who isn't yet proficient with the type system. Consider how difficult it is to make sense of call/cc the first time you encounter the concept of continuations, even though the underlying principle is incredibly simple. Consider how apparently difficult it is for many seasoned developers to adjust to the Rust borrow checker.
It's almost like you need a "frame of reference" for judging complexity, analogous to physical systems. The velocity of an object is entirely dependent on the observer ...
I don't think so. To be sure, familiarity can make things easier. So APL looks complex until you get used to it.
However, INTERCAL is designed to be purposefully incomprehensible. You can make it (objectively) incrementally less complex by removing the "please" rule, for example. Similarly, you can make it objectively more complex (in a way that nobody can get used to) by randomly choosing a number at compile time and then requiring every file to have a "please" on that line.
Familiarity can make complexity easier to deal with, but there exists objective complexity which (all things being equal) would make things easier to deal with if it were addressed.
You acknowledge that APL looks complex until you get used to it. So then how can you objectively score the complexity of code written in it? Won't the precise numerical value inevitably depend on the internal state of the individual reading the code? Ignoring this inconvenient fact would seem to lead to fairly useless metrics.
I can see where the notion of scoring overall complexity can be a useful cognitive tool when working alongside people with similar backgrounds to yourself but I don't think there's anything objective about it at all. In fact, I don't think that an objective measure can exist in the general sense.
Note that all I'm arguing here is that any useful software complexity metric will inevitably exhibit a dependence on the individual. I agree that intentional obfuscation adds complexity in an objective sense but would argue that there's no objective way to quantify how much it adds and that any such score will inevitably vary between observers.
I think Code Golf is a relevant example here. Programs of minimal size whose textual representation often looks almost like line noise. They are simple and elegant in the sense that they are small; the meaning per character has been maximized to the best of the author's ability. However, they are also highly complex in the sense that the typical human will find them incredibly difficult to make sense of.
> an engineer to fully understand the form and function of a piece of code
Which is quite impossible. This kind of understanding is a bunch of mental levers, trying to Nicolas Cage every possible scenario to a piece of code.
That's why we have ideas such as clean code and TDD - to ensure that we don't have to fully understand the form and function of a piece of code. Instead, we can be humans, worry about the general design, and allow machines to validate it.
I didn't say "irreducible" at all - just that software naturally gravitates towards nonessential complexity. Avoiding that takes skill, hard work, constant vigilance, and the humility, willingness and ability to go back and redo things. Because it's easy to get it wrong and overengineer something in the name of managing complexity, or because you didn't know or misunderstood something about the problem domain. There's your "historical accidents".
And "short term economic incentives" is just a dismissive way to say that having working software now is a hell of a lot more useful than perfect software at an unknowable time in the future.
There's a deep principle, I think, that required backwards compatibility eliminates the possibility of needed reform.
One could imagine a "bloat free" OS that eliminates the overhead of the paging MMU and has rationalized APIs and does everything differently.
But you need a userspace and you're going to want to run (say) vi as a text editor and you need a shell so you get zsh to run, and you want "grep" and "awk", etc.
So you have to emulate the old API and in the process of doing that all the old bloat goes back in...
The same thing happened with BeOS many years ago. Initially their programming model was a thing of beauty. Then the POSIX layer arrived so that people could do exactly as you described. Some things should remain different.
My interpretation of the above comment isn't that software will always be "bad", more that software will never be perfect. If software were perfect, then it would scale 1:1 with the complexity of the problem at hand (no matter how you define it). No human is perfect; humans write software; therefore software will be imperfect.
There is intrinsic irreducible complexity that comes with many problems and there's no way to fix that. Software solutions also cannot scale linearly with the complexity of the problem.
In fact, it's provably impossible to even calculate the complexity of a problem in general, let alone find the optimal (as in least complex) solution (see Kolmogorov complexity).
Misanthropes generally resort to this world view with respect to humanity because we (outing myself) believe human nature always reveals itself (eg - line up for toilet paper at the beginning of the pandemic).
We are resigned to the feeling that ‘it’s never gonna get fixed’. It’s not a good way to be, and to see this worldview permeate into objective areas of life is even more depressing.
Misanthropy kind of absolves humanity of its nature. What I don’t like to see in software is that type of absolution for code we know is obviously not pragmatic. It’s a tough debate, because often the perpetrators of complex abstraction layers don’t believe it’s bad (any number of us could have been the guilty party at some point or another).
I think we should still try to be optimistic about openly debating complexity, or else we risk allowing misanthropic perceptions to enter this realm too - ‘people just suck at programming, so be it’.
On the subject of cultural resignation, I'm totally with you. It's not a good way to go about life. Though I don't think it's required to come away from my comment with that conclusion. There's plenty of space between "things will never be perfect" and "things will never be better". Software can definitely be better, it's just important to not worry too much about perfection - it really is the enemy of the good.
I am a naively optimistic stoic nihilist that knows humans can do waaay better than we are, at basically everything.
But one can hold both of the opposing views of what the upthread discussion is talking about in human nature and misanthropy. Good architecture encourages the right decisions. So we can both accept that human nature will "come out" and also strive to help our fellow humans make the best possible decisions in light of that.
Anything else is not being true to ourselves and the situation.
This notion confuses me, what is the "size" of a problem, is it not exactly the complexity? Or do they mean the importance of the problem in the application context.
I think it's possible for a problem to be broad but not necessarily complex (which I tend to think of as depth in this analogy).
Same goes for solutions to said problems. A solution can be large but simple, or small but complicated and/or subtle, or really anywhere along both of these axes.
I think a good model for software complexity is the crossing number from graph theory. [1] If a problem remains possible to describe using a planar graph then you must consider at most four components simultaneously, as K4 is the largest complete graph that has a planar embedding (K5 does not).
Years after they became popular, I found an Arduino "starter kit" at a garage sale. Didn't use it because I didn't want even more software complexity in my life. Later I needed to solve a particular problem, so I googled how to program an Arduino. Hmmm. "yum install arduino" and 10 minutes later I had a blinky light and 15 minutes later I had my first actual program running.
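For a sense of how little is involved, the whole "blinky light" program is roughly the stock Arduino blink sketch (reproduced here from memory):

    // Blink the on-board LED once per second.
    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);
      delay(500);   // on for half a second
      digitalWrite(LED_BUILTIN, LOW);
      delay(500);   // off for half a second
    }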
I keep seeing diatribes about how "modern" microcontrollers like the RP2040 are so much better and how the Arduino environment is too limiting. Well the Raspi Micro or whatever it is called is dirt cheap, so I ordered a few. And cracked the manuals, and ran the enormously complicated build environment complete with the complex modern Makefiles I won't understand in my lifetime. And got a blinky light. But I'll be darned if I'll learn all that stuff. But that's OK... eventually everything gravitates to the Arduino model again. The software underneath may be a nightmare, but the user interface is simple and ... inviting. This matters.
I'm no idiot technically. I work on some of the most advanced ASICs on the planet at my work (won't get into details on this). I just don't need even more gratuitous complexity in my life. My Arduino projects are not trivial, some of them in the thousands of lines of code category. I still like the simple, clean environment of it all.
I had to add a feature to a homemade photo repository that required Javascript. Cracked the manual, wrote some first principles Javascript. Oh, how do I do X? By using this gigantically complex framework. Bzzzt. Wrong. I'll do without X then.
I remember when the Arduino became popular, my first reaction was "why do I need all these extra layers, just let me program my ATMEGA328 in good ol' C and call it a day".
I guess it all boils down to finding the right level of abstraction that you're comfortable with.
I largely agree, but I think Arduino is (largely) good because it removed complexity, and not necessarily on the coding side.
E.g. pretty sure you can still just write C for a specific chip with Arduino. I've dropped back to reading an input register directly, either because `digitalRead`/`digitalWrite` is slow, or I want to read multiple inputs in parallel. So maybe here, the layers can be a hindrance, sometimes (but it's certainly more complex).
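To make that concrete, here is a minimal sketch of the two styles side by side. It assumes a classic AVR board (e.g. an ATmega328P-based Uno/Nano), where digital pins 0-7 map onto PORTD; the register trick is chip-specific and won't carry over to other boards as-is.

```cpp
void setup() {
  // Configure pins 2..7 as inputs with pull-ups, the portable way.
  for (int pin = 2; pin <= 7; pin++) {
    pinMode(pin, INPUT_PULLUP);
  }
}

void loop() {
  // Portable but comparatively slow: one library call per pin.
  int viaLibrary = digitalRead(2);

  // AVR-specific but fast: a single register read returns pins 0..7 of
  // PORTD as one byte, i.e. several inputs sampled at the same instant.
  uint8_t portd = PIND;
  bool viaRegister = portd & (1 << 2);

  (void)viaLibrary;
  (void)viaRegister; // nothing else to do in this illustration
}
```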
I also remember the development experience before Arduino. Proprietary compilers/toolchains, awkward programmers and ISPs, and expensive eval boards or the xtal being weird on breadboard. To think AVR chips were some of the easiest to get into. I'll take a Mini/Micro/Teensy + Arduino any day for casual projects, despite some of the extra layers.
Yeah the Arduino libraries are a big help and let you get off the ground quickly. But the point I'm trying to make is that unless you're manually translating assembly instructions into opcodes, you're relying on some layer of "magic" somewhere to make your life easier. I don't really consider it "less complex" so much as "hiding complexity that most mere mortals won't care about".
I have to disagree here. Yes, the Arduino is made for beginners and it is easy (to use). But it is not simple.
Just a week ago I ordered an RPi Pico and set up an equally easy-to-use Rust toolchain for it: compilation, upload and run in one command, "cargo run"!
Some advantages I noticed so far:
- The installed binaries (rustc for arm-v6-thumb etc) are much smaller than the Arduino IDE.
- Can be programmed in standard C, Rust, assembler, or whatever you like!
- Simply add some libs which are not even made for embedded systems and get them running instantly
- Does not require any special uploading driver, just acts as a mass storage device
- Supports a wider range of technical limits (such as logic voltage levels)
- Far more performant while also way cheaper
I can say that I am done with Arduino and for me this is the clear winner.
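For anyone curious what that looks like in practice: with a common rp2040 Rust template (one possible setup, not necessarily exactly what was used above), the whole "cargo run uploads to the board" trick boils down to a couple of lines of cargo config pointing at a UF2 converter that copies the binary onto the Pico's mass-storage bootloader:

```toml
# .cargo/config.toml (sketch; assumes the elf2uf2-rs tool is installed)
[build]
target = "thumbv6m-none-eabi"   # Cortex-M0+ core on the RP2040

[target.thumbv6m-none-eabi]
runner = "elf2uf2-rs -d"        # convert the ELF to UF2 and deploy it
```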
Maybe your bad experience just boils down to using the official SDK, which I agree is overly complex and feels like it was just made to barely function.
The RPi Picos are wonderful. The upload process you described is what they should all be like. Anything more complicated than that is a no-go for me these days.
Exactly. If you need a blinky LED Arduino is fantastic at that. If you need a high-reliability embedded system handling 5 different simultaneous tasks, Arduino is going to be a world of pain. Where in the middle you bother to switch depends on your taste.
I have a couple of Arduino projects that do 5 or more simultaneous things. Temperature sensors busy with a conversion, A/D converters taking readings, timers ticking, serial ports being watched for characters and so on.
You can speak of "hard realtime" but all of this stuff works just fine in a polled fashion, just get through your main loop quickly and don't block on anything. With nothing much to do the main loop cycles 5-10K times per second even on simple AVR328.
And then you have readable, easily maintained code without gnarly interrupt handling, realtime threads and all that.
Need microsecond level hard realtime performance? Then don't do that. Though my superficial reading is that you can partly escape from the Arduino environment to do that too, without having to give up all of it. Need microwatt level power consumption? Then definitely learn to program the bare metal. I haven't had to, yet.
Here's one of my less complex Arduino projects but the only one finished to a "show it off" quality. Realtime? Too limited by the Arduino paradigm? You be the judge..
I have to say, to get practical use out of the Arduino, you have to unlearn the "delay" function. If you never block on the main loop, you can do anything.
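Since "never block the main loop" is the whole trick, here is a bare-bones sketch of the pattern (pin numbers and timings are placeholders, not taken from any of the projects mentioned above):

```cpp
const uint8_t LED_PIN = 13;        // built-in LED on many AVR boards

unsigned long lastBlink = 0;
unsigned long lastSensorPoll = 0;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  unsigned long now = millis();

  // Task 1: toggle the LED every 500 ms without ever calling delay().
  if (now - lastBlink >= 500) {
    lastBlink = now;
    digitalWrite(LED_PIN, !digitalRead(LED_PIN));
  }

  // Task 2: poll a sensor every 100 ms (stand-in for an ADC conversion,
  // temperature sensor, etc.).
  if (now - lastSensorPoll >= 100) {
    lastSensorPoll = now;
    int reading = analogRead(A0);
    (void)reading;
  }

  // Task 3: watch the serial port for characters, again without blocking.
  while (Serial.available() > 0) {
    char c = Serial.read();
    (void)c; // handle command characters here
  }
}
```

The unsigned subtraction `now - lastBlink` keeps working across the millis() rollover, which is the usual gotcha with this style.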
Yeah, you can usually get away with super-loops in Arduino (YMMV depending on the libraries you want to use and your latency requirements). It's conceptually simple but creates a lot of complexity in the code you're writing (you basically wind up writing ad-hoc schedulers for each piece of logic you write). For simple tasks (especially with relatively simple control flow), it can be good enough and the ease of writing it can make it the best approach. As the complexity of your system grows it becomes less and less readable and maintainable.
Really depends on your requirements. One important distinction is that hard-realtime doesn't necessarily mean low-latency or low-jitter. It just means that failing to meet a deadline is a failure of the whole system. You are right that absent other demands on the processor, polling can usually offer the lowest-latency, lowest-jitter response from a system.
the majority of the arduino environment has literally nothing to do with the complexity of getting your program onto the device itself.
you can have a very complex environment while still maintaining a very simple upload functionality for the device itself. there isn't any reason it can't be as simple as `dd if=/myprog.elf.bin of=/dev/arduinoserial`
I simply do not give a shit about whatever the latest opinionated framework/tech of the week is. All I've gotten to is the point where what I really care about is "Can I fit all the necessary bits and bobs in my head?" The rest is just branding, mind share, and the cult of complexity.
Example: Yarn
I finally figured out why I hate it. It tries to "help" you optimize your runtime namespace via hoisting, which causes higher-level builds to create a distinctly different runtime footprint than running each of the lower-level builds individually.
What. The. Heck. Who wants to inflict that on a user? I don't use a computer to do things in ways where I have to go through the torment of cataloging every bad assumption at every level. I'll take something more pessimal but easier to track against reality over something more optimal yet opaque, any day.
What we need to figure out how to do is to inoculate new developers against ever caring about the latest "opinionated framework/tech of the week" in the first place :(.
> What we need to figure out how to do is to inoculate new developers against ever caring about the latest "opinionated framework/tech of the week"
Capital dictates what tech is popular, not actual popularity or effectiveness. So abolishing Silicon Valley and the global north IP regime is the only strategy that cuts at the root of this problem.
Ultimately, yes, but good luck ending IP imperialism. It would force the global north to actually build what it's consuming instead of extracting rents from intellectual property. That won't happen.
1. most of these things are the "new old thing", and more experience/education/history would dampen enthusiasm for these re-hashes of old ideas
2. a lot of people/tech is on a revenue treadmill (especially consultants and companies that want to market the new hotness) so we get bombarded with reinventions of the "flat tire"... it's like junk food with new packaging: NEW FLAVOUR! we keep falling for it...
3. our jobs as coders can be boring and we look for new toys for stimulation and make things interesting again... if only for a little while...
FWIW, I believe those people are often the very amateurs who are drinking the koolaid themselves (and when they aren't, they are parasites that are more "fans" of software than developers).
Well, let’s talk about that. These bloggers are taking part in a type of traffic generation that either generates direct revenue, or indirectly generates leads via branding (let’s say they get paid for their reputation).
What actually keeps that business model going? They have to write newer and newer shit about newer and newer shit. That’s the whole game.
Very few of them ever write mea culpas - ‘Hey, I pimped something to you guys for a while, and it turns out that tech really sucked’ or ‘Woah, I’m so sorry about espousing this pattern or framework, it’s not that great’.
That destroys credibility. I don’t really think they are sincere, the same way the media broadcasted everything Trump did to make him popular, followed by broadcasting everything bad he did because he’s the wave they needed to ride.
The insincerity is not even close to being bad in tech compared to politics, influencers, etc, but it’s there.
They probably care because they want to stay employable. We live in a system where if you can’t work you are fucked (for most people), so yeah, learn React, Redux, NPM, CSS frameworks, etc. Whatever is needed so you can get that pay rise or just stay employed.
Also, what are beginners doing who are trying to break into coding? Learning as many frameworks as possible. They want to get that first gig, a rung on the ladder, and double the salary from their previous non-coding job.
So you are saying we need to figure out first how to make companies not get infected by the latest fad, so people don't think they need to learn the latest fad to get hired? I was under the impression that companies only develop with fad technology because amateur developers are attracted to the fads and kind of force companies to adopt them. In which case, if developers didn't find fads fun, companies wouldn't use them and they wouldn't be a sadly useful way to get hired...
There's a bit of truth to both, imo. There definitely are companies that adopt new technologies so that management or technical leadership sounds good, and certainly also to attract employees perceived as being into the latest fad.
This goes too far for me. There's nothing wrong with trying out the tech of the week and learning from it. Just don't go crazy needlessly rewriting old code or adding new frameworks to things all the time.
The best software was written in the 90's - Delphi - everything else is still catching up. All web frameworks are awful, because the browser is awful. There, I said it.
The problem there is that reinventing decades-old technology while believing one can simply shrug off uncounted millions of hours of technical problems and troubleshooting by doing so is in fact extreme hubris. Keeping the existing stack is much more humble.
Ever since I read the Joel on Software article on rewriting [0], I tend to share this opinion. However, there comes a point where it's much better to shuck some or all the layer of the tech stack.
It's inarguable that our current state of hardware and software is a festering pile of legacy layer upon legacy layer. C, x86 and Unix got us far and they were the only tech in their category that could do what they did, but the world has come a long way and they come with essential downsides (in both senses of the word). There will be replacements for each, eventually.
C is finally meeting its match in Rust and certain other contenders, while x86 is sort-of under siege by ARM and RISC-V. It will take a while before it happens, but we will be better off as an industry. We will build back better, simply because we have the benefit of hindsight now.
I agree the end result could very well be better. But, I’d also say it could very well be worse. As a result, it’s risky, and therefore ambitious rather than a humble thing to do.
Perhaps, but we still have the old thing to fall back on for a little while if the new thing is really that bad. This will give it enough time to iron out the bugs.
For example, Linux is in the middle of switching out the display server for something more modern, but not everyone has moved on because the new display server is still missing features and interoperability that the old one had. Everyone will eventually move on from the old one, but it's not like it's automatically overshadowed just because there's a new thing.
I would say Wayland vs X11 is a great example of how a rewrite can go wrong. In theory, Wayland should be better because it's easier to reason about and more restricted, but in practice, X11 being "dumb" (anything goes) is probably what led to its mass adoption to begin with. This mismatch in expectations has led to a falloff of support in what would otherwise be a thriving open source project.
Several of our general purpose platforms that have mass appeal (TTYs, the telephone systems, the Internet, FM radio, heck, even Fax) are built on unopinionated, stupid pipes. Correct me if I'm wrong, but Wayland is very much not a dumb pipe. As a result, it's risky in my book.
It's hilarious that you use "hubris" to describe the old stuff which works but is long in the tooth. Are we actually going to reinvent all the wheels, completely destroy backwards compatibility, and come up with a computing platform that is a pleasure to use and develop for? And who's going to explain to grandma that it's better even if it won't run the bridge game she bought in 1996?
What's the immense hubris? The reinventing the wheel constantly?
I mean rebuilding from the bottom up would be an example of that hubris at work... So I feel like I'm not understanding what you were trying to convey.
Remember back when websites were organized according to a hierarchy and if you wanted to find, say, all the reviews of MP3 players you just clicked on the MP3 player section and saw them all? Now you have to punch terms into a search box and sort through a massive unorganized list of results, only some of which are relevant. If you missed something you'll never know about it.
Finding stuff on any shopping site is hell. Search for "bike" and get bikes mixed up with helmets, lights and weird stuff. Looking at you, Amazon, but really no one has got this right.
Indexing against an ontology is becoming a rare skill. Popular comprehension has become so hocus-pocus that schools now teach kids that if a book has an index it's non-fiction. Meanwhile, Grammarly assists people in faking literacy. Wiktionary has become the best Scrabble dictionary. Late 19th century people of letters are no doubt turning in their graves.
I believe the thinking is that it's the need for that as an ontological divide, rather than "it's telling a story about a thing that didn't happen". The divide being "heuristics about things it has" rather than "critical analysis of what it is".
Footnotes. I hadn't thought about footnotes. The winner here has to be Jonathan Strange and Mr Norrell, some of whose footnotes are micro-stories thrown in as a bonus for attentive readers.
Although Jonathan Strange and Mr Norrell doesn't have an index.
Man, I was just thinking about this last night. I recall an assignment in middle school where I had to write a poem with a simile in it (among other literary devices). I lost points because there was supposedly no simile. When I went to the teacher and told her "there is a simile: 'the cat fell in much the same way as a boulder'" (or something to that effect),
I was told similes have to have the word 'like' or 'as' in them. Which is just not true, but has become such a common heuristic that even English teachers think it's the definition of a simile.
sigh
I totally did that on purpose though, cause I wanted to see if she would dock me a point for it. I was such a little prick.
Being a prick to clueless teachers was a perverse joy of mine in my K–12 years. I enjoyed coming up with the most galaxy-brained solutions to assignments.
Same. However I went too far. A philosophy teacher left the class room, and refused to continue teaching my class, after I politely started arguing against her reasoning and showing counter examples. They had to find another teacher to take over the class.
Strict consistent hierarchical classification is ultimately impossible.
It's used in domains where access is based on physical location (as with libraries), though there are numerous alternatives to subject-based grouping used (especially with closed stacks).
If you ever want to see librarians in a heated discussion, tell them of your plans to develop a consistent hierarchical topical classification scheme.
That said: there are useful classifications. None are perfect, serve all needs, please all parties, or are fully consistent. But most are better than a haphazard mess.
1. It's harder than it looks.
2. The hierarchy is stubbornly inconsistent.
Take the example of MP3s, or more generally, audio files vs. text vs. images vs. multimedia.
Do you classify all audio independently of any text, regardless of whether it's, say, a history of WWII in Czechoslovakia?
Is a historical study of military technology used in Czechoslovakia during WWII filed under history (WWII), geography (Czechoslovakia), technology (military), or military science (technology)?
What happens when countries change or borders shift (Holy Roman Empire -> Austria-Hungary -> Czechoslovakia -> Czechia)?
Or when a region is disputed (West Bank, Taiwan, Kashmir, Oregon Territory, Malvinas, ...)? Is Ireland part of the British Isles?
Is it "mythology", "fiction", "religion", "theology", "philosophy", "cult", or "mass delusion"?
Even if it's a single person's classification, conflicts rapidly emerge. Increase the number of catalogers, or add multiple readers/users, and you'll start getting complaints and exceptionally strong disagreements.
I'm not opposed to the practice. I've attempted it myself numerous times. I'm working on several approaches presently.
Dude I just want content to be preserved in an organized way instead of a stream of consciousness where anything that isn't on the front page disappears into the void forever.
How about tag-based systems as an alternative to a hierarchical one?
The tags can be presented as a hierarchy, if you like. Or better, as several different hierarchies, or perhaps alternatively, as a web or space with multiple entry points. Depending on how you slice it (and where that slice begins and ends) you'll end up with different organisations, though each will have at least some internal consistency.
The hierarchical ontology is a DAG. It's got a root, it propagates from the root to leaf nodes, and there is no recursion.
A fully tag-based system is neither directed nor acyclic. My view is that the better ones appear as DAGs within a localised area, but that constraint falls away at a broader level.
The main problem with tag-based systems is that they tend strongly to be "folksonomies", where terms are not controlled and there can be multiple variants of spelling, naming, etc., often reflecting personal, geographic, and temporal preferences / idiosyncrasies.
That's where a controlled vocabulary or defined set of relationships / associations (perhaps probabilistic) helps. E.g., "Yerba Buena" -> "San Francisco", "Frisco" -> "San Francisco".
The Library of Congress Classification and Subject Headings make for an interesting study. They're quite extensive and have evolved over more than a century. They have their own inconsistencies, idiosyncrasies, and controversies. Part of their strength is the surrounding infrastructure established to support and address these. There are of course other classifications.
How exactly is a tag-based system not a DAG, regardless of whether tags can imply other tags or not?
> It's got a root, it propagates from the root to leaf nodes, and there is no recursion.
A network of instances and tags satisfies the same properties. Do you have any counterexample? If not, I have to dismiss the notion that "tag-based systems" stand in opposition to "hierarchical systems", since every network of objects and tags is a DAG.
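A toy illustration of that claim (made-up item names, not anyone's real system): store only item → tag edges and derive the reverse index. Since edges never point from a tag back to an item, the directed graph is bipartite and trivially acyclic, yet you can slice it into as many "hierarchies" as you have tags.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

int main() {
    // The only edges we store: item -> set of tags.
    std::map<std::string, std::set<std::string>> tags_of = {
        {"wwii-czech-military-tech.pdf", {"history", "wwii", "czechoslovakia", "technology"}},
        {"mp3-player-reviews-2004.html", {"reviews", "mp3-players", "audio"}},
    };

    // Derived reverse index: tag -> items. Browsing "by tag" or "by item"
    // is just a different slice of the same flat, cycle-free structure.
    std::map<std::string, std::set<std::string>> items_with;
    for (const auto& [item, tags] : tags_of)
        for (const auto& tag : tags)
            items_with[tag].insert(item);

    for (const auto& item : items_with["history"])
        std::cout << item << "\n";
}
```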
It's a mistake to have nostalgia for UNIX. There are many things it didn't do right and has never done right.
For instance signals don't make sense for many of the things they're used for.
I was highly satisfied with X Windows in 1993 compared to Windows 3.1, but it seems Windows has gotten better over time while the Linux desktop has gone in reverse, and I can date it to the introduction of KDE and the very idea that Linux has to have a "Desktop Environment" as opposed to just "X Windows". (The very idea that you should allocate enough space for what you're going to draw in the space seems absolutely foreign to everybody involved.)
People decide to install these desktop environments. I love my Linux experience. I have X Windows and the dwm window manager, running on Debian. Everything responds instantly on a 2-core 2 GHz processor with 4 GB RAM, machines you can get for $100, and I can control my windows from the keyboard.
> signals don't make sense for many of the things they're used for
Wut? They're literally the only way to do non-cooperative preemption in userspace. (You can't do that on Windows without a custom kernel driver BTW.)
Meanwhile, if you don't need non-cooperative preemption then don't use fucking signals. Use POSIX shared memory and a pthread mutex. Or use sockets. Whatever.
It isn't the fault of POSIX when someone chooses the wrong tool for the job. That's like complaining that the local hardware store does things wrong because you went and bought a jackhammer when what you really needed was a sledgehammer.
You often need to use signals, unfortunately, and they stink. The only way to spawn multiple processes and be told when each of them dies is SIGCHLD. I mean, I suppose you could spawn N threads and call waitpid on each one, but it would be so much lovelier if you could poll on an fd to know when a child dies. Maybe pidfd now provides that, I hope it does.
Windows has it right with WaitForMultipleObjects supporting mutexes, child process deaths, socket and IO reads and timeouts all in the same interface. Even the BSDs have kqueue now. Only Linux is stuck with the alternatives.
Linux has signalfd(): use it! It avoids the nightmare of having random syscalls interrupted, which is impossible to test and almost always results in failures in production. If you can't use signalfd(), spawn all your processes / threads from a thread that does nothing other than handle SIGCHLD. I've been through the hell of doing it other ways, and it is hell to get correct on a codebase with more than a couple of developers on it, as very few developers remember to think about the consequences of a signal being received during every single syscall or library call.
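A compressed sketch of the signalfd() route (Linux-only, error handling and the actual fork/exec elided):

```cpp
#include <poll.h>
#include <signal.h>
#include <sys/signalfd.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Block SIGCHLD before spawning anything, so it is only ever delivered
    // through the fd and never interrupts an unrelated syscall.
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGCHLD);
    sigprocmask(SIG_BLOCK, &mask, nullptr);

    // Turn the signal into a pollable file descriptor.
    int sfd = signalfd(-1, &mask, SFD_CLOEXEC);

    // ... fork()/exec() children here ...

    // Child deaths are now just another readable fd in the event loop.
    struct pollfd pfd = {sfd, POLLIN, 0};
    while (poll(&pfd, 1, -1) > 0) {
        signalfd_siginfo si;
        read(sfd, &si, sizeof(si));
        // Reap everything that exited; several children can hide behind
        // a single SIGCHLD, so loop with WNOHANG.
        int status;
        pid_t pid;
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
            printf("child %d exited\n", (int)pid);
    }
}
```

And yes, on newer kernels pidfd_open() gives you a per-child fd that becomes readable when that child exits, which is what the earlier comment was hoping for.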
Well fair enough, I stand corrected about "misuse" of signals in certain cases. Nonetheless, non-cooperative preemption is a useful (though oft maligned) tool. The things you describe aren't actually problems with signals but rather other APIs that force you to use them.
I'm curious. While it does add a (very small) bit of complexity, can you not just spawn a single sacrificial thread for receiving SIGCHLD and set dispositions accordingly? Then just use standard concurrency primitives to communicate with that thread in whatever way you'd like.
> (The very idea that you should allocate enough space for what you're going to draw in the space seems absolutely foreign to everybody involved.)
Modern windowing doesn't work like this. It's like saying calculators should allocate enough memory for all the possible numbers they will ever compute.
I get where he’s coming from. I’m guessing everyone here has had a script they wrote for five people balloon into a multi-team project. People wonder why you, the original author, were so lazy with your architecture even though you thought it wasn’t going to survive more than a week.
The problem is that people aren’t willing to start over and rewrite software so you end up trying to bolt on features to an architecture that can’t handle them and the complexity becomes necessary for the software to function. C++ is the poster child for this, where the syntax for pure virtual functions is asinine but necessary because there was no other good way to do it at the time. If they had just rewritten C++ (and called it, I dunno, D for example) then they would avoid the complexity entirely.
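For readers who don't live in C++: the "asinine but necessary" syntax being referenced is that a pure virtual function is declared by tacking "= 0" onto it, a quirk kept for historical reasons rather than a dedicated keyword:

```cpp
struct Codec {
    virtual ~Codec() = default;
    // "= 0" marks the function as pure virtual: derived classes must
    // override it, and Codec itself cannot be instantiated.
    virtual int decode(const char* in, char* out) = 0;
};
```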
I respect Ryan but he comes across as someone ignorant of the realities of real-world SWE.
> The problem is that people aren’t willing to start over and rewrite software
If I had to continuously re-write projects in new languages, I would never get anything done. There are a lot of projects that we all depend on that are pretty much one-person projects, and not everyone has time to learn a new language + ecosystem, then re-write, introduce new bugs, fix those, etc.
Then of course, the rewrite doesn't always have exactly the same interface (API or user interface). So now you foist your new preference on everyone else. Many of them will stay on the old version, thus fragmenting everything even further. Look how long python 2 to 3 took (and is still not 100% complete).
I come from scientific computing, and I have a rant bubbling up about how modern software development is being pushed into science, and probably slowing it down as a result.
I should make a blog post about it, but I guess I would need to make a blog first :)
> I should make a blog post about it, but I guess I would need to make a blog first :)
Jonathan, you can use a Google Doc, or a gist, or a bug in a GitHub project to get your message out. I recommend you do.
I am a failed scientist who now programs in tech. I have helped grad students train up on unit testing, version control, API design, etc. I think modern software development needs to be brought into science so that science is reproducible. Without reproduction, we have nothing. So I disagree, but I would love to hear what you have to say. Probably that our definitions of modern don't match and that you are drowning in sprint planning and story points, 1 beaker to 3 beakers.
Short version of rant, unrefined and unpolished :)
I basically agree about bringing modern software development into science, and I teach grad students and postdocs all over the country as well. But I'm looking for a better way.
A major problem is constant churn. We used to teach Travis CI. That seems to have fallen out of favor, so now we teach GHA. But what about all those projects still using Travis? What happens if/when Github or GHA fall out of favor?
Same with python packaging. setup.py vs. setup.cfg, pypi vs. conda vs. poetry vs. whatever else. Various ways of specifying requirements. Plus docker on top for some people.
How to build/host docs is yet another thing to figure out.
GH Pages, netlify, or RTD? Depends on the language? Now lots of projects are both C++ and Python, and cross language documentation is hacky at best.
And what about git? Lots of non-science software developers have issues understanding that, what chance do informally-trained scientists have?
And now we are starting to see people wanting to use Julia and Rust. Or to program web apis and web pages (html/css/js).
Each of these issues may not be too bad on its own, but they all add up. When you already know about all these things, you drastically underestimate how difficult they can be to understand for someone new.
Tech companies and startups can keep up with this stuff. I'm not sure scientists' time should be spent doing this.
[PS: Not sure why you think my name is Jonathan, unless I said that somewhere. If I did, I was lying :) ]
I agree with everything you wrote. Software development is a mess.
To git specifically, git is hard for everyone, anyone that says otherwise is either lying or doesn't know what they don't know. I said as much at the software carpentry classes I helped with. The lead scientist trainer didn't really like me saying that, but it is the truth.
Humans have a fixed decision budget, we can only handle so much complexity before our brain turns off. Do we want to spend that solving domain problems or on huge software stacks?
I guess this is where the folks like us could do a better job to make simple, obvious ways to do things. There is a lot of value in standardization and a reduced amount of choice. Hobgoblins and all.
I knew there was a lot of complexity in our tech stacks, trying to bootstrap smart people from basically nothing made it even more apparent. But I don't think the answer is to get scientists out of writing code. Code is the mirror to science as science is the mirror to our understanding. It rigorously and formally, when used correctly, closes the loop.
For all the problems that Python notebooks have, I think they make an excellent way to convey ideas, but they need history and a way to make them reproducible.
Apologies for the mistaken identity, I thought you were a different seagull.
> Humans have a fixed decision budget, we can only handle so much complexity before our brain turns off. Do we want to spend that solving domain problems or on huge software stacks?
Bingo.
Sounds like we are fairly in agreement. Scientists sometimes have to write code, there's no way around it. But I wish there was more help for all the ancillary stuff.
I want scientists working on their simulation, not debugging why their docs build pipeline fails once in a while, or have to investigate some warnings about needing to upgrade their chosen distro in the CI pipeline.
> There is a lot of value in standardization and a reduced amount of choice.
The only issue here is today's standard is tomorrow's "old-fashioned way" - and I'm only half-serious about "tomorrow". So even if they get everything set up the standard way, they have to know enough at some point to migrate it. I still don't have a good answer for that.
What do you think about having a versioned "science stack" for various disciplines? Genomics, Neuroscience, Astronomy, Condensed Matter, Materials Science, etc. It would be somewhat prescriptive, but if you stay on the ~~rails~~ (pun not intended) happy path, then a lot of your needs and choices are made for you.
Everything is containerized. All the batteries are included. Each commit also records all the tests to assist in any upgrades that might occur over the life of the project.
The stacks would be versioned, but also include a mechanism to update individual components w/o breaking the whole system. The root-node containers would have source, and one should be able to "rebuild world", so we aren't relying on cached packages from a distro that is long dead. I am trying to revive some software that was last installable on Debian 4.
The whole workflow's ultimate product is a paper and an executable notebook in a way that is easily reproducible by anyone. The full software stack is rebuildable from scratch using only source (like a FreeBSD build world). One should be able to take a paper from an archive 50 years from now and recreate a bit for bit copy using one command.
I had an idea a little while ago: constantly rewriting code. It was inspired by the idea of doing something repeatedly allows you to improve. So constantly rewriting the same bit of code (with changes) would allow you to get better at writing that sort of code (or solving that sort of problem). Every time you'd end up doing it slightly differently and learning something.
I also came up with a different idea along the same lines while writing this comment. Weakening it to rewriting code regularly (instead of constantly) and adding a (rough) time limit. A reasonable time limit could be anywhere from a day to a week. I think it'd work like this: you'd build a project for the first time (in a few days) and use it for things. Later when you find a problem with it, you'd return and rewrite the whole thing (again in just a few days). I think the combination of rewriting from scratch and a time limit would limit the complexity of each project. One thing I haven't fleshed out yet is how to limit the depth/complexity of dependency chains.
Software is incredible. Here we are, thousands of people across the globe, having thoughtful and interesting conversations, often on up-to-the minute information. The sum-total of human knowledge is available at our finger tips. We can fly safely around the planet.
OMG, this list would go on forever if we sung the praises of software.
It's complex because it's doing complex things, that no civilization has done before -- That no civilization has ever even contemplated before.
But, yeah, OK, the glib object model is complicated. I'm sure you would have designed it better, and simpler. But you know what? You didn't.
> if you add unnecessary hierarchies in your code directories
That's one of the things I've noticed a lot especially with open-source projects on GitHub, and it absolutely does irritate me when what should really be a single-file or maybe at most 2-3 file project turns into a dozen files of little content spread out across a maze of deeply-nested directories. I wonder what compels people to do that --- is it a fetish for organisation, attempting to emulate other projects, or trying to make something look more complex and "impressive" than it really is?
I've heard it said that expert code looks like beginner code, except simpler and more correct.
It's enforced in Java that 1 class = 1 file. And it's highly encouraged in C#. Also encouraged in React with components. So what you end up with is classes that are too big and files that are too small.
This goes hand in hand with a lot of editors (vs code, sublime text, atom) having the ctrl-p shortcut to find files.
Every source file in C corresponds to an object file, and the linker can only exclude unused code if the entire object file is unused. So if you want to optimize your library for size when linking it statically, it needs to be split up into many source files.
The opposite problem, that the compiler can't optimize over object file boundaries, was the impetus behind Link Time Optimization (LTO). But that could also be handled by pasting all the source into one giant file. See SQLite's amalgamation for one example: https://www.sqlite.org/amalgamation.html
> The opposite problem, that the compiler can't optimize over object file boundaries, was the impetus behind Link Time Optimization (LTO). But that could also be handled by pasting all the source into one giant file. See SQLite's amalgamation for one example: https://www.sqlite.org/amalgamation.html
Unity / jumbo builds are not that uncommon but they are not exactly the same as LTO:
1. You no longer have compiler-enforced private interfaces (i.e. static in C, anonymous namespaces in C++) as everything is one compilation unit.
2. As a consequence of 1. all names must be globally unique. This is less of a problem for new projects, but means you can't just turn on unity builds like you can with LTO.
3. With LTO, parts of the compilation (parsing and initial local optimizations) can be done for each source file and then cached so they don't need to be re-done when another source file changes, allowing faster incremental builds. On the other hand, unity builds allow the compiler to only parse each header once (and for C++ only instantiate templates once for identical parameters), which can significantly speed up (single-core) full builds.
4. Compilers traditionally were mostly single-threaded and relied on compiling multiple files in parallel to make use of multi-core CPUs. This approach works with neither LTO nor jumbo builds, but with LTO, compiler writers have been working on parallelizing the optimization process itself. The same could have been done with jumbo builds, but if you are going to involve compiler modifications anyway, you might as well build LTO to avoid 1. and 2.
LTO with a linker plugin (or unity builds with -fwhole-program or equivalent) also covers your size optimization as the compiler can now see which functions go unused.
It’s funny that’s the first (and often simplest and most gratuitous) form of abstraction. What does compel people to start here? It’s like when you read about serial killers: the first thing you find out is that they started with torturing animals.
I almost want to make a simple programming interview test (perhaps a Rorschach test) where the task is to take something simple and create a directory architecture - the more files the better.
I’d only hire the person that doesn’t do the task. If you see dozens of files when the picture is presented to you, then I’m worried.
I had a fun conversation with my brother the other day. He was working on an R script (given to him by his lab) to perform some data collection. The script was no longer usable and he had to rewrite some parts. He was annoyed.
Most of the time, software is written until it works just well enough to be just reliable enough to deliver value. And then abandoned. Engineers move on to solving the next problem using that previous bit.
We only go back and edit that previous bit once we get to a point that the new thing becomes limited by that previous bit. That is to say, once we encounter a fundamental issue of the old bit.
We can't write 'perfect' code today to build on in the future. We have no way of knowing what will be built on top until it's built! At which point we discover the fatal flaws. And then go back and rewrite. Or design a new language/framework/tool etc that avoids those issues. So of course this is the pattern.
Separately, engineers have become end consumers of software. And that software is produced by other engineers, who are also using software.
End consumers may not care about how software is made (though some certainly do, especially once they notice bugs, degraded performance etc caused by technology decisions etc - see 1Password's plan to switch to Electron).
But what's the alternative? How could we otherwise avoid this and all the other problems Ryan points out? Endless rewrites of existing layers with no progress made?
Editor configurations, file structure, variable names etc don't affect end users. But they help engineers who help end users.
Lots of progress is made to 'simplify' processes. But of course, those simplifications create their own new things that have to be learned, along with their own issues. And those new things are usually built on those old bits.
It's easy to download a docker image today and with a single command spin up a React project, running its own databases, and express server and not have to worry about configuring paths, installing dependencies (the right way), configuring ports etc.
But now we have to know about Docker. And Docker itself is built on those old bits.
If there's a better solution (besides the pedantic write perfect code) I don't see it. But I'm all ears.
Every problem has a baseline level of complexity beyond which it cannot be simplified further. Once you're at that baseline, simplifying one aspect of the system necessarily increases complexity in some other aspect.
This shows in spades in Computer Science. At the end of the day, you have an address space. The things you can populate that address space with are either instructions or data. You are populating that address space with the right patterns to get useful value out of the underlying machine.
Everything above that - abstractions, languages, tooling, mental models - is best collected (in my opinion) as tools for a toolbox (filling out your workspace horizontally), but not piled up too high on top of each other. Once you start piling abstraction on abstraction, it gets whack. Real fast.
Complexity of adding cutesy icons to our build outputs, configuring WMs and IntelliJ/vim is entirely external to the end product of our work. Those are purely the tools we use to view and manipulate our actual work, the source code. It's harmless to the end user.
It's also optional, unlike the very real criticism about the complexity of the rest of the ecosystem. What the author wrote about things like Boost, DBus, autoconf etc are exactly the things that threw me off from the C ecosystem and into ecosystems that have their act together to some extent, like C#, Java and Rust.
> We collectively decided that Unicode is a thing now (including terminal emulator support), and that emoji are a part of it.
That sounds like democracy applied to the IT world, and we all know that "democracy is the least worst system". Democracy doesn't necessarily imply rational thinking;
e.g., we are building websites using PHP and WordPress, not Lisp (well, except HN, perhaps).
My understanding (probably erroneous) of the article is: we, as developers, are condemned to use not-so-well-designed software because other developers don't know better and keep a) building bad software, b) using bad software.
As usual, the majority is the driving force; the question is: are you part of the majority or are you like Ryan? As for me: I'm like Ryan only on Mondays, mostly.
Sorry, I edited my comment. I realised that wasn't what I quite wanted to say.
But to respond to your comment, I treat Unicode as a special case, because it is the result of a reform of text encoding from the ground up. Before Unicode, we had proprietary character sets and codepages, mostly mutually unintelligible.
Unicode brought order, interoperability, and the ability to represent in digital writing whatever might conceivably need such a representation, now or in a hundred years. It's forwards-extensible, so future cultures and alphabets will be also well-served.
I think it's just complex enough for the size of the problem, so it's actually a counter-example of the contrived complexity represented by PHP and WordPress.
U+0007 represents the character for ‘make the bell on the teletypewriter ring.’ This is hardly used by anyone today, but it remains a part of the Unicode standard and probably will forever. I don’t want to disparage Unicode, because it does provide a good solution to the problem. But it has its own issues and its own degree of complexity.
This issue is something I’ve been thinking about lately since since beginning to use Gemini. Gemini is a minimalistic file transfer protocol, like HTTP but for transferring plaintext. But it builds on top of Unicode, of course, and so it inherits all of Unicode’s complexity. Suddenly, the most difficult part for Gemini client authors is not implementing the protocol itself, but correctly handling the complexity of Unicode text rendering. (E.g. https://gmi.skyjake.fi/gemlog/2021-07_lagrange-1.6.gmi)
Obviously, I’m not proposing replacing Unicode. But Unicode is complex, and that complexity ends up being inherited by all projects built on top of it. That’s an example of complexity that’s super easy to miss unless you’re looking for it. Which I think is the original author’s point. Unless you actively try to decrease complexity, complexity only increases.
> U+0007 represents the character for ‘make the bell on the teletypewriter ring.’ This is hardly used by anyone today
Sure it is. In some (many?) terminal emulators it makes the screen flash.
Some programs use it to signal that a significant error has occurred that might be noticed amongst all the blather that command line programs barf out these days.
A better example might be country flags. When the political situation changes or the flag is redesigned, how should Unicode and the various fonts handle this? Presumably text written in the past shouldn't change for viewers in the future.
Unicode mixes abstract concepts and concrete representations together on a number of levels. It seems like a legitimately difficult problem to me though. An 'a' character might be written differently but remains fundamentally the same thing since ancient times. Meanwhile the spelling and meaning of words shifts over time. So how are you supposed to handle pictograms that can change simultaneously in both commonly accepted representation and fundamental meaning?
It's also ... weird ... that a TTY bell has a dedicated not-glyph in a writing system, isn't it? Colors are (appropriately, imo) handled via escape codes. I certainly don't want support for HTML flashing text to be added to Unicode! I do appreciate the historical context that led to the current situation though.
Unicode handles country flags quite well, precisely because it's a controversial topic. Unicode doesn't define which flags are available; the ISO country codes, font vendors, and OS vendors (or emoji picker vendors) do. As a fallback, we see the country code letters instead of a flag.
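Concretely, a flag isn't even a single code point: it's a pair of Regional Indicator Symbols spelling the ISO 3166-1 alpha-2 country code, and it's up to the font whether that pair gets a flag glyph or falls back to the two letters. A tiny illustration (assumes a UTF-8 execution character set, as on typical Linux toolchains):

```cpp
#include <iostream>

int main() {
    // U+1F1FA (REGIONAL INDICATOR SYMBOL LETTER U) followed by
    // U+1F1F8 (REGIONAL INDICATOR SYMBOL LETTER S): renders as the US flag
    // if the font supports it, otherwise as the letters "US".
    const char* us_flag = "\U0001F1FA\U0001F1F8";
    std::cout << us_flag << "\n";   // UTF-8 bytes: F0 9F 87 BA F0 9F 87 B8
}
```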
I'm aware, and don't think I'd agree that flags (or emoji more generally) were handled well at all. I generally think that such things should be handled via an escape code that invokes some other (higher level) standard, the same as colored or bold font in the terminal. It would arguably make much more sense to encode flags (and emoji-like things in general) as some sort of compressed SVG representation. At least then (among other things) I wouldn't have to wonder how things would look to message recipients or someone reading old messages in the future.
>Gemini is a minimalistic file transfer protocol, like HTTP but for transferring plaintext
It is my understanding that Gemini allowed you to transfer any file type and came with one simple file type defined. Is this an incorrect understanding?
No, you’re correct. I was oversimplifying in an attempt to ensure my comment didn’t devolve into a description of Gemini. For people interested in learning more, the Gemini project page is very readable.
Don't the other languages add additional complexity by folding such things in though? If the core system is simple then you will inevitably want to bring in various modules in order to simplify the particular task you're working on right then. Integrating those modules adds its own complexity.
Meanwhile, the complexity to support a more advanced core system has to show up somewhere. Someone has to do the complex tasks at some point. For example, in the case of Java even the simplest "hello world" program requires an absolutely enormous runtime environment including the infrastructure for an entire VM.
My solution to this problem (for personal goals, not collective ones) is to spend time building tools, abstractions and interfaces that make me happy, to the point where, when I'm building actual content, I only interact with interfaces designed by myself.
What about hating having to understand the complexity of how this post moved away from Google Plus due to various corporate and other forces? Here is the original link pointing to G+: https://news.ycombinator.com/item?id=3055154
This is me. I've gone all the way from low level C programming to frontend. There is so much arcane information to keep in mind to keep yourself productive. And today's companies require you to handle politics as well, under the guise of ownership.
Amazingly, software engineers just refuse to standardize and move ahead. Instead, reinventing the wheel is far too common. Reinvention is necessary but sometimes, good enough is good enough. Most engineering disciplines work on the standardization principle, which improves productivity and predictability.
That said, companies and customers don't care about the underlying software, so I'm not going to waste my time on arcane knowledge either.
"I hate almost all software... That I didn't write... For my particular needs... Without deadlines or managers or diverse opinions... Or any attempt to imagine how to make it more useful to others."
> The only thing that matters in software is the experience of the user.
I'm sure the users of Horizon had a great experience with it while they figured out who to fire. Those who were unjustly let go may have also wanted to matter to the software creators!
No, your point is valid. I agree that the user experience is not the only factor. But I just wanted to highlight the real impact of the Horizon IT disaster on individuals.
If this problem was universally solvable, someone would have done it and made millions.
If you want a flawless experience, you will have to pay for it. Big time.
Oh, and the title should read "I have a love-hate relationship with software", since apparently they still use different types of complicated software.
And that's from a developer point of view. As a user, most software is slow given that it's running on a blindingly fast supercomputer, less intuitive than it should be and just gets in the way a lot of times.
Ironically, iOS Safari keeps doing unwanted forward and back navigation as I try to read the comments, and when I return to the page the scroll position is lost.