There's just a general lack of understanding of web technology among web developers. I don't think many web developers even pay attention to browser features until they are popularized by some new framework.
Frontend developers are bad, but backend developers are guilty too. While frontend developers are breaking scrolling and the back button, backend developers are busy breaking authentication, security, and caching.
I think the only solution is to form some kind of developers' association dedicated to training and best practices. I hesitate to say "union," because the problem of over-under-engineering could be our fault, not necessarily our employers' (although pressuring new grads to deliver code at the same pace as their peers may contribute). It would be great if there were more bootcamps that just trained frontend devs on the latest HTML, CSS, and plain JavaScript before WASM and React abstract the frontend away entirely.
> What's the point when your business is required to support IE9.
EOL for IE9 was almost 2 years ago.
> Nobody would go there because that's not where the money's at.
That's exactly my point. At some point we need to find a way to guide the industry back in the direction of technical sanity. If not bootcamps, then it has to be developers, I guess.
And a lot of the firms that use it also tend to be large and have very deep pockets. You pretty much have to cater to their every need, because they pay well.
It is much easier and less expensive to pay a developer to keep writing those workarounds than to recertify all of the internal applications for a modern version of IE. Furthermore, there is a significant time cost to testing and deploying a new browser to all machines. These times and costs are often several times, if not an order of magnitude or more, the developer cost.
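A back-of-envelope with purely illustrative numbers: one developer at $120k/year writing workarounds, versus recertifying 400 internal applications at even $5k of testing and sign-off each, is $120k against $2,000,000, before counting the rollout itself.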
You're applying logic to a software issue at BigCorp; that assumption is, sadly, wrong.
A lot of the time it's some kind of support contract, though. Somebody at some point had a brain fart and signed a contract for a website running in IE9 for the next 20 years, and now they have to support that, whatever it costs.
- Academia (think: old internal course-registration tools or professor-used CMSes)
- Education
...are not valuable customers? Because those sectors, and many others, are famous for using old versions of software. IE9? Hell, some giant HealthTech companies mandate IE 6 for their customers with no signs of changing. All of those are areas where there is lots of money to be made.
There are valuable customers in software who aren't only young, tech-literate people clicking on ads in a modern browser. Get outside the bubble.
That completely depends on the market you're in. Many governments are still on old versions of browsers and such, and they're also the only type of customer for our products.
>training and best practices <...> the problem of over-under-engineering
That’s what you get once you throw a bunch of tools, planks, and screws at first-graders and let them go. In my 20s I used a few local enterprise solutions to help local micro-businesses do their accounting. The base system was obviously dumb and too restrictive to create “apps (tm)”, but it did its job very well. One specific thing about it was that it was infinitely hard (read: you had to know C++/COM and be able to investigate platform issues) to create extensions for it or to manipulate its basic blocks the way you can with the web trinity. The result was that no one took the long route of dominating the market with shady solutions that look valuable to non-technical people but are technically crap.
Seeing what some devs could do even under these conditions, I can tell that it was a good state of things for the tasks the platform was meant to solve.
I may seem to jump between topics here, but to keep this from becoming a poem: Perl’s TMTOWTDI is highly criticized, but what the web allows is a “there are infinite ways to do it” paradigm with no basic blocks or “best practices”, as you call them.
Summed up: if you’re writing apps that edit objects, there is no room for custom element offsets, scroll handlers, SQL statements (except SELECT in a few cases), or knowledge of HTTP cookies, headers, URLs, etc. If you can’t define a data schema in two clicks, draw a form layout by hand, and write the final logic after five minutes, deploying in half an hour, without caring which platforms/databases/browsers/languages/frameworks/backups/maintenance/IDEs you choose, then there is no room for business or app logic in your head.
>the latest HTML, CSS, and plain Javascript
Today it is like chasing the specifics of plain 80386 assembly: knowledge that ages faster than you can go from zero to top professional experience by reading about it. If things didn’t get better in a decade, then it is a dead end. The plank, screw, and hammer problem never gets old.
Excellent comment. The way we program for the web is ridiculous currently: the abstraction level is way too low.
> Today it is like chasing specifics of plain 80386 assembly
I suspect the fetishization of the unholy Web trinity (HTML/CSS/JS) is impacting the ability of devs to imagine a better workflow. Case in point: Electron. Rather than take some of the lessons learned from the web stack, such as immediate mode UI being a decent default, they just crammed _all_ of it into a Chromium shell and declared it The New Way To Write Desktop Apps.
If the web is the first platform you've spent a lot of time on, then what I'm writing will probably just sound grumpy. And part of it is! But, on the other hand, when something new comes along, it may render all of your intimate JS knowledge useless because it isn't JS, and you need to be able to let that knowledge go. Even if the web as a platform sticks around, history shows that whatever comes next might be less powerful than raw HTML/CSS/JS, but that doesn't really matter for getting most things done.
Whatever comes next may not afford all the little nitty-gritty things you can do via HTML/CSS/JS. Imagine a VB for the web that hides that stuff in a better conceptual model: you lose the ability to do some things (potentially) but get the ability to do other things more easily.
We need more distinct platforms that have specific domain experience and engineering behind them. The "web" is driven by advertisement specialists and vendors; what else is there to say?
I think yes: a couple for business, a couple for games, and so on. One for landing pages and news, i.e. the web. Hint: it’s not the platform itself that matters, but the sort of standing professional committee behind it. E.g. there is no committee now for even basic UI. Ubuntu has one, Mac/iOS has one, even Windows has a native look and feel to a lesser but noticeable extent. The web simply doesn’t. Open the Windows SDK, for example, and feel how many areas are not yet covered and never will be with the current approach.
Chances are I didn’t understand your second question, but if taken literally: I can’t see how usage control may help. Anyone can use anything at their legal will.
That's not a bad idea, although I don't think you could stop cross-pollination of platforms leading to redundancy and bloat.
Perhaps if there were a single (large, but well-organized) team responsible for all platforms, it could correctly identify which features were needed where.
I don't know if I'm making this up, but doesn't overengineering classically refer to the engineering practice of having wide safety margins or "unnecessarily" high performance?
Like, making the bridge twice as strong as it should need to be, or making the car faster than it's wise to go.
That's different from typical software overengineering, which usually doesn't make the product measurably better, just more complicated.
I think it does typically mean that in traditional engineering. There's a disconnect because engineering isn't the greatest analogy for software development.
I think even in software engineering, overengineering can be measurable. For example, say an API has an endpoint that responds on average within 10 ms but is only called once per hour. Spending resources to decrease that response time, if the latency can be afforded, would be overengineering (or maybe just premature optimization), but you could measure the increase in performance. Also, as a side note, overengineering hardware is often good practice. In the examples you gave (e.g. a bridge), a 2x "factor of safety" could be considered low.
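To put illustrative numbers on the endpoint example: cutting it from 10 ms to 1 ms saves 9 ms × 24 calls per day, roughly 0.2 seconds a day. The gain is measurable, just not worth anything.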
A bridge built to be twice as strong is measurably better in the sense that it is stronger, yet no better in the sense that it doesn’t achieve its purpose any better. And making it stronger will involve complicating the design (having a bridge that is twice as strong by coincidence is just luck, not over-engineering).
Comparatively, reimplementing form fields is better in the sense that the experience is more customizable. However, it is no better in the sense that it doesn’t actually help the user fill out a form any more easily. Again, the over-engineering adds complexity.
In both cases, the added complexity has costs which would have been best avoided altogether.
Speaking of bridges, you are IMHO a bit too extreme; a "double strength" bridge is definitely bad engineering.
Over-engineering is (as an example) paving the bridge with a modified asphalt (very good for grip when it rains and in cold, snowy places) in a warm-climate country where it rarely rains and never snows.
The good old definition of (construction) engineering norms revolves (maybe revolved) around the reasonable cost to society of reasonable levels of "safety" and "convenience" (both rather difficult to establish, BTW).
When it comes to computing (and similar fields), the problem is often what I call the "because I can" syndrome. On the web there is over-complication (maybe more accurate than over-engineering) on an increasing number of sites. If you have a poor connection, or not the latest OS and browser, that reduces the number of visitors (or the quality of their experience), and this is the "under-engineering": the site, while possibly offering a superb experience to someone accessing it with a given connection/OS/browser, will be inaccessible or only partially working for a not-so-trifling number of potential customers. So the site does not fully deliver what it is supposed to; more accurately, it is over-complication that makes the result under-performing.
I think that what you're reaching for is "wrongly engineered". Some of the big frameworks go through huge effort for non-requirements (or small ones). I continue to be amazed at the nonsense about hot-reloading and time-travel debugging that people think are framework requirements, while requirements like good bundling, or actual debugger support that doesn't blow up on trivial examples, go unfixed for years. It's those framework non-requirements which fuel the idea that frameworks create bloat.
It's all about minimizing the effect of the non-requirements, careful choice of a minimal set of sensible enough frameworks, and then rigorous agile prioritization at the application end.
I've adapted to this team anti-pattern by becoming relentlessly skeptical and cynical in my work. I don't let anyone tell me something is "trivial" or "easy" in programming unless I've actually seen it (meaning, with my own eyes, not reading about it on a blog), and protest untested ideas from coworkers vigorously, even if that person is a developer on my team whom I respect. I kind of hate this solution because it makes me automatically resistant to anything new whatsoever, but I hate it less than some of the code I've been forced to write to fill in gaps left by some flavor-of-the-month tech.
It's a good attitude as long as it doesn't turn you into an old coot stuck on his VB6. It's good to always be skeptical, but don't let that stop you from trying new things.
> To pick a specific example: the problem with an over-engineered form is that the amount of code required to replace no engineering (i.e. native form controls with basic styling) is enormous and almost always only partially successful (i.e. under-engineered).
> They are under-engineered because they are over-engineered—tried to replace native controls.
> This is the basic argument against most forms of website over-engineering: scroll-jacking, manipulating history with pushState, completely customised form controls, etc.
> Recreating browser features will almost always be both under- and over-engineered.
The worst thing about reimplementing native controls (and what everyone misses) is that those native controls don't go away. Instead, their behavior becomes undefined as the system grows around the reimplementation.
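A minimal sketch of that failure mode (ids and class names invented): the native select is hidden from sight but not from the tab order, autofill, or assistive tech, and nothing keeps it in sync with the pretty replacement.

    // Sketch: a "custom" dropdown painted over a native <select>.
    const select = document.querySelector('#country');
    const fancy = document.querySelector('#fancy-country');

    // Invisible, yet still focusable and still submitted with the form.
    select.style.opacity = '0';
    select.style.position = 'absolute';

    fancy.addEventListener('click', (event) => {
      // The pretty UI updates...
      fancy.querySelector('.label').textContent = event.target.textContent;
      // ...but nothing syncs select.value, so submission, autofill, and
      // screen readers now report a value the user never chose.
    });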
To me it's simple. A lot of web developers pretentiously covet complexity because it makes them feel smart to wield it. Plus, they never really learned that engineering is the divination of simplicity from complexity. They don't get over-engineering, because isn't that just more of a good thing?
I think the main problem is that once something works it's marked as "Done" and won't get touched anymore. I often over-engineer massively until I really understand the problem and have something working. But then I try to simplify and delete as much as possible. It seems a lot of people skip the last step.
Not just web developers... lots of developers of all kinds. I’ve seen enough cases in desktop and mobile where a nice simple 30 line function was “refactored” into a 2000 line, 6 class, monster object hierarchy Because Design Patterns and because Reusability.
>To me it's simple. A lot of web developers pretentiously covet complexity because it makes them feel smart to wield it. Plus, they never really learned that engineering is the divination of simplicity from complexity. They don't get over-engineering, because isn't that just more of a good thing?
I think this creeps in on web development in particular because it is so often intensely boring, menial work. When you're a highly educated software engineer that is being paid top dollar to jam out ad-tracking code all day, you end up just making things complex for the fun of it.
That's not entirely wrong, but I think that engineers fall prey to negativity bias [1] in this area. There are tons of really complex tools that we don't hate on because we barely notice we're using them. They're either things well-engineered enough that the complexity is mostly hidden (think jQuery equivalents), or less well-built things that we learned so early, or use so frequently, that we rarely notice having to deal with their complexity/poor engineering.
jQuery is a nice example. It was a pleasure to use and useful right from the start. Whereas with some modern JavaScript frameworks you have to build a huge machine before anything gets done.
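A small illustration of that "useful from line one" quality (selector and class invented):

    // jQuery: one line, no build step, immediately useful.
    $('.menu a').on('click', function () {
      $(this).toggleClass('active');
    });
    // A framework equivalent typically needs tooling, a component tree,
    // and state wiring before this one behavior exists.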
Read and understand lots and lots of code to the point where you could reimplement it yourself. Eventually you'll be able to see the difference between simple needles buried in artificially complex haystacks, and actual problem-domain-mandated complexity.
If you've read the article, this quote sort of stings in the sense that the article is mostly about labels within frontend development and backend developers are then accused of breaking caching!
There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors. - Phil Karlton, more or less
Can some frontend dev explain this? I've had a rough time w/ caching w.r.t. frontends, but my perspective as a backend eng is that I have to fight to get FEs to obey the cache headers I send. Near as I can tell, their preference is to tell their HTTP library to ignore caching entirely, and then implement it themselves, where stuff gets cached for whatever the dev thought it should be. These sorts of implementations are near impossible to invalidate.
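To make what I mean concrete, a hypothetical sketch of the pattern (the helper and TTL are invented): `fetch` honors the HTTP cache by default, but a layer like this opts out and substitutes its own expiry, which my headers can never invalidate.

    // Hand-rolled cache that ignores the server's Cache-Control entirely.
    const cache = new Map();
    const TTL_MS = 5 * 60 * 1000; // whatever the dev thought it should be

    async function getJson(url) {
      const hit = cache.get(url);
      if (hit && Date.now() - hit.at < TTL_MS) return hit.data;
      // 'no-store' bypasses the HTTP cache the backend configured...
      const res = await fetch(url, { cache: 'no-store' });
      const data = await res.json();
      // ...and this entry only expires by TTL; the server can't touch it.
      cache.set(url, { at: Date.now(), data });
      return data;
    }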
> We share custody over the bratty toddler — custom elements.
Now that Apple has vetoed the `is` attribute, I think the author should be prepared to give up on that claim.
<input is="my-fancy-input" type="text"/>
This will not work. To assign behavior to the input via Custom Elements, your only remaining option is to create a whole new element.
<my-fancy-input></my-fancy-input>
This might append a normal `input` element to the Shadow DOM, but it still won't make the form accessible, or even just behave naturally, without substantial over-engineering. Custom Elements have become a cause of the major concern identified by the article.
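For comparison, a minimal sketch of the two registration styles (element names hypothetical):

    // Customized built-in: inherits the native input's form behavior,
    // focus, and accessibility for free. Safari refuses to implement it.
    class FancyInput extends HTMLInputElement {}
    customElements.define('fancy-input', FancyInput, { extends: 'input' });
    // Used as: <input is="fancy-input" type="text">

    // Autonomous element: works in every engine, but form association,
    // validation, and accessibility must all be rebuilt by hand.
    class MyFancyInput extends HTMLElement {
      static formAssociated = true; // ElementInternals covers only part of the gap
      constructor() {
        super();
        this.internals = this.attachInternals();
        this.attachShadow({ mode: 'open' }).innerHTML = '<input type="text">';
      }
    }
    customElements.define('my-fancy-input', MyFancyInput);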
> Recreating browser features will almost always be both under- and over-engineered.
This problem is already identified in the Custom Elements specification [1], so the browser vendors are simply not with us here.