I've always felt this problem from the first time I touched Angular. It was just so much more complex and fragile without much actual benefit, unless you wanted to make a really interactive async application like Google Docs or Facebook Chat.
When SPAs became the norm and even static web pages needed to be built with React, development became more and more inefficient. I saw whole teams struggling to build simple applications and wasting months of time, while these used to be developed within a couple of weeks by just 1 or 2 developers on proven, opinionated server-side frameworks. But that was no longer according to best practices and industry standards. Everything needed to be SPAs, microservices, distributed databases, Kubernetes etc. These components and layers needed to be glued together by trial and error.
I am really happy that common sense is starting to return and more developers are starting to realize that integrated end-to-end frameworks are very useful for a lot of real-life application development scenarios.
> When SPAs became the norm and even static web pages needed to be built with React
I'm in a weird situation where I'm contracting into one organisation and they've contracted me out to another. The first organisation know me as a senior dev/architect with 15 years experience in a niche domain. The second organisation see me as brand new to them and despite paying an embarrassing day rate are giving me noddy UI tweaks to do. Extracting myself is proving to be slow somehow.
Anyway, they wanted a webapp with a couple of APIs and nothing on the page but a button, the authenticated username and a line of text. Clicking the button opts you in/out of a service and the text changes depending on the state. The sort of thing people go to once, maybe twice.
I used a mustache template on the server side to populate the values and I didn't even bother with any javascript, just did an old school form submission to the API when the button was clicked and a redirect back to the page.
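Roughly this shape, as a sketch (Express and the `mustache` npm package are stand-ins here, not necessarily what was actually used; `isOptedIn`/`toggleOptIn` are hypothetical helpers for whatever the real API calls were):

```js
const express = require('express');
const Mustache = require('mustache');
// hypothetical helpers standing in for the real opt-in API calls
const { isOptedIn, toggleOptIn } = require('./optin-service');

const app = express();
app.use(express.urlencoded({ extended: false }));

const page = `
  <p>Signed in as {{username}}</p>
  <p>{{statusText}}</p>
  <form method="POST" action="/opt">
    <button type="submit">{{buttonLabel}}</button>
  </form>`;

// The server renders the whole page; no client-side JavaScript at all.
app.get('/', (req, res) => {
  const optedIn = isOptedIn(req);
  res.send(Mustache.render(page, {
    // header name is illustrative; however auth actually surfaced the username
    username: req.get('X-Authenticated-User') || 'unknown',
    statusText: optedIn ? 'You are currently opted in.' : 'You are currently opted out.',
    buttonLabel: optedIn ? 'Opt out' : 'Opt in',
  }));
});

// Old-school form POST to the API, then a redirect back to the page.
app.post('/opt', (req, res) => {
  toggleOptIn(req);
  res.redirect('/');
});
```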
It was tiny but, obviously, it was decided "we should be using a more modern framework" - code for React. It was the word "more" that got to me, as if there were an equivalent, dated framework I'd used. I didn't put up a fight, partly because I was new to the team and figured they were hot on React and I wasn't. Somehow, they made a complete hash of it; they couldn't even figure out how to get all their inline styles (the only styles they used) working without help.
I guess it's just those classics; people want to learn the hot new thing as they see it, their managers are happy that they've heard a buzzword they recognise and then everything becomes a nail for their new hammer.
It's interesting to contrast this kind of organizational behavior with the type where "management won't give us time to deal with technical debt". Though arguably using an over-complicated framework creates more technical debt, so from a certain perspective this is the other end of the same scale.
It seems to me what we want is some kind of "Platonic ideal" where the extremes are bad for development:
Management won't give us time to deal with technical debt/incorporate best practices.
|
|
THE IDEAL
|
|
Management wants the Hot New Thing all the time. uservices are the state-of-the-art in best practices so uservices it is!
The best advice (IMO) in dealing with the top end of this spectrum is to frame technical debt and best practices in terms of whatever economic metric the manager cares about (e.g., "TDD will lead to fewer bugs, ergo happier customers, ergo greater retention"). But I wonder if the same framing can be used to encourage temperance in managers who dwell on the bottom of my spectrum.
I somehow suspect it won't work. The problem with those at the top is that they see the dev's proposition as an investment that will either not bear fruit or, worse, slow the team down; so they are incentivized to keep the status quo. Those at the bottom, however, start from a perspective that their idea will add value to the team, so they keep pushing for it no matter what. And telling them "Let's not do what Google does; we're not Google" is definitely seen as a devaluation of the team.
This has been a weird headspace to explore. I'd love to hear from others' experience on dealing with this.
IMHO the problem is that, reusing your scale, I typically see:
Management that doesn't really understand tech and is afraid to try things
|
|
THE IDEAL
|
|
Management that doesn't really understand business (and sometimes tech too) and is focused on tech fashion
Surprise: the ideal is hard because it requires both tech and business experience to recognize how best practices and new tech could bring value in a specific context. The job is to make clients happy by solving their problems, with apps that are nice to use, bug-free, performant, maintainable, evolvable, with a usually short time to market and obviously for the best cost. Sometimes that equation is solved with a complex stack, architecture and practices with hundreds of engineers, and sometimes with a few web pages, inlined CSS, a bit of vanilla JS, and a solo dev.
Every time that sort of thing has happened to me it's been because there's some grand plan to build out more features that the people on the frontline don't know about. The plan rarely materializes but the idea that the foundation should be built in a way that supports it isn't completely stupid.
It’s not stupid, no, but a “supporting foundation” is largely just a seductive metaphor. It says, “Clearly software is like a building. Every building needs a solid foundation.” It doesn’t inspire engagement with other metaphors, like considering software to be a tree that must be grown incrementally and as a product of dynamic forces. It doesn’t map knowledge from the building domain to knowledge in the software domain.
Or that, with software, you can always rip out the foundation and replace it. And you're working on it as you work on the rest of the "building" anyway.
The difficulty of working on lower abstraction layers doesn't scale with the number of higher layers. Unlike with buildings or bridges, there's no gravity in software, no loads and stresses that need to be collected and routed through foundations and into the ground, or balanced out at the core. In software, you can just redo the foundation, and usually it only affects things immediately connected to it.
A set of analogies for software that are better than civil engineering:
- Assembling puzzles.
- Painting.
- Working on a car that's been cut in half through the middle along its symmetry plane.
- Working on buildings and bridges as a Matrix Lord who lives in the 4th dimension.
All these examples share a crucial characteristic also shared by software: your view into it, and the construction work on it, happen in a dimension orthogonal to the dimension along which the artifact does its work. You can see and access (and modify, and replace) any part of it at any time.
The real "foundations" of a software system are probably its data structures rather than the infrastructure/backend. It's still an iffy metaphor though for the reasons you've given.
I love this insight, I just recently learned about the hidden HN feature to favorite comments and used it for the first time to favorite your comment. It's always a pleasure to read your comments on HN, I noticed your handle popping up here and there and would like to thank you for your contributions. If you had a collection of all your comments on HN printed in a book I think I would buy it:)
Biggest problem with templating libraries like mustache is they aren’t context aware, so it is up to the programmer to remember the proper way to escape based on where a variable is used.
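A quick sketch of what that means in practice (using the `mustache` npm package; the `url` value is a contrived example):

```js
const Mustache = require('mustache');

// {{url}} gets HTML-escaped, but HTML-escaping is the wrong kind of escaping
// for an href context -- the engine has no idea where the variable landed.
const html = Mustache.render('<a href="{{url}}">profile</a>', {
  url: 'javascript:alert(document.cookie)', // survives HTML-escaping untouched
});
console.log(html); // <a href="javascript:alert(document.cookie)">profile</a>
```

The same value dropped into a `<script>` string or a CSS context would need a different kind of escaping again, which is exactly what the programmer has to remember by hand.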
Honestly when they come with decisions like that, I'd like them to spend some time on a formal writeup - have them prove they understand the problem, the existing solution, the issues with it, and why React or technology X would solve it. Have them explain why they / their employer should spend thousands on changing it.
I mean it'll only take them an hour or two to write it up, better to spend that than the 20 hours it would take you (for example) to spin up a new stack.
Anecdotally I find the opposite to be true. I've been writing frontend code for over a decade, but I've never moved faster or written less buggy code than now. Is that because I've become a better developer? Sure, a little bit. But by and large, I don't believe that ultimately is the reason. I think it's the maturity in the technology. My growth as a programmer is hardly linear and the past 5 years have not matched the growth I achieved in my first 5. Frontend tooling has never been better than it is today.
What I believe is that the bar to build web applications has been lowered, and there are more programmers today than ever before. You have people who are not experts in frontend development and JavaScript trying to build complex UIs and applications. So you take this person who doesn't have the requisite experience and put them to work on a paradigm with a lot of depth (frontend) using frameworks that are really simple and easy to get started with, but that compound problems as they are misused.
Another factor is that since SPAs are stateful, complexity mounts aggressively. Instead of a stateless page being refreshed every few seconds, a single page accumulates bugs that rear their heads for the duration of the session. These inexperienced people are put in charge of designing codebases that don't scale and become spaghetti. But when designed properly, these problems are largely negated.
I'm not advocating that SPAs are the solution to all problems. I think there's gross overuse of SPAs across the industry, but that is not an indictment of SPAs themselves. That is someone choosing the wrong technology to solve the active problem.
With respect to Angular (1, I never touched 2) specifically, I always found it extremely overengineered, poorly designed, with terrible APIs. But that's a problem with that specific framework and says nothing about SPAs at all.
> Frontend tooling has never been better than it is today.
What's the library or design pattern to consume a REST API in React or any of the mainstream front-end frameworks? The only thing I'm aware of is Ember Data but Ember is apparently not cool anymore, and I couldn't find a suitable replacement.
I'm asking because in all the projects I've been involved with, consuming the backend API always felt like a mess with lots of reinventing the wheel (poorly) and duplication of code. I can't believe in 2020 there's not some kind of library I can call that will give me my backend resources as JSON and transparently handle all the caching, pagination, error handling (translate error responses to exceptions), etc and people have to do all this by hand when calling something like Axios.
In contrast, Django REST Framework handles all that boilerplate for me and allows me to jump right into writing the business logic. It's insane that ~30 lines of code with DRF (https://www.django-rest-framework.org/#example) gives me a way to expose RESTful endpoints for a database model to the web with authentication, pagination, validation, filtering, etc in a reusable way (these are just Python classes after all) but the modern front-end doesn't have the client equivalent of this.
> I'm asking because in all the projects I've been involved with, consuming the backend API always felt like a mess with lots of reinventing the wheel (poorly) and duplication of code. I can't believe in 2020 there's not some kind of library I can call that will give me my backend resources as JSON and transparently handle all the caching, pagination, error handling (translate error responses to exceptions), etc and people have to do all this by hand when calling something like Axios.
If you look at 20 REST APIs you'll probably see 30 different patterns for pagination, search/sort, error responses, etc. There have been a couple of attempts to standardize REST, such as OData, but I think it's safe to say that they haven't been very successful. It's kind of challenging to build standard reusable front-end tools when everyone builds back ends differently.
You have the concept of data adapters, which are clients for your API (you can make a custom one if extending the existing ones isn't an option), and the rest of the application just interacts with the equivalent of database models without ever having to worry about fetching the data. You could swap the data adapter without having to change the rest of the code.
We seem to have lost this with the move to React though, and even the hodgepodge of libraries doesn't provide a comparable replacement.
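A rough sketch of the adapter idea in plain JS (all names here are hypothetical; the point is that the rest of the app talks to the adapter, never to fetch directly):

```js
// Hypothetical adapter: the only place that knows about HTTP, pagination, errors.
class RestAdapter {
  constructor(baseUrl) { this.baseUrl = baseUrl; }

  async findAll(type, { page = 1 } = {}) {
    const res = await fetch(`${this.baseUrl}/${type}?page=${page}`);
    if (!res.ok) throw new Error(`Failed to load ${type}: ${res.status}`);
    return res.json();
  }

  async findRecord(type, id) {
    const res = await fetch(`${this.baseUrl}/${type}/${id}`);
    if (!res.ok) throw new Error(`Failed to load ${type}/${id}: ${res.status}`);
    return res.json();
  }
}

// The rest of the app only ever sees "models"; swapping back ends means
// swapping the adapter, not touching every component that needs data.
const store = new RestAdapter('/api');
store.findAll('posts').then((posts) => console.log(posts.length));
```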
Haven’t used it, but aren’t there things that can connect to a Swagger API spec and do some of the heavy lifting for you? I agree that the network layer in frontend is tedious to implement; things like GraphQL and Apollo attempt to raise the abstraction level. What I would really like to see is something even more abstracted, e.g. a wrapper around IndexedDB you can write to that syncs periodically over websockets to your server, more similar to the patterns we use on mobile.
You're right, Pouch completely slipped my mind, it's a great solution. But what about something more generic on the backend that wasn't database-specific, some sync engine you could put in front of whatever database you wanted? Can you do something like this with Pouch?
> What's the library or design pattern to consume a REST API in React or any of the mainstream front-end frameworks?
For React, it's out of scope; anything that you can use in JS for this can be used. If you are using a state management library, that's probably more relevant to your selection here than React is.
REST is also too open-ended for a complete low-friction solution, but, e.g., if it's Swagger/OpenAPI, there are tools that will do almost the entire thing for you with little more than the spec.
> The only thing I'm aware of is Ember Data but Ember is apparently not cool anymore, and I couldn't find a suitable replacement.
Ember Data is definitely a valid choice. It may not be hyped right now, but that has little to do with utility or use in the real world.
The graphql frameworks - like Apollo - give you that. I haven't used it. For basic caching the state management frameworks work pretty well, but it is a lot of layers when you add Redux or Vuex to your stack. It works well for us though and I find it much easier to reason about than the old jquery spaghetti code style.
I hear you. I find myself needing to reinvent the wheel far too often to traverse the boundary between the client and server. I also feel that it shouldn’t be this hard. Apollo client and relay solve this problem for GraphQL APIs (quite nicely IMO). What’s missing is an Apollo client for non-GraphQL APIs.
> These inexperienced people are put in charge of designing codebases that don't scale and become spaghetti.
I think this is one area where front-end tooling can be painful for the average dev. The bar to writing idiomatic JS for a given framework can get pretty high quickly, especially when you look at some of the really popular tools out there (e.g., Redux).
Front end work has become so much harder to grok because the patterns around things like state management still have a lot of warts. The terminology of redux drives me crazy because it’s really difficult to explain things like reducers.
What most people have in mind as "idiomatic JS" isn't that. It's usually meant to refer to some patterns that appeared and started getting popular around 8 years ago. And often, code written in this not-idiomatic way works _against_ the language and/or the underpinnings of the Web in general. It's just that the circles promoting the pseudo-idioms have outsized and seemingly inescapable influence.
The question asking for clarification is itself vague. Examples of which part?
Look at JS that's written for serious applications today, identify the stuff that you'd label as "idiomatic", and then look at code that was written 10 years ago for serious applications, and see if it matches what your conception of "idiomatic JS" is. Good references for the way JS was written for high-quality applications without the negative influence of the new idioms (because they didn't exist yet): the JS implementing Firefox and the JS implementing the Safari Web Inspector.
Examples of how "idiomatic JS" is often written by people who are working against the language instead of with it:
- insistence on overusing triple equals despite the problems that come with it
- similarly, the lengths people go to to treat null and undefined as if they're synonymous
- config parameter hacks and hacks to approximate multiple return values
- `require`, NodeJS modules, and every bundler (a la webpack) written, ever
- `let self = this` and all the effort people go through not to understand `this` in general (and on that note, not strictly pure JS, but notice how often the `self` hack is used for DOM event handlers because people refuse to understand the DOM EventListener interface - see the sketch after this list)
- every time people end up with bloated GC graphs with thousands of unique objects, because they're creating bespoke methods tightly coupled via closure to the objects that they're meant for because lol what are prototypes
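For the `self` hack specifically, a small illustrative contrast (the `Counter` example is made up, not from any real codebase):

```js
// The workaround: capture `this`, because the callback's `this` is the element.
function Counter(el) {
  this.count = 0;
  const self = this;
  el.addEventListener('click', function () {
    self.count++;
  });
}

// Working with the platform instead: addEventListener accepts any object
// with a handleEvent method, and `this` inside handleEvent is that object.
function Counter2(el) {
  this.count = 0;
  el.addEventListener('click', this);
}
Counter2.prototype.handleEvent = function (event) {
  if (event.type === 'click') this.count++;
};
```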
These "idioms" essentially all follow the same "maturation" period: 1. A problem is encountered by someone who doesn't have a solid foundation 2. They cross-check notes with other people in the same boat, and the problem is deemed to have occurred because of a problem in the language 3. A pattern is adopted that "solves" this "problem" 4. Now you have N problems
People think of this stuff as "idiomatic JS" because pretty much any package that ends up on NPM is written this way, since they're all created by people who were at the time trying to write code like someone else who was trying to write code like the NodeJS influencers who are considered heroes within that particular cultural bubble, so it ends up being monkey-see-monkey-do almost all the way down.
Hi, I'm also a new JS coder, but I'd like to avoid becoming one of "those people" you're talking about. I've been struggling with exactly what you mention - how to find out the "correct" way to apply patterns/do relatively complex things, but all I get on search results are Medium articles written by bootcamp grads.
Can you recommend any sources of truth/books that can guide down the right path? Of course I'll be going through all the things you mention but I'm just curious if there's somewhere I can get the right information besides just reading through Firefox code, for example.
And Crockford’s JavaScript: The Good Parts. Although an older book, JavaScript fundamentals never change and it describes a lot of those forgotten foundations.
A good starting point to explaining reducers is that you are reducing two things into one.
A redux reducer:
The action and the current state reduce into the new state.
And of course it doesn't matter how many reducers or combined reducers your state uses - they're all ultimately just doing this.
This also works for Array.prototype.reduce(). You're reducing two things into one.
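In code, both have the same shape (a toy example, not anyone's production reducer):

```js
// A Redux-style reducer: (currentState, action) => newState
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    case 'DECREMENT':
      return state - 1;
    default:
      return state;
  }
}

// Array.prototype.reduce has the same shape: (accumulator, item) => newAccumulator
const total = [1, 2, 3, 4].reduce((sum, n) => sum + n, 0); // 10
```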
The concept of a reducer isn’t the hard part of Redux... it’s designing your state, organizing actions/reducers/selectors, reducing (no pun intended...) boilerplate, dealing with side effects and asynchrony, etc.
Redux is not idiomatic JavaScript though. It’s trying to make JavaScript immutable and have ADTs, which it doesn’t. If you use Elm, ReasonML, ReScript, etc. this pattern is a lot easier to implement than with JavaScript.
I definitely agree with you, F.E. tooling has gotten a lot more mature, and we have a lot further to go as well.
I'm primarily a backend developer and I think in general backend developers make for "poor frontend devs". I'm talking about those "occasional" times the backend dev needs to do some f.e. dev work. Just because they don't know the tech and best practises as well, and don't spend as much time with it as a dedicated F.E. dev. jQuery code written by the "occasional front-end dev" is kinda horrific in many cases.
Now please internet hear me. I'm not saying you can't write bad code in a JS-Framework. I'm saying it's usually less often and less bad - especially for non-dedicated f.e devs
Like crossing a street: just looking left and right won't guarantee a safe crossing, but it makes an accident a damn sight less probable.
If you are a shop with mostly backend-devs and don't want to invest in a F.E dev, you definitely should look into a js-framework.
*Svelte is always a good start - very small and bare-bones.
> If you are a shop with mostly backend-devs and don't want to invest in a F.E dev, you definitely should look into a js-framework.
That matches my experience.
I worked in a shop with only backend developers and the frontend was an absolute buggy mess of jQuery on top of bootstrap. After migrating most of it to Vue I taught it to the team and all the experienced-but-frontend-shy developers started producing great frontend code by themselves.
> Frontend tooling has never been better than it is today.
Eh. Swing in its golden age ran circles around what we have now. Granted, it's old tech now that we settled for in-browser delivery, but still:
- look and feels could do theming you can only dream of with css variables/scss
- serialize and restore gui states partially or whole, including listeners
- value binding systems vue can only dream of
- native scrollers everywhere you could style and listen to without the renderer complaining about passive handlers.
- layout that didn't throw a fit about forced reflows
- unhappy with the standard layouts? build your own one as needed
- debug everything, including the actual render call that put an image where it is
- works the same in all supported os
Browsers are an insufferable environment to work within compared to that. CSS is powerful and all, but you get a system you can only work with by inference, and where everything interferes with everything else by default, which works great for styling a document and is a tragedy in an app with many reusable nested components.
Not to be mean, but I worked with Swing for ten years and it was absolute crap. Constantly dealing with resizing “collapsars”, GBLs, poor visual compatibility with the host OS, a fragile threading model and piles and piles of unfixed bugs and glitches was a nightmare. It might have worked if you had a specific end user environment but it was a PITA for anything else, and deployment was even harder.
There are a few things I definitely miss from my 20 years as a Java dev, but the half assed and under funded Swing UI is not among them.
But it's not like it was that much worse than Flexbox bugs (https://codepen.io/sandrosc/pen/YWQAQO still renders differently in Chrome than Firefox - which one is wrong is immaterial).
But HTML/CSS/JS aren't anywhere near good enough to build GUIs of any complexity by themselves, so everyone layers tons of stuff on top. And then people have plenty of complaints about those, too. But if they didn't use them, those complaints would migrate to the underlying framework.
I mean, Swing may have had a fragile threading model (not sure what you mean by that really), but HTML doesn't have one at all. Not great!
I agree with you 100%, and was really just addressing the “swing is awesome” statement in the GP. I’ll take HTML etc over Swing any day, but I’m sure there are nicer alternatives if your deployment environment is native, e.g. SwiftUI (which I have no experience with)
There are plenty of things wrong with HTML & friends, await/async and webpack being my personal hair removers, but if we set that aside and just talk about the DOM as the API for the UI, it’s very robust, well documented and widely available. I don’t love it, but it works.
HTML was never meant for building apps. We took a square peg (document markup language) and jammed it into a round hole (app development.) Most of the problems and frustrations with web development go back to this.
We've been using the wrong tool for the job for over 2 decades. Now it's everywhere and nobody knows any better. It's probably too late now.
>What I believe, is that the bar to build web applications has been lowered,
Yes, the bar to build web applications has been lowered. We can all build something on the level of GMail now.
The ability to build websites has been crippled, because you are often forced to build the sites using the tools suited to applications. As both you and the parent comment seem to agree.
Yeah, there is some overuse of SPAs, I agree, but hasn't anyone in this thread worked on older Java monoliths with JSP or even the good old Struts framework?? THEN you can see what inefficient development looks like.
> Everything needed to be SPA, micro services, distributed databases, Kubernetes etc. These components and layers needed to be glued together by trial and error.
This is a major problem with our industry. Unfortunately, the people with the power to curb this trend have their paycheck depend on it continuing.
As a company, you are incentivised to have a large tech team to appear credible and raise funding, so you hire a CTO and maybe some engineering managers. Their career in turn benefits from managing large numbers of people and solving complex technical problems (even if self-inflicted), so they'll hire 10x the number of engineers the task at hand truly requires, organise them into separate teams and build an engineering playground that guarantees their employment and gives them talking points (for conferences, the seemingly-mandatory engineering blog, or interviews for their next role) about how they solve complex problems (self-inflicted, as a side-effect of an extremely complex stack with lots of moving parts). Developers themselves need to constantly keep up to date, so they won't usually push back on having to use the latest frontend framework, and even if they do, that decision is out of their hands and they'll just get replaced or not hired to begin with.
In the end, AWS and the cloud providers are laughing all the way to the bank to collect their (already generous) profits, now even more inflated by having their clients use 10x the amount of compute power that the business problem would normally require.
Maybe the issue is the seemingly-infinite amounts of money being invested into tech companies of dubious value, and the solution would be to get back to Earth as to have some financial pressure coming from up top that incentivises using the simplest solution to the problem at hand?
This is the main reason I refuse to entertain going perm in the tech sector. The amount of superfluous infrastructure and unquestioned use of SPAs is just an overwhelming time sink. I would honestly rather work with some 2-bit company's legacy PHP than this mountain of crap.
For what it's worth, once you are proficient in the full end-to-end, navigating it is pretty easy, IMHO.
It just takes years and lots of room to do basically nothing, and if something meaningfully shifts, you need a while to get back up to speed.
I'm not saying it's efficient, or that you should dive in, but I did want to throw out there that there is a light at the end of the tunnel. People using React.js aren't flailing about in the dark the whole time.
Food for thought: the tech sector is much larger than the trendy dumpster fire of web development. You don't have to work at some startup on some website. There is still lots of real programming to be done.
Kubernetes is the biggest joke. I remember working with a sysadmin who worked for The Guardian provisioning servers remotely as demand spiked. This is pre-AWS. He used Puppet and remarked that what he was using was only ever needed for managing massive fleets of servers. Then Kubernetes and Docker arrived, which were intended for even bigger deployments in data centres. Before you knew it, just as with SPAs, Kubernetes and Docker became the new requirements for web devs working on simple apps.
Also never underestimate the power of a single bare-metal server. Today everyone seems to be in the clouds (pun intended) and has seemingly accepted the performance of terrible, underprovisioned VMs as the new normal.
Stackoverflow -- the website that every developer uses probably all the time -- is an example of a site running on a very small number of machines efficiently.
I'd rather have their architecture than hundreds of VMs.
For those who are discouraged by the massive complexity of Kubernetes/Terraform and various daunting system design examples of big sites, remember you can scale to ridiculous levels (barring video or heavy processing apps) with just vertical scaling.
Before you need fancy Instagram scale frameworks, you’ll have other things to worry about like appearing in front of congress for a testimony :-)
This is indeed the standard example I refer to to prove my point, and all my personal projects follow this model whenever possible. The huge advantage in addition to performance is that the entire stack is simple enough to fit in your mind, unlike Kubernetes and its infinite amount of moving parts and failure modes.
I share the general HN sentiment over microservices complexity but just to play devil's advocate...
I suspect that server cost in this case is asymptotic. If the (monetary) cost of SE's architecture is F(n) and your typical K8s cluster is G(n), where n is number of users or requests per second, F(n) < G(n) only for very large values of n. As in very large.
In essence, the devil's advocate point I'm making is that maybe development converges towards microservices because cloud providers make this option cheaper than traditional servers. We would gladly stay with our monoliths otherwise.
I tried to contrive a usage scenario to illustrate this but you know the problem with hypotheticals. And without even a concrete problem domain to theorize on, I can't even ballpark estimate compute requirements. Would love to see someone else's analysis, if anyone can come up with one.
Microservices will add latency because network calls are much slower than in-process calls.
Microservices, as an architectural choice, are most properly chosen to manage complexity - product and organizational - almost by brute force, since you really have to work to violate abstraction boundaries when you only have some kind of RPC to work with. To the degree that they can improve performance, it's by removing confounding factors; one service won't slow down another by competing for limited CPU or database bandwidth if they've got their own stack. If you're paying attention, you'll notice that this is going to cost more, not less, because you're allocating excess capacity to prevent noisy neighbour effects.
Breaking up a monolith into parts which can scale independently can be done in a way that doesn't require a microservice architecture. For example, use some kind of sharding for the data layer (I'm a fan of Vitess), and two scaling groups, one for processing external API requests (your web server layer), and another for asynchronous background job processing (whether it's a job queue or workers pulling from a message queue or possibly both, depends on the type of app), with dynamic allocation of compute when load increases - this is something where k8s autoscale possibly combined with cluster autoscaling shines. This kind of split doesn't do much for product complexity, or giving different teams the ability to release parts of the product on their own schedule, use heterogeneous technology or have the flexibility to choose their own tech stack for their corner of the big picture, etc.
I'm not sure if we're on the same page here. When I said "cloud providers make this option cheaper than traditional servers" I meant it as in the pricing structure/plans of cloud providers. That's why I tried to contrive a scenario to make a better point. Meanwhile your definition of cost seems to center on performance and org overheads a team might incur.
You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you. Something you already pay your provider for. So my DA point now is, is it cheaper to pay them to handle this or is it cheaper to shell out your own and manage manually?
> You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you
I actually wasn't talking about serverless at any point - I understand that term to mostly mean FaaS and don't map it to things like k8s without extra stuff on top, which is closer to where I'd position microservices - a service is a combo of data + compute, not a stateless serverless function. But I agree we're not quite talking about the same things. And unfortunately I don't care enough to figure out how to line it up. :)
My main point, I think, was that org factors rather than cloud compute costs are why you go microservice rather than monolith.
I can't recall reading much on how going for 'the cloud' or 'serverless' saved anyone money. On the other hand, I've read my fair share of horror stories about how costs ballooned and going for the old-fashioned server/VPS ended up being much, much cheaper.
The main argument in favor of the 'cloud' is that it's easier to manage (and even that is often questioned).
I haven't looked for a while but Plenty Of Fish (POF) also ran on the same infrastructure and the same framework - ASP.Net. Maybe ASP.Net is particularly suited to this approach?
What about interpreted languages? I was taught a Python web server can do $NUMCPUS+1 concurrent requests, and therefore 32 1-CPU VMs will perform as well as a 32-CPU VM.
Kubernetes is overkill for most applications that's true, but Docker is awesome because it solves almost all of the "it works on my machine, doesn't work in prod" and "it worked yesterday, doesn't work today" issues and isn't that hard to adopt.
Kubernetes has been great for us and is much easier to manage over time than servers. There’s an adoption cliff, but I’d take kube over spinning your servers with puppet any day.
Hell I might even run kube if I was running bare metal. Declarative workloads are amazing.
> I've always felt this problem from the first time I touched Angular. It was just so much more complex and fragile without much actual benefit, unless you wanted to make a really interactive async application like Google Docs or Facebook Chat.
It sounds crazy to say that now, but Angular became big because it was actually quite lightweight compared to other JS frameworks of that era, declarative two-way data binding was cool, it was compatible with jQuery (thus its widget ecosystem) and it was also developed with testing in mind. So it was easy to move jQuery projects to Angular ones, and developers cared about this aspect and it helped organize code quite a bit. Angular 2 on the other hand never made sense and was a solution looking for a problem.
React and JSX came along and allowed developers to use JS classes when a lot of browsers didn't support them. And unidirectional dataflow was all the rage. It was always the right solution of course, but I had never heard about DOM diffing before that, which to me is the main appeal of React. To this day, the HTML APIs still do not have a native (thus efficient) DOM diffing API, which is a shame.
> When SPAs became the norm and even static web pages needed to be built with React, developing became more and more inefficient. I saw whole teams struggling to build simple applications and wasting months of time, while these used to be developed within a couple of weeks with just 1 or 2 developers, on proven server side opinionated frameworks. But that was no longer according to best practices and industry standards. Everything needed to be SPA, micro services, distributed databases, Kubernetes etc. These components and layers needed to be glued together by trial and error.
IMHO the problem isn't React and co, or even SPAs. In fact writing a REST/Web API should be easier than writing a server-side generated HTML website (no need for a templating language, ugly form frameworks, ...). The problem is the horrible and complex NodeJS/NPM-backed asset compilation pipelines and build tools that these frameworks often require in a professional setting, which incur a lot of complexity for very little gain.
> In fact writing a REST/Web API should be easier than writing a server-side generated HTML website (no need for a templating language, ugly form frameworks, ...).
Why is that easier? It's more work: you are now rendering two views instead of one, a JSON one (server) and an HTML one (in the client), with all the JSON encoding/decoding that it entails. You are still using a templating language, and dealing with forms in React is more cumbersome than doing it server-side.
Separation of concerns, easier testability, easier mocking... The thing is, especially in more complex applications, the code that generates/validates the data and the code that displays it are usually written by two different people.
Nowadays, once the JSON schema design is settled, they can work in parallel; each of them can test their part without needing the other, and the merges can be simpler, because the parts work more or less stand-alone.
> React and JSX came along and allowed developers to use JS classes
Nitpick, but I doubt that was the reason developers were flocking to React back then. In the beginning browsers didn't support JavaScript classes and neither did React. You faked them by using a function known as React.createClass instead. There was also no transpilation required, as JSX was optional. In fact React was always about unidirectional data flow, and reasoning about state -> DOM elements rather than reasoning about changes to the DOM.
I don't think it's developers tbh. Or rather, it's another set of perverse incentives in the industry.
To get a job, devs need experience in relevant tech. No company is willing to train their devs - they all have to hit the ground running. So devs have to have demonstrable experience in the tech that lots of companies use. Companies need to hire devs, and don't really care what tech is used. But using what everyone else uses makes their hiring easier because it's easier to find devs who want to work on that tech. So they advertise for devs with experience in a hot tech. The devs see this and try and move their internal projects to use the hot tech so that if/when they look for their next job they'll have experience in it.
The devs are just trying to stay relevant in a rapidly changing tech scene so they can get their next job.
The companies who employ them don't care what tech is used, but find recruiting devs to be easier if they're working in the latest hot tech.
The key point that could change all this is if companies were willing to train their devs in the tech stack they're using.
I also wonder how much of the SPA trend by mega corps was about shifting compute “client side” to save money on infrastructure. It’s kinda like modern data warehouses where storage/compute is now so cheap you do ELT and not ETL anymore. I probably wouldn’t do an SPA today unless I really had to.
The problem is that nobody knows whether something will become the next Google Docs. Transitioning to an SPA from something like jQuery is basically a complete rewrite.
To be willing to not use an SPA, you need to be willing to exclude certain options from day 1. Find me a product manager willing to do that.
> nobody knows whether something will become the next Google Docs
How many times has it actually happened that some scrappy startup 1) became the next big thing and 2) was killed, or had its revenue significantly impacted, by not being at the edge of over-engineering? This just feels like wishful thinking.
Also keep in mind that even if you were on track to become the next Google Docs, this means your current product is usually good enough as-is and gives you time (and $$$) to improve it.
I'm not sure that using React or another JS framework counts as 'being at the edge of over-engineering'.
I agree with the rest of your point - the value of the product to end-users has little to no correlation with the underlying technology choices, which is a pretty controversial statement, but one that I think is true. A customer doesn't care if you built it in React, in one Perl file, or if you're sacrificing goats to retain the minimum requisite levels of dark magic to keep the system running. If it solves their problem they'll keep giving you money for goats.
It depends on what the objective is. I've seen plenty of projects where React was used just to have it as a buzzword, but otherwise provided no functionality and actually slowed development down and ended up being less reliable (we had to - poorly - reimplement behaviors like validation, pagination, etc. that our backend framework already had for free).
Another view of this problem: evolving an SPA CRUD app into Google Docs may also be a complete rewrite.
IMO when the time comes and your product is used well, you may be ready financially and technically to do a complete rewrite. Otherwise maybe the current functioning application is better if the rewrite isn't justified.
React isn't so bad. It's fairly straightforward and the components are contained within the page. And it's more of a library than a framework. The core is small and easy to learn
Angular is a giant confusing pile of magic. It's so complex you've gotta be a core developer to even understand how an app comes together. Stay the hell away if you can
I think Phoenix Live View is maybe the most compelling story around this (https://github.com/phoenixframework/phoenix_live_view). I'm moving a side-project from React/SPA to Phoenix live view and it's kind of amazing to get the dev ergonomics of a server-rendered page with the UX benefits of a SPA.
I've been playing with LiveView myself for personal projects and it is very nice.
One interesting thing is that it makes you consider memory issues again since all your rendered data structures are held in memory in the LiveView process unless they are explicitly cleared after the render with "temporary_assigns". For apps that would have to hold and transfer a lot of data up and down the channel I've ended up using a hybrid of LiveView with divs set to phx-update="ignore" in which I mount React components.
I'd say that's the main (potential) 'problem' with LiveView, alongside latency issues.
In practice, I've usually found that the advantages outweigh the disadvantages when it comes to the former. Beefing up my servers seems like a worthwhile trade-off.
When it comes to latency, or related issues, I find that I'm still so much better off using LV as a basis and 'dropping down' into plain JS or (p)react when I need it.
We recently delivered a project that involved a whole bunch of stuff that LV was a perfect solution for, but also a core bit of functionality that required various kinds of animations. We ended up using LV wherever we could, and piggy-backed on the LV channel (websockets) to handle synchronizing the animation stuff. The actual code to animate things was just plain old JS. Worked like a charm!
I'm still using LiveView in a lot of the other parts of the app - places like signup/signin where having the form validation seamlessly done server side using changesets is very, very nice. I'm also using it in places where I would normally have to expose an API endpoint for CRUD and instead I'm using events and handling it in the LiveView. Again, very nice.
Agree completely. My only concern is that so many imitations are attempting to pop up in other languages and you just can’t do it as effectively.
There are so many tradeoffs present that happen to result in the necessary set of functionality to do this efficiently that aren’t easily present outside of the BEAM.
Everything can be done in everything else. The only question is how well it fits, how contorted does it have to be, what comes naturally.
With BEAM, robustness is a special feature. In the BEAM you can kill, restart and replace processes all over the place and everything stays working pretty well, because its structure means everything written for it is designed with that in mind.
In a typical async/await server side application sharing state across clients, killing, restarting and replacing usually means the whole single process containing all the async/await coroutines, and the fancy per-client state you were maintaining is lost.
You can of course serialise state, as well as coordinating it among multiple processes, but that takes more effort than just using async/await in a web app, and often people don't bother. Doing that right can get tricky and it requires more testing for something that doesn't happen often compared with normal requests. So instead, they let the occasional client session see a glitch or need reloading, and deem that no big deal.
It can be done, but you’ll see a lot of weak points. It’s probably worth a blog post to explain it. There are several layers in the language that combine to make this work.
At a high level, it's the combination of process isolation, memory isolation, template delivery, websocket capability and resilience on top of all the standard web bits.
It will be really difficult to pull off with a good developer experience and minus several deficiencies outside of the BEAM. Anything’s possible though.
It's still not a silver bullet. There's lots of things in a LV driven app where you're still wondering what front-end library to use with it.
Typically you wouldn't use LV to handle:
Menu dropdowns, tooltips, popovers, modals (in some cases), tabs (when you don't want to load the content from the server) or things that change the client side state of something but don't adjust server state. That could be things like a "select all" checkbox toggle where that doesn't do anything on its own other than select or de-select client side check boxes or toggling the visibility of something. There's also things like wanting to copy to the clipboard or initiating stuff to happen on drag / drop (like animations).
Basically you'll still find yourself wanting to use JS with LV. Whether that's Stimulus, Alpine, Vue, jQuery, vanilla JS or something else that's up to you. But I do find most of the above necessary in a lot of web apps I develop.
> It's still not a silver bullet. There's lots of things in a LV driven app where you're still wondering what front-end library to use with it.
> Typically you wouldn't use LV to handle:
> Menu dropdowns, tooltips, popovers, modals (in some cases), tabs (when you don't want to load the content from the server) or things that change the client side state of something but don't adjust server state.
My experience is that LiveView is fine for all but the last use case. And while in practice I often don't really need to keep things client-side-only, when I do it's often pretty easy to just write a bit of js and, if necessary, use hooks and events to communicate with the surrounding LiveView(s).
In fact, I vaguely recall that in the early days of LV, the creator himself argued that it should be used for just 'smaller interactive stuff'. Over time, we all discovered that LV does surprisingly well for SPA-type use cases (and as a result we now have stuff like router-level LiveViews (that take over the whole page), live_redirects, url updating, and so on).
> That could be things like a "select all" checkbox toggle where that doesn't do anything on its own other than select or de-select client side check boxes.
Why wouldn't you just keep that within the server-side state LV paradigm? I've done just that in a project I'm working on.
> There's also things like wanting to copy to the clipboard or initiating stuff to happen on drag / drop (like animations).
For those things you write js, yes.
> Basically you'll still find yourself wanting to use JS with LV. Whether that's Stimulus, Alpine, jQuery, vanilla JS or something else that's up to you. But I do find most of the above necessary in a lot of web apps I develop.
Absolutely, but I'm continuously surprised how little of it I need, and how often I /think/ I do just because I haven't quite wrapped my head around the different paradigm.
> My experience is that LiveView is fine for all but the last use case.
I wouldn't want to impose a 50-500ms+ delay on someone to show a menu drop down or a tooltip or most of the other things listed out.
With LV everything involves a server round trip. That's great for when you need to make a round trip no matter what (which is often the case, such as updating your database based on a user interaction), but it creates for very unnaturally sluggish feeling UIs when you use LV for things that you expect to be instant.
Even a 100ms delay on a menu feels off and with a good internet connection if you have a server in NY, you'll get 80-100ms ping times to the west coast of the US or the west coast of Europe.
LV feels amazing on localhost but the internet is global. I still think it's worth minimizing round trips to the server when you can, not because Phoenix and LV can't handle it but because I want my users to have a good experience using the sites I develop.
> My experience is that LiveView is fine for all but the last use case.
>> I wouldn't want to impose a 50-500ms+ delay on someone to show a menu drop down or a tooltip or most of the other things listed out.
It's basically a UX standard that a tooltip only shows up after a few seconds, so that strikes me as a particularly bad example. That said, sure, if instant tooltips are important, a tiny bit of js and a specific class name in your markup would solve it.
> With LV everything involves a server round trip. That's great for when you need to make a round trip no matter what (which is often the case, such as updating your database based on a user interaction), but it creates for very unnaturally sluggish feeling UIs when you use LV for things that you expect to be instant.
Yeah, I do agree on that. While I feel using a tooltip is a bad example, in practice I wouldn't implement tooltips or menus in LiveView. Those would just be solved via some CSS trick or some plain old JavaScript.
> LV feels amazing on localhost but the internet is global. I still think it's worth minimizing round trips to the server when you can, not because Phoenix and LV can't handle it but because I want my users to have a good experience using the sites I develop.
I'll give you that generally I wouldn't use LV for tooltips and popups. But in part because those are really easy to solve without it.
But for /so/ much of the stuff involved in a SPA the latency has not been a problem in practice.
Consider tabbed content. Sure, I could make it all 'instant' by preloading the various bits of content and writing js to switch between these bits. But I can avoid that entirely by preloading those bits in my templates and using LV to switch/update classes. The tiny latency downside is worth the upsides: being able to update the content in those tabs live with no extra code (no API calls, no client-side frameworks, and server-side rendered as a nice bonus!).
My general approach is that I use LV as a default, and then use the 'Hook' system and some custom JS when latency is a concern. In practice that doesn't amount to much. So it's not a silver bullet, but it simplifies so much of what a typical SPA does.
The click would be sent as an event to the server, where the state is changed (setting "active_tab" or something like that). Then the view would be re-rendered (probably only changing a few class names) and the diff sent back down to the client.
Gosh, that feels so inefficient (as a JS dev here). Then again... React had its naysayers for some time because 1. JSX, 2. nobody saw DOM diffing as truly fast enough. But DOM diffing is absolutely faster than asking the server to update a classname.
True, in this particular case it does seem inefficient. And of course there's nothing stopping one from just doing this with a bit of js.
But in the bigger picture, the advantages of this approach are huge:
1. no need to maintain state, routing, and so on on the front- and backend, which removes a huge source of complexity. It's all in one place. And if something in the DB is updated, it's trivial to make it live-update the client state. And because of websockets, such an update is almost instant.
2. being able to use the same language (and templating) on server and client (for the most part).
3. the ability to just use regular function calls to retrieve data, and selectively display what you want by using it in templates. No need to set up endpoints for the client, and no need to worry that perhaps accidentally the endpoint might send data down the wire that shouldn't be there (and that you might not notice because the JSX doesn't display it). I think in just the past year I've read about a number of serious data leaks that were basically a result of this.
4. no need (or not as much need) to keep an eye on the JS payload. Want to format dates in a particular way? Just add the dependency and use it however you like. It's only the diff in the output that gets sent to the client!
5. little to no need to deal with a complicated build process.
6. server-side rendering out of the box, and in a simple manner!
7. less taxing on the client. No need for processing templates and a lot of code. Of course, the downside is that the server has to do more work.
Now obviously latency can be a downside, as is (potentially) increased memory and processor usage on the server. It's not a magic solution to everything :). Hell, my last project still needed quite a bit of javascript for some heavy interactivity where latency had to be avoided. But it's still astounding to me how many projects have become drastically simpler with the LiveView-approach!
We’ve gone all in and are using it a lot in production. There is one big trade-off to consider though: point of presence and global availability. We’re OK, as we’re a UK-based website hosted in GCP europe-west2, but we had devs in NZ for a while, and they said that, understandably, the site was really slow for them due to the latency. BEAM has the ability to link instances and have them cluster, so you could do something like that across multiple regions globally, but there is a trade-off to consider if you run a global service: you’re trading an over-engineering and sync problem for a potential distributed-systems problem if you try to make the same LiveView site available performantly in multiple global regions.
As a user, this is the kind of experience I want on the web. Any SPA-like page should always degrade back to web standards. And it should be lightning fast, and not spin my CPU fan or warm my desk.
I started this course and the animations/presentation are amazing, some really clear explanations. But the fake cutesy dialogue and the extra fluff that says how wonderful and amazing and great and FUN LiveView is means I didn't get past the first video.
I covered the whole process of building the site for my podcast https://reactor.am in the series.
Note that LiveView still isn't at v1.0 and breaking updates have been common. You'll have a much better time with my LV tutorial or anyone else's if you use the exact same library versions we do and upgrade at the same point the tutorial does.
Having deployed liveview for an admin dashboard, I gotta say, it really is great FUN, and no-fuss, even if you're slinging together a system that customers never see so you don't care if the code gets a bit knotty, and your datastructures are abjectly awful.
It's not suitable for using the client's computer to mine crypto currencies, doing graphics processing on the client, doing numerical processing on the client or doing anything purely on the client without server interaction.
For things where you generally need a trip to the server anyway, like validations, it's great.
Yup, exactly. If you’re mostly building a client application (or a p2p client-server application!) but it happens to live on the web platform, PLV does not seem like a great fit... at least not this year. Honestly, BEAM seems like it ought to be great for that generally!
Maybe if you ran the server portion of PLV in the browser...but then you’re just back at React anyway I suppose.
IMO, TurboLinks + service workers are the way to go.
Not many people know this, but a service worker (previously called "local server") allows you to run a little web server in the user's browser that intercepts requests to your own web site. (There's no open IP port.)
The service-worker web server can proxy requests to the remote server, and even build/store entire pages on the client side, enabling offline support. Service workers also have access to a local database, IndexedDB, running in the user's browser.
You can build a very fast web experience this way. You can easily cache individual pieces of a web page and glue them all together.
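To make that concrete, here's a minimal sketch of the idea, assuming a cache-first strategy; the file names and cache name are placeholders:

```
// In the page: navigator.serviceWorker.register('/sw.js');

// sw.js -- the "little web server in the browser" described above.
const CACHE = 'pages-v1';

self.addEventListener('install', (event) => {
  // Pre-cache a few shell assets so they can be served offline later.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/styles.css']))
  );
});

self.addEventListener('fetch', (event) => {
  // Intercept requests to our own site: answer from the local cache first,
  // otherwise proxy the request on to the remote server.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```

From there it's a short step to assembling whole pages from cached fragments or an IndexedDB store, as described above.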
Strongly agreed that service workers have a ton of potential here. But I’m still waiting for there to be some kind of killer framework that crosses the bridge between worker and window, saving lots of main thread processing, etc... MessageChannels are quite low level so I imagine it would need some abstraction. But still, very powerful. I’m imagining some kind of Svelte-like thing that creates the whole page worker-side, then generates minimal window-side JS to hydrate the components that’ll actually change. Of course I’d make it myself, but... oh, look over there...
(IMO they should still be called local servers, or server workers perhaps. Service worker is too vague)
I'm working on something like this. Not really a framework though. More of a hodgepodge of JS that I've written and the front end logic would be something like HTMX/Behavior.js style of coding. It will be a progressively enhanced approach to writing a SPA so you could have no JS and it would still work! Or, if you have a modern browser it would work offline. We'll see if I ever finish it :-)
I'm doing a bit of a rewrite right now so this flexibility will come soon.
Is ESI (Edge Side Includes) in service workers anything you’ve tried? A bit mind-blown, never thought of that. Been putting off SWs due to all the horror stories of people basically bricking their sites.
Is this the same model meteor.js uses? I seem to recall their system maintains a mini-database on the client side and syncs it with the main server through a pub-sub model.
Yes. At least it can use service workers.
Meteor uses MongoDB on the backend, and minimongo client side. Those two are synced over their DDP protocol IIRC.
I miss Meteor. It was such a great framework and promise. Not for big sites really, but for mock-ups, internal sites, etc.
A while ago I started to look at it again. The drivers behind the project were essentially asking for input on what was preventing people from using the framework.
The current state of the project is/was OK, except for the fact that it was held back by all the guides and howtos referring to previous versions. It was not well documented which current best practices to follow, or what to use as replacements for deprecated dependencies, etc.
Author seems to think the goal of SPAs was to simplify web dev, but it’s actually to allow you to build fully featured, highly interactive, apps in a browser.
What the author is really getting at, I would guess, is that front end dev is awful, due to this weird combination of the Blub issue and a historical trajectory that has caused many problems.
The Blub issue is mostly simple enough to pin down. Experienced programmers know that JS is an awful language. But there’s also a tooling or SDK “Blub problem”. For example, compare npm and webpack vs gradle and javac (not to defend the Java ecosystem, but it does get some things right, or more right than others).
More idiosyncratically, there’s this historical arc of encountering fundamental problems, and trying to solve them within the current constraints of the web, rather than perhaps waiting for the web to standardise and evolve. This seems to be a mixture of lack of experience outside of this ecosystem (a bit like Blub) and, for this and other reasons, fixing problems “in your app” that should be fixed in the fundamental infrastructure of the web. It feels like technical solutions to social problems or, to use another metaphor, we are patching downstream what should be fixed upstream... if you build a house on sand, it will never be robust, no matter how many layers of infrastructure you add. That’s where the complexity arises.
It’s an enlightening exercise to step back and ask how you would build SPA infrastructure if starting anew. You certainly wouldn’t use a language like JS, you certainly would want to provide visual design tools as far as possible, APIs would be replaced with standard protocols, and probably you’d use a relatively small XML for layout. So perhaps only the HTML is anything like what you’d use. There’d be no transpilation, no webpack, no polyfills, no CSS, no JS.
In fact, what you’d end up with would look remarkably similar to the dev process for a Java applet!
I disagree with this. You might not want to use JS but a language “like” JS such as TS or Lua would definitely be on the table. Or just JS without the biggest warts.
> you certainly would want to provide visual design tools as far as possible,
I feel that the promise of visual design tools fell quite short. Issues with version control and general traceability of changes and the ultimate non feature parity with code make me think that code first interfaces are the future.
> APIs would be replaced with standard protocols,
> Could you elaborate on what you mean? I assume REST APIs, but that is basically just HTTP.
> and probably you’d use a relatively small XML for layout.
As long as the language is robust enough to not move stuff around when the UI slightly changes. I feel that XML (and to an extent html) is too lax to express a programmatically created interface.
> So perhaps only the HTML is anything like what you’d use. There’d be no transpilation, no webpack, no polyfills, no CSS, no JS.
In a different world you could replace these steps with compilation, a compiler, backwards-compatibility libraries, a styling framework and a language of choice.
I think you have shown that the JS ecosystem has grown very organically. I think this is because the nature of web developers was to put stuff out rather than really think about how to do it the correct way. I believe this is because of constraints: on a native platform you had the option to go down to assembly or create a new language or paradigm. On the web only the browser vendor has this power; all the dev had was JavaScript.
> You might not want to use JS but a language “like” JS such as TS or Lua would definitely be on the table. Or just JS without the biggest warts.
When you remove the warts from JS there's not much left. And I'd be pretty skeptical of someone starting a new project in Lua today. I think the mainstream choice for a "blank slate" language today would look something like Swift or Kotlin; TypeScript gets close, but it still has a lot of JavaScript baggage you'd want to strip out.
> I feel that the promise of visual design tools fell quite short. Issues with version control and general traceability of changes and the ultimate non feature parity with code make me think that code first interfaces are the future.
All the big UI libraries end up offering some kind of markup/constraint-based interface - which is ultimately data rather than code. And for editing that, a visual form designer makes a lot of sense. I like Qt's approach - you visually edit a markup form that's compiled into a class you can subclass, so you don't have to deal with the problems of code generation, and the markup is relatively version-control-friendly.
> Could you elaborate on what you mean? I assume REST APIs, but that is basically just HTTP.
I'm not the person you replied to, but thrift/gRPC are a lot nicer to work with than REST APIs. Standardised protocol definitions that let you understand what kind of changes are or aren't forward/backward compatible, and no need to write a bunch of boilerplate by hand.
> I think you have shown that the JS ecosystem has grown very organically. I think this is because the nature of web developers was to put stuff out rather than really think about how to do it the correct way. I believe this is because of constraints: on a native platform you had the option to go down to assembly or create a new language or paradigm. On the web only the browser vendor has this power; all the dev had was JavaScript.
It's the same story as "no-code" tools: IT departments won't let anyone install an application runtime, but they're happy to install a "document browser" and let it run arbitrary code. It's understandable, but depressing.
When the bar is set by languages like F#, Haskell, Elm, Reason... how can JS be considered good? All languages have warts, but JS is a language full of them: ambiguity is the name of the game, mutating everything is encouraged and global mutable state is everywhere.
Very little has changed for JS outside of syntax, the core is still rotten.
> Modern JavaScript is pretty sweet to write compared to pre 2015.
That's damning with faint praise if I ever heard it. JavaScript has more or less caught up with the lowest common denominator of other languages. But I've yet to hear anyone make a good case for actively choosing to use it.
I'm a long-time JavaScript hater. I recently did some vanilla ES6, now that browser support is finally at a point where you don't need to transpile to ES5 - I admit, ES6 is much nicer than things used to be, and not having to transpile is wonderful.
But realistically you still want a JS build pipeline of some kind, for minification, compiling SASS to CSS, and other things. And of course if you're doing "modern" frontend work, you're not using vanilla JS like I was - you're using a complicated framework like Angular, React etc, and having to deal with the likes of webpack.
And then there's the anaemic standard library - honestly, barely any better than it was 10 years ago. And then, largely as a result, you've got NPM-dependency hell, with hundreds or thousands of deps for just about anything. Want to trim a string - there's a package for that.
And then there are other languages and ecosystems - when you compare JavaScript to those... well, then you really have to admit it's a turd.
Hating on JavaScript isn't a bandwagon - there are many genuine reasons why people dislike it and the ecosystem. IMO, comparing it to a "circle jerk" is like that meme where some character is surrounded by fire saying "this is fine".
> And I'd be pretty skeptical of someone starting a new project in Lua today.
Well, there is the upcoming play.date SDK for example and a lot of jam games.
The biggest reason why these languages are used is because they are fun.
Don’t get me wrong, I really like swift and SwiftUI. But there is something quite liberating when throwing away the type system completely and just hacking at the code. This brings in new people who get motivated by seeing a thing shaping up rather than staring at weird error codes.
> but thrift/gRPC are a lot nicer to work with than REST APIs.
I agree with this, protobufs are nicer to push around rather than JSON. Ultimately though one can do pretty arbitrary stuff with either.
For your last point I think that is a separate issue. The ubiquity of web is certainly reason for its popularity. But I was mostly talking about why people hacked around issues with upstream rather than trying to find more sound solutions.
> In fact, what you’d end up with would look remarkably similar to the dev process for a Java applet!
Yeah, it's increasingly clear to me that Java was just 20 years ahead of its time. Java really would make a great front end language.
People lament the complexity and size of the JVM... But these days V8 is just as bad. The complexity is a trade-off for runtime performance.
It's compiled into a compact, easy-to-parse bytecode format similar to WASM. It's faster than JS and shares many basic design decisions. The packaging system is and always was basically a better version of NPM. It has pretty good cross-platform UI, probably the best there is outside of Qt.
WASM apps are eventually going to be built identically to Java cross-platform desktop apps. Probably using Java, Go, or C#.
I hope not. The world is finally waking up to the need for sum types. A big part of the reason Java is so hated for doing simple things is that its lack of sum types meant it had to implement a horrible "checked exception" system.
Hoo boy it was awesome to work with. But. It came out at the wrong time. Linux was coming up as a desktop, MS was still evil. Mono was mostly a hobby project.
The backlash of trying to use "proprietary M$ crap" for web was too big of a hurdle to cross, even if the technology behind Silverlight was lightyears ahead of the drek that was Java Applets.
I feel like the mistake Java made was ceding the DOM to Javascript. It turned out that users like the browser and didn't particularly want either native widgets or a new system. The browser is familiar and good enough for a vast majority of tasks.
If the JVM had access to the DOM, you could write SPAs in Java and everybody would be happy. Instead Java sealed itself off separately from the browser, and Javascript went from little toys to full-blown UI applications. And then developers wanted to use the same code on both client and server so they made Node, working in the space where Java is so much clearly better.
So we end up with the worse language running the world. Fortunately, JS has finally become a mediocre language (or even a decent one with TS), and here we are.
I think it was just that shipping native apps on a web page was too slow back then for the internet and the machines. For the web to grow, it had to be simple. It took almost 20 years before we looped back to running full applications in the browser
I don't know any polyglot programmers who would consider javascript better than at least one of the other languages they use, and would ditch js for those if they had the option.
Note how I didn't say that many polyglots may think it's the best language, just that it's great as opposed to awful.
I know as a fact from watching tech conferences and talking to people that some elite-level polyglot programmers make it their main working language out of pure choice. Surely one can't assume that everyone doing bleeding-edge Javascript at places like Google is programming every day in absolute misery, or that it's the only thing that they know.
Me personally, I've possibly enjoyed programming in C, OCaml, Swift or Python more at times, but I think ES6+ Javascript is great and I'm happy to use it.
Javascript appears as the 11th most loved language in the SO dev surveys, with 66% of devs working with it reporting to love it, while it appears quite far down in the dreaded list. Typescript indeed fares much better. [1]
Of course, there is the question of whether respondents are True Programmers™ or posers, but I think that's another debate.
> Note how I didn't say that many polyglots may think it's the best language, just that it's great as opposed to awful.
Fair enough :). I think 'great' is a bit of a vague concept, but I definitely would agree that it's far from awful.
I remember when I wrote a lot of JS back in the day ("JavaScript: The Good Parts"), I did feel that, stripped of the bad parts, I often preferred the elegance of the basic good parts over, say, Ruby and its "blocks, procs and lambdas" (for example).
You said, "I don't know any polyglot programmers who would consider javascript better than at least one of the other languages they use, and would [not] ditch js for those if they had the option."
My impression is that the design of Java the language, plus its runtime, are appreciated even by the harshest critics.
However the culture around complex frameworks and over-engineering is what most people really dislike about it.
IMO pretty much all the advantages touted by Java proponents (such as: good language design, ease of use by heterogeneous teams, speed, etc) are correct, but are negated by a large part of the culture and ecosystem. The memes about humongous class names and 200-method stack traces are true when you use the popular frameworks and techniques.
Of course there are exceptions to this and this can creep into other languages too, of course.
> However the culture around complex frameworks and over-engineering is what most people really dislike about it.
I think this is it. I remember back in the day being absolutely floored when I started learning J2EE by the, it seemed, unnecessary complexity (for most use cases) of EJB. It was incredibly offputting: if you were starting a project from scratch it felt like you had to do a ridiculous amount of work just to get to "hello world". I'm sure it wasn't that bad but the memory has slightly scarred me.
I haven't worked in Java for ages, mostly working with .NET for the last 16 years and, unfortunately, the same problem has to some extent bled into the .NET ecosystem too.
A few years ago I contracted at a place where the "micro"-service team I was assigned to had this codebase where they'd clearly taken the OSI 7 layer reference model to heart and applied it to a domain where customer details were collected and recorded. I've nothing against layered architectures, and have made use of them many times in appropriate circumstances, but this was awful: one of the most needlessly complex codebases I've ever worked with, and incredibly discouraging to work on because it was so hard to actually achieve anything. There were fully three or four layers in the middle that did nothing but call the next layer down. The quantity of boilerplate was extraordinary. To add one method that did anything of substance you'd actually have to add between five and seven methods, most of which did nothing but call the next layer. Ridiculous.
Still, that doesn't change the fact that the .NET languages, runtime, and base framework are excellent, and that sadly being excellent is no antidote to misuse. Same applies to Java.
That's true. I also used to be a .NET guy in the past, but I started doing more games (and then frontend) when the movement from Rails-ish to Java-ish MVC started.
The thing about the multiple "layers" that don't do anything really bothers me too, because they are a misconception of how those complex architectures (Clean/Hexagonal/Onion) really work...
Instead of having mandatory layers, those should be pluggable. Just having a layer calling the next one is unnecessary, and some people implement it by having the next layer as a transitive dependency, which makes testing harder and has zero benefits!
> The thing about the multiple "layers" that don't do anything really bothers me too, because they are a misconception of how those complex architectures (Clean/Hexagonal/Onion) really work...
> Instead of having mandatory layers...
C# guy here.
I don't think things were ever as bad in the dotnet world as they are in Java, but I do still come across a lot of what you're describing here. Thankfully though, a lot of devs do seem to have "awakened" - it feels like there is a lot less cargo-culting of "best practices" such as layers, interfaces and abstract classes for everything, tests so full of mocks you can't see anything being tested, etc.
C# is a fantastic language, but as with any OO language there are lots of abstraction-related traps to fall into.
For me, Java’s ties with Oracle and the nightmare stories about complicated `MetaAbstractBaseClassFactoryClassFactory` are why I seek alternatives, or would be dismissive.
Just noting that the abstraction stuff is mostly a consequence of the CORBA-derived, over-engineered "Enterprise Java" space, and provided you stay out of that tar-pit, and choose your libraries/dependencies wisely, Java is really nice to work with.
Even if you need to implement some kind of "Enterprise Java" app, you can do so with much better libraries and tools than back then, that do not suffer from the excessive abstraction problem.
My first programming job was a pilot study for porting a platform from old and busted CORBA to the new hotness, J2EE. It was embarrassing how much worse than CORBA J2EE was.
I still see factories on a daily basis. They are a useful design pattern that is utilized in Java.
My anecdotal evidence is that I have never seen the over-engineered "Enterprise Java horrors" OP is talking about despite working in the Java EE (now Jakarta EE) space.
I suspect it's a story from the times of J2EE, or something similar.
> I still see factories on a daily basis. They are a useful design pattern that is utilized in Java.
A separate factory type means you have to write twice as much code for no real benefit. In most languages you'd just use a first-class function (and in post-8 Java you can do the same: rather than a FooFactory you accept a Supplier<Foo> and people can pass Foo::new. It's still more cumbersome than in most languages, though). Or, in a lot of other cases, the factory is just a clunky way to achieve named arguments.
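To illustrate the "just pass a function" alternative outside Java, here's a small JavaScript sketch; all the names are made up for the example:

```
// Instead of defining a ConnectionFactory type, accept any zero-argument
// function that produces the object you need.
function makeRepository(createConnection) {
  return {
    findUser(id) {
      const conn = createConnection(); // the "factory" is just a function
      return conn.query(`select * from users where id = ${id}`);
    },
  };
}

// A fake connection so the sketch runs end to end.
const fakeConnection = () => ({
  query: (sql) => `pretend result for: ${sql}`,
});

const repo = makeRepository(fakeConnection);
console.log(repo.findUser(42));
```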
> My anecdotal evidence is that I have never seen the over-engineered "Enterprise Java horrors" OP is talking about despite working in the Java EE (now Jakarta EE) space.
Have you worked on a reputable codebase in a low-overhead language like Python or Ruby? If you don't recognise factories as bloat then you may well miss the other cases (famously, the majority of the Gang of Four patterns can just be replaced by passing a function).
> A separate factory type means you have to write twice as much code for no real benefit.
Ah! There's the confusion. What I meant was I see factory methods in code we consume on a daily basis, not that we write the full factory objects. A number of Java projects have static factory methods that provide the interface implementation instance based on your configuration.
If you're actually using that configurability (i.e. your method actually instantiates different implementations in different cases) then no - that's the same thing you'd do in any language. If you're pre-emptively defining factory methods that actually just call the normal constructor then yes (a lot of Java bloat is like that - see also getters and setters on every field for the sake of the 0.01% where you actually want to do something other than just read/write the field).
It's not Java's fault it got bought by Oracle (and I don't agree that it is bad, as Oracle has advanced it quite a bit).
And I do like 'MetaAbstractBaseClassFactoryClassFactory' type names because that allows me in an application with hundreds if not thousands of classes to find the class I'm looking for very fast by just typing a few keywords into my IDE.
> Out of curiosity, as a Java enthusiast, I was wondering if you could give examples of what you feel is wrong?
I think there's a lot of criticism for Java's language design. It involves an awful lot of boilerplate, and is generally very verbose. It's also a language that forces you to use OO, and OO has received a lot of pushback over recent years - so that approach has become very unpopular.
Personally, I also think the use of so many design patterns is an attempt to compensate for what the language lacks, its reflection capabilities are flawed etc.
I don't hate on Java. For many years it was my main language. It has awesome tooling and the JVM is incredible. But I do agree with most of the criticism.
I've been learning Clojure recently and Rich Hickey's talks often begin with some motivation including criticism of Java and OO more generally, here's one such video: https://www.youtube.com/watch?v=VSdnJDO-xdg
> In my HN browsing I find Java is rarely actually discussed here, though often dismissed. I don't know why.
Personally, I feel like Java is not really as hated as some people make it out to be. It's a stable language that very few people choose for their "cool side project". At the same time, it has excellent tooling, mature and prod-ready open source frameworks, and backing of some giant companies.
This is not to say that Java is perfect. I think most dissatisfaction comes from students or junior people who are baffled by the complexity of Maven/Gradle configurations, strict project structure (where a class can be 10 directories deep), and Java's insistence on boilerplate (which is often challenged by new Java releases. Those, however, are quite rare in production; I'm starting to see Java 11 here and there but the majority of projects I've seen run 1.8).
The basic idea being that you know a tool that's obviously superior to a lot of alternatives, but you're blind to how different alternatives are superior to it because you can't grok the power beyond their initial "weirdness".
My understanding is that everyone understands the features and ecosystem benefits of languages they work with, but not necessarily understands those of other languages.
As a result, the value of their opinions on other languages may be mixed.
>Author seems to think the goal of SPAs was to simplify web dev, but it’s actually to allow you to build fully featured, highly interactive, apps in a browser.
Typical hipster web dev nowadays! Go back to the basics.
One entry in this space that doesn't get a lot of attention is ASP.NET Blazor [1]. Blazor gives you the option of writing views in C# that will actually compile to WebAssembly and run in the browser, or run on the server and send DOM updates over a SignalR connection, a lot like LiveView.
Yup. Been using it for a very complex data app. It's awesome.
You can choose Wasm for LOB apps or on fast connections.
You can choose Server hosting for apps that have about 100,000 users at a time. Mind you, that is 100,000 users concurrently. Which should be more than enough for most apps.
There are plans to reduce bundle size and server side resources, coming in .NET 5.
As per Microsoft benchmarks [1], it should cost about 100 USD per month (3 years, reserved instances, paid upfront) to handle 20,000 concurrent users. (This price does not include database, storage, etc.) That comes to about 0.18 USD per user for 3 years (100 USD × 36 months ≈ 3,600 USD, divided by 20,000 users), i.e. the cost of serving the app to 1 user for 3 years in total. Which seems pretty reasonable, especially if you have a decent, paid app. The cost comes down even more if only a percentage of your users are online concurrently.
On DigitalOcean, a similar machine would cost about 40 USD a month. Over 3 years the total cost per user would be about 0.072 USD (40 USD × 36 months ≈ 1,440 USD, divided by 20,000 users).
Blazor is certainly a very interesting and promising piece of technology. However, Blazor Server hosting mode is prone to latency issues and requires an always-on connection to run an application (so, no offline mode). On the other hand, the alternative Blazor WebAssembly mode requires clients to download a sizeable mix of the .NET runtime and other system DLLs on first use of the application (even a lightweight demo application with almost zero app-specific resources requires a download of 6+ MB of data in DEBUG mode and 2+ MB in RELEASE mode). Of course, the relevant Microsoft teams are working hard on further minimizing the size of the system bundle, but there are obvious limits to efforts in this regard.
Is offline mode widely used? I remember it being released to great excitement and then I never heard about it again. I assumed it died out when being offline became too much of an edge-case for your average user to be worth dealing with.
I think that "under 1 MB" represents the size of .NET runtime proper. However, additional required system DLLs increase the download size of a minimal application to the numbers I cited above [1]. Anyway, your example of Amazon's front page is interesting and is a good point (even though, in my quick test out of curiosity, relevant download size resulted in 2.2 MB [not authenticated] and 9.2 MB [authenticated] - quite a bit less, but still ...).
Remember, that 2+ MB is the download size of a bare-bones application. Relevant sizes for real-world applications, obviously, would be bigger (though, depending on the total size, the Blazor part might or might not be essential).
Download sizes are definitely a problem but they’re not really a Blazor-specific issue these days. Also I would be curious to learn about the relative density of JavaScript bloat vs Blazor download sizes because the .NET class library is much more feature rich before adding dependencies and I could imagine bundle sizes actually being smaller for similar functionality above a certain point. But you’re right that it’s probably never going to get as small as a pure HTML + CSS app with progressive enhancement and minimal JavaScript.
It doesn't seem like there's any major difference, so it's interesting that latency is perceived as a reason not to use Blazor Server in production but not as such with LiveView.
The audience for Blazor Server is probably an order of magnitude larger than the audience for LiveView, and it's mostly not the Hacker News crowd that needs to be won over, but the large enterprise with half a million lines of Web Forms that has grown into an unmaintainable behemoth.
And best of all, Server and WebAssembly have a lot in common, so it is possible to make that movement as needed. (I often prototype in Server, as it's a lot easier to debug, too!)
If you are using Blazor Server, you don't have to even think about REST APIs. Depending on your app, that might be good or bad. But if you primarily target browsers on Desktop, you can tremendously improve productivity, because you can directly use C# POCO Models for UI and backend data services.
It's not. You can literally have a simple "hello world" app in a handful of lines of code if you want.
I have to wonder if you actually use ASP.NET Core yourself, because ASP.NET Core 3 is, IMO, fantastic. ASP.NET used to be a bit clunky, and lacking in extensibility points, and a lot of people used alternatives like Nancy instead. Nancy officially stopped development around a year back, largely because dotnet devs just don't need it anymore. I was a long-time Nancy fan myself, ASP.NET Core 3 has all the best bits and more.
And you don't need to stick with the typical paradigm on controllers in one folder, views in another etc - feature folders work great. Hell, you don't even really need to use controllers, if Razor Pages are your thing.
it smokes Spring in benchmarks, often by a factor of 10x or more, and on plaintext it's 50x faster than Spring and keeps pace with even the fastest Rust/C++ frameworks, so it must be some awfully light baggage
Admittedly I don't have a lot of experience with enterprise web development in Java but I love working with ASP.NET Core and I think it's come a long way since the mess of Web Forms and classic ASP.
On the spectrum of client-server rendering, I am leaning very far into the server-side philosophy.
Right now, we use Blazor with server-side rendering for our internal dashboards. This feels almost perfect to me. There are still some rough edges and I still have to ultimately deal in terms of HTML/JS/CSS.
I've got a side project that takes the Blazor server-side concept to the absolute extreme. But, I haven't had much time to work on it this year considering... Hoping to get back into the crusade in early 2021. Looking to pilot it as a replacement for one of our Blazor server-side apps in 2H 2021. One of the bigger objectives is to develop a web application that is perfectly auditable and secure as possible. If you render final client views on the server, you can DVR precisely what each client is experiencing. The client source can be ridiculously lightweight and betrays nothing regarding the business. My prototype currently serves a ~50kb single-file HTML payload that runs the entire show for each client. I am looking at things as deep as server-side rendering of the mouse cursor. Looking at each client as a simple event stream from the server's perspective is a really neat way to organize this problem.
My experience with LiveView has been similar. I can highly recommend looking into Tailwind (and Tailwind UI). For me, it solved some of the pain of still having to deal with CSS and various preprocessors for it.
I can now create entire working apps using just Elixir code and the utility-style Tailwind classes within my server-side templates. I still need to write some js and css, but where LiveView reduced the need for custom js to a minimum, Tailwind did the same for css.
With these 4, it divides responsibilities very cleanly/pragmatically: Rails is your app framework. view_component is how you divide your view into an organized/flexible structure. StimulusReflex is the "reflexive" bridge between the two.
What happens when you need just a sprinkle of javascript to re-render a component when websockets/StimulusReflex are too slow (i.e. a user interacting with a color picker or something)? You could use Stimulus.js to sprinkle this interactivity... BUT you might end up with duplication between your view_components and your Stimulus.js controller. If you use webcomponents, then you can follow the open/closed rule: your view_component can only interact with your custom webcomponent, and your webcomponent knows how to render/re-render those custom bits. So it doesn't matter whether that webcomponent is rendered from the initial page load, StimulusReflex, or rapid js events (before they are throttled to StimulusReflex): it all goes through the webcomponent "front door" (see the sketch below).
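Something like this, as a rough vanilla-JS sketch of the webcomponent "front door"; the element and attribute names are invented for the example:

```
// <color-swatch value="#ff0000"></color-swatch>
// However this markup arrives (initial render, a StimulusReflex morph, or a
// rapid client-side event), the element itself owns its rendering.
class ColorSwatch extends HTMLElement {
  static get observedAttributes() {
    return ['value'];
  }

  connectedCallback() {
    this.render();
  }

  attributeChangedCallback() {
    // Re-render whenever the server (or local JS) updates the attribute.
    this.render();
  }

  render() {
    this.style.display = 'inline-block';
    this.style.width = '2rem';
    this.style.height = '2rem';
    this.style.background = this.getAttribute('value') || 'transparent';
  }
}

customElements.define('color-swatch', ColorSwatch);
```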
I've had great success with the Turbolinks + Stimulus approach. There are a couple of common patterns that you'll reach for, namely, lazy loading content (basically a <div> with a URL attribute that you have Stimulus load via AJAX) and really leaning into Rails remote-link / server javascript responses for modals and little page updates.
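The lazy-loading bit is essentially the content-loader example from the Stimulus handbook; here's a sketch, with the URL and identifiers as placeholders:

```
// <div data-controller="content-loader"
//      data-content-loader-url="/messages.html"></div>
import { Controller } from "stimulus"

export default class extends Controller {
  connect() {
    // Fetch the server-rendered fragment and drop it into the element.
    fetch(this.data.get("url"))
      .then((response) => response.text())
      .then((html) => {
        this.element.innerHTML = html
      })
  }
}
```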
It's so great to still be super productive and be able to crank out several pages of an app in a few hours vs most of the React / SPA codebases where you might spend the whole day on one little component.
This is my go-to as well, and it really has worked out quite well for me. Everything feels super responsive and I find it very easy to create reusable stimulus controllers.
I come from the "js sprinkles" approach that rails has always favored, and this feels like a logical next iteration. I sometimes wonder why Basecamp doesn't publicize Stimulus a bit more; I really only learned about it at rails conf. It feels almost like it could be a part of rails itself, and it's the kind of thing that is useful for almost any full stack rails app that does server side rendering.
I am using the combination too. You really get a very near SPA feeling with a lot less effort.
BUT Basecamp libraries (Turbolinks, Stimulus) are horrible open source libraries. There have not been any changes or bug fixes for months now (for both), and nobody knows what the actual state is, whether they are abandoned or Basecamp is working on something new. Then hey.com was released and people found new features in both frameworks, so one day they will release (again) completely reworked versions of these libraries that they have been working on in their private repositories.
Both libraries are really good together, but they would be better if developed more open by the community. They are basically abandonware the day they are released, until one day a complete rewrite and major version bump is released.
Curious how you would implement a refresh of a single row in a table after a job has completed? I have enjoyed this combination too, but I feel like you'd have to subscribe to multiple ActionCable channels to do this?
StimulusReflex can do something like this quite easily. It re-renders the entire page (suggesting that you do a lot of fragment caching so this is fast) and then diffs it on the client side with morphdom (IIRC). I believe you can do partial rendering now, but I haven't tried it.
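Not how StimulusReflex literally wires it up, but the shape of the technique is roughly this; the refresh trigger would be a hypothetical stand-in for the ActionCable notification:

```
import morphdom from 'morphdom';

// Re-fetch the current page's HTML and let morphdom patch only the DOM
// nodes that actually changed -- e.g. the one table row the job updated.
async function refreshPage() {
  const response = await fetch(window.location.href, {
    headers: { Accept: 'text/html' },
  });
  const html = await response.text();
  const next = new DOMParser().parseFromString(html, 'text/html');
  morphdom(document.body, next.body);
}
```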
The fundamental problem is not that the SPA pattern is bad, it's that it takes a lot of skill and effort to make a proper SPA. Obviously, skill and time are scarce resources and the result is that most SPAs are crap.
OTOH all these component based frameworks have definitely brought us a much better way to produce interactive experiences compared to the jQuery days. This is not related to SPAs at all. You can use React/Vue/etc in a multipage application. The problem is that to hydrate (make interactive) the server rendered HTML you now need to duplicate your markup between your server language and front end framework. The solution is to use the same framework in the backend and the front and write the components just once.
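For example, with React that's the renderToString/hydrate pair (a pre-React-18 sketch; JSX and bundler setup omitted, hence the bare createElement calls):

```
import React from 'react';
import { renderToString } from 'react-dom/server';

// The same component definition is used on both sides.
function Counter({ initial }) {
  const [count, setCount] = React.useState(initial);
  return React.createElement(
    'button',
    { onClick: () => setCount(count + 1) },
    `Clicked ${count} times`
  );
}

// Server: produce plain HTML for the initial page load.
const html = renderToString(React.createElement(Counter, { initial: 0 }));

// Client: hydrate that same markup so it becomes interactive, with no
// second copy of the template in another language.
// import { hydrate } from 'react-dom';
// hydrate(React.createElement(Counter, { initial: 0 }),
//         document.getElementById('root'));
```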
No, it doesn’t take that much skill. I think this is the source of the problem.
The stupidity and hostile fear of originality I detect in these comments is a very real reflection of my professional experience. This dependence on frameworks and inability to imagine anything beyond the common SPA really scares the shit out of me, knowing I will be coming home from a military deployment soon and returning to a corporate world that is scared to write code.
Whenever non programmers ask me what programming is like I tell them it’s full of the most insecure people you can ever meet. People are afraid to do their jobs, need a framework for everything, and always talk about how hard it is (when it isn’t).
Uhh, that is not how it works. I can write code, but if you tell me to go and write in assembly or C, I can't, because I think it's outside my circle of competence.
I have been writing software for 20 years. It's exactly how it works. You can, after a necessary onboarding period, accomplish the task you were hired to accomplish or you can't.
Agreed. This is indeed how it works. Changes in company culture trends somewhat lend to the perception of newer programmers that they hold competency beyond what they do, which contributes to the overall problem of software being hard and an unstable industry.
I used to not be able to code, and now I can. By that I mean I used to not grok the purely abstract domain of encoding meaningful computation, only the concrete domain of "when I type a certain sequence of keys in ${languageX} and press some other stuff, stuff happens."
Now that I better understand the abstract domain that the concrete domain of programming languages translates to, I am much better able to pick new tools up and understand whether or not they should be picked up.
Recently I got to work with hybrids.js (webcomponents), which makes the hydration part nice, because it is normal markup with whatever data you have on your SSR pages. It simplified things quite a bit.
> The fundamental problem is not that the SPA pattern is bad
While it's not bad, an SPA forces you to do view validation twice, once on the backend and again on the frontend. (Generally.)
Let's say that on HN not everyone can view the upvote score of a comment. With classic server rendering, you can put the check in the view directly (though not recommended, it's sometimes useful to cut corners) and it's guaranteed that those users won't see the score.
OTOH you can't do that with an SPA, since you can easily sniff the content via the API response.
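A tiny sketch of the difference, with hypothetical names and a made-up karma threshold; in the server-rendered version the check runs while the HTML is built, so the score never leaves the server:

```
function renderComment(comment, currentUser) {
  const canSeeScore = currentUser && currentUser.karma >= 500; // hypothetical rule
  return `
    <div class="comment">
      <p>${comment.text}</p>
      ${canSeeScore ? `<span class="score">${comment.score} points</span>` : ''}
    </div>
  `;
}

// An SPA's JSON endpoint would have to strip the score server-side anyway,
// because anything it returns is visible in the network tab.
console.log(renderComment({ text: 'hi', score: 42 }, { karma: 1000 }));
console.log(renderComment({ text: 'hi', score: 42 }, null));
```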
Well, we could go back to the original model for the web, using REST & HATEOAS without arguing or even really thinking about it, by building hypertext-based applications.
Of course, the hypertext we have, HTML, wasn't completed, and it leaves a lot to be desired. I have tried to fix that with htmx:
One quick question: How should I be handling server side errors with this? Like, normally I’d check the status code coming from the server and throw up an error modal if it’s an error - what’s the equivalent mechanism here?
I wouldn't consider it particularly elegant, but it's straightforward and has worked well. It uses the "complete.ic" event to trigger whenever an intercooler request finishes, and then uses the response status/code/text to display an appropriate message.
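Roughly like this, assuming jQuery (which intercooler already requires); the exact handler arguments for `complete.ic` are an assumption from memory, so check the intercooler docs, but the gist is inspecting the response status:

```
function showErrorModal(message) {
  // Hypothetical stand-in for whatever modal/toast component the app uses.
  window.alert(message);
}

// Fires after every intercooler request, success or failure.
// Argument order here is assumed; the important part is xhr.status.
$(document).on('complete.ic', function (evt, elt, data, status, xhr) {
  if (xhr && xhr.status >= 400) {
    showErrorModal(`Request failed: ${xhr.status} ${xhr.statusText}`);
  }
});
```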
That's where I see a lot of potential. With Stimulus Reflex I have the convenience and development speed of Rails and I can enhance it with StimulusJS sprinkles and reflexes to create great experiences for the user. The interactivity I get is enough for 90%+ of the webapps out there and the complexity to do it is far less.
How do you handle things like menu pop-ups and toggling/hiding content? I keep wanting to use htmx and its like, but there always seem to be very common tasks, like having an expandable menu on mobile, that don't have a great solution in these libraries, and I have to write vanilla JS.
I've settled on Alpine instead for the time being because it has some data management built in. I'm thinking I'm going to have to switch to Vue on my current project because I feel so much less productive with Alpine.
htmx is focused on increasing the network-oriented expressiveness of HTML, rather than on pure front end enhancements.
For something like menu popups or toggling content on the front end, I would expect an application to use a front end framework like bootstrap, or perhaps WebComponents, or a scripting solution like Alpine, in conjunction with htmx.
Alpine and htmx complement one another nicely, particularly since htmx 0.2.0, when we started firing kebab-style event names.
Basically, you can toggle visibility on your pop up menus or modals with standard CSS/HTML. The visibility class/attribute can be controlled declaratively or imperatively without much JS. You can even use CSS transitions to add nice smooth animation to your menu/overlay.
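For instance, a minimal sketch of the class-toggle approach for a mobile menu; the IDs and class name are placeholders, and a CSS transition on `.is-open` does the animating:

```
const button = document.querySelector('#menu-button');
const menu = document.querySelector('#mobile-menu');

button.addEventListener('click', () => {
  // .is-open sets display/max-height in CSS; no framework involved.
  menu.classList.toggle('is-open');
  button.setAttribute('aria-expanded', menu.classList.contains('is-open'));
});
```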
Me too! As a developer mostly with desktop software development experience, htmx is so much more straightforward to understand and easy to code; it's a perfect choice for me to implement pages such as 'License Upgrade', 'License Renew', and so on: https://docxmanager.com/miscpages/upgrade-to-standard-from-b...
Do you have any examples or favorite articles about using it? I'm exploring the space, but have really limited time to sit down with all of the different options today. HTMX is one that is very intriguing.
I'm building an app with a TALL stack now (Tailwind, Alpine.js, Laravel, and Livewire) and I am incredibly productive. Very little build step required (to compile Tailwind to reduce the size based on which classes are used in .blade.php files). CRUD, Image uploads etc are so easily done I am such a fan. I was skeptical at first, but now I love this way of building web apps. No idea how well it scales, but for a simple MVP I couldn't have asked for a better stack.
Seriously, after all these years of the JavaScript, SPA, React, Redux craze, we're back at PHP, CSS and minimal JS all over again. None of those new JS frameworks lets you build custom web apps faster than Laravel or Rails. I'm really curious about what kind of side projects all these people are building with e.g. Next.js alone. Is there any kind of web app that doesn't need authentication, authorization and database access?
I write full SPAs at work, professionally. We use React, Apollo, GraphQL, Webpack etc. the TALL stack is such a breath of fresh air. I can't even begin to explain my joy. Just joy.
I now dread the time I have to write those infinite lines of JS, conflicting dependencies, slow build times, React hook state management etc.
When I speak with my colleagues about tech, they all seem to love the entangled mess of JS dev and willing to jump on any new framework that gets released. I always push for good old reliable SSR. Often get called the weird one for being young, yet favoring old school tech, though.
I guess I prefer a better developer experience and shorter time to production/market, rather than spending days trying to setup a project and figure out weird quirks and issues with such a complicated mess of a "serverless, modern day web application".
Regarding auth and db, the ones I've spoken with that prefer JS way of doing things like to combine a bunch of existing offerings into one, eg Auth0 for Auth, Prisma for DB and so on. The more potential points of failure, the more attractive it seems to them.
When saying that Laravel/RoR gives you all that by running one simple command, I get blank stares. Hard to believe, I know.
We've come to the point that being able to run code and render html on a server is considered a new feature (aka SSR and serverless functions). I recently watched the Next.js conf and i couldn't help but giggle.
- Do you want functions? Use our proprietary platform.
- Do you want to store content? Use Cloudinary, AWS.
- Authentication? Auth0, Firebase.
- Database? Use FaunaDB and our super cool new query language that nobody knows and cares about.
> Congratulations. You've built your new webapp on Jamstack. Now you have to manage large bills across hundreds of 3rd party services, vendor lock-ins. Also good luck trying to reproduce all that on a development machine or organize your code.
On the other hand you can just: laravel new project-name --jet and deploy on a single linux machine or heroku and you get:
- Robust and customizable auth, password reset, 2FA
- A serious DB like PostgreSQL and an ORM
- SSR by default with 0kb bundle size!
- Any CSS tool you need
- Easy APIs, tokens and permissions
- Truly open source. You have full control of your code and data
So yeah it's just a command but yikes, who uses PHP in 2020, right?
The problem is then you'd have to use PHP or Ruby. Much as people say they've improved, they're not better than TypeScript. I wish someone made something like Laravel for TS. Sometimes I look at Laravel and think, sure it's great that they did all of that and are even making a bunch of money, but why did it have to be PHP of all languages?
Ah, sorry. I don't have much experience with RoR. Laravel, mostly, and Laravel auth is one command away. I assume RoR won't be too far away from that, too.
Well, whenever I did Rails, that was pretty standard. Laravel's baked-in auth is probably why it's so much better than Rails; that, and queues, Telescope, etc... all the nice-to-haves that come standard in Laravel but are extra in Rails apps.
For all the speed bumps PHP 7 delivered Laravel typically scores lower than Django and Rails on Techempower benchmarks. This and the reality that PHP roles typically pay 20% less than Ruby, Python or Node has led me to ignore Laravel.
I haven't built anything that requires significant performance tweaking other than some caching and SQL optimisations. Maybe PHP 8 will bring an even better performance when it gets released
> PHP roles typically pay 20% less
Yeah, true. Hence why I am a React dev professionally
I'm not saying any of Django, Rails or Laravel are fast compared with Node, ASP.Net or Spring but what surprised me was how PHP 7, which is a lot faster than Ruby or Python, somehow managed to fall behind when Laravel was added into the mix. It's as if PHP's performance gains only really apply to raw PHP or lightweight frameworks.
That's surprising to hear. Do you have any links that go into this (or show benchmarks)? I generally avoid PHP, but I've been thinking of looking into Laravel for when I do need to use PHP.
I have a theory. In the old days web development was looked down upon by "real" programmers. But then the WWW really took off and people started moving over from the C/Java world into the world of JavaScript. These are the people who insanely over-complicated web development when they tried to make it as complicated as the development world they came out of.
JS is quite a hacky language. Trying to turn it into Java neuters some of the benefits of being hacky (I'm looking at you, Typescript).
But at some point, it's neither a hacky Lisp like language, nor is it stable like Java or C++. It's just a middle ground that does these things poorly.
It would be nice to have a new programming language from scratch built to handle all the async/reactive mess. But until then JS is our best bet.
I hear this so often, so as an exclusively js/ts dev I have to ask... what about this is “done poorly”?
```
async function getData(url) {
  try {
    return await fetch(url);
  } catch (e) {
    handleError(e);
  }
}
```
Because that (metaphorically speaking) is 90% of what I write. Throw in the occasional closure, `map()` or `reduce()`.
My day to day headaches are never from TS, and always from unpredictable DOM api/layout/style behavior.
I guess I just don’t have any frame of reference for how “nice” a language can be. Is it just that JS has so many “bad parts” and footguns that I’ve gotten good at ignoring?
FYI, `return await` is usually redundant and just adds an extra tick (because the caller has to await this function anyway), but inside a try/catch like this it actually matters: without the `await`, a rejected promise would be returned to the caller instead of being caught by this `catch` block.
I think there is another layer to that conversation. Frameworks become bureaucratic and boring because they are developed by large teams for large teams. Most developers are working on small projects and need more fun and less maintaining of huge amounts of boilerplate code that recreates the browser.
The framework that I feel makes development less ugly is svelte. But still, I really don't like the idea of heavy client side websites. It really makes everything more complicated and the user's device slower.
I love the simplicity of Turbolinks,
I love how clean svelte code is and
I am trying to figure out the "glue"
The only weak point for Svelte is that there is no inline template composition (yet; there's already an RFC). Anything that doesn't look like a function is weak on composability. Imagine recursively calling the template itself with a bunch of conditional logic, where each branch renders different nested templates and each of those calls back into its ancestor root. Svelte is a joke for this kind of thing.
I agree with you. I did take a look at InertiaJS just because Laravel was friendly towards it. Looks sweet.
Right now the "glue" I came up with is a silly connection between the server side router and the "Entry" component of Svelte. It works, it's 10 lines of code, it's clean and the website is super fast. But, both Inertia and my solution feel somehow "hackish". Not a proper pattern.
But really, anything to avoid client side routing, state management and auto-magic code splitting.
Wow, InertiaJS looks like exactly what I've been wishing for. Component-driven SFCs in Vue are the best tool I've used for interfaces in 20+ years, but I don't love having an API layer between the client and server. This is really interesting. Do you have experience with it and know how it handles frequently live-updating pages?
I used Turbolinks for the first time on a new Rails app about 2 years ago and was floored by the impact - it felt like a SPA in terms of no page loads and overall speed.
I'm convinced that this is the solution for the majority of use-cases, combined with selective usage of either React components or something like Stimulus where you need more sophisticated UI components.
If you can do server side great: do it. The difficulty is accurately predicting how much JavaScript you'll end up writing. If your client-side JavaScript is comparable in complexity to your SSR you'll eventually end up with the worst of both worlds.
If a client tells me what they want built, I bet I'd be able to roughly guess how much js there's going to be, probably within a few hundred lines for a, say, 3 month project.
The larger the project, the larger the margin of error, but still, it's really not that hard for the vast majority of work we do. Or at least for what I do: e-commerce, enterprise apps, etc.
That sounds logical if you are doing client work with a defined scope. From personal experience working in the startup world, which I imagine a lot of posters on here are, you don't always know what you are building or what the end game is.
I'm not sure that's such a huge problem. HTTP routing provides a wonderful architectural seam so that we can use different solutions for different domains like `/profile` and `/document-editor`. We can create rich client-side experiences without creating a monolithic SPA. And as long as we make sure we have those architectural seams we have the flexibility to decide.
Organizing my HTML into pages is the easy part. The hard part is the rich client-side experience: things like building a sortable/filterable table with a rich datetime picker where the data is displayed dynamically in a chart with zoom/pan capabilities. That's why I build SPAs: I've already bit the bullet to get highly dynamic client-side code, it's just as easy to construct my pages client-side as well.
My argument for SPAs rests on the completely subjective yet I feel incredibly powerful impact of latency. When anything takes more than 50ms to react, it becomes mentally jarring to the user. Whether it is typing in an SSH session, clicking a menu with a mouse, auto-completing a box, etc - all these things generate a completely different human response and relationship with the application if they get below that threshold of latency. In my experience, it's nearly impossible to get non-SPAs across that threshold for large sections of their functionality, while for SPAs it becomes more or less the default. So people can make all the technical arguments they want, but to me the human factor of that subjective feeling trumps it all.
That's not possible for SPAs outside localhost. Every single SPA I know is clunky, including gmail which is probably one of the most barebones. The lack of visual indication that something is happening or downloading alone is infuriating with SPAs. Most of them reinvent the browser in a very poor substitute that invariably fails both to be practical and to mimic a native mobile app (I think the latter is the reason for so many SPAs).
SPAs are like CGI: you only notice it when it is bad. I can assure you, you use many SPAs that are not slow and clunky, you just don't realize they are SPAs.
I notice all of them, 100% of the time. Twitter has a fairly good one, but even that's very clunky. I'm not saying they are all slow, but they are clunky, jumpy and unpredictable, which in many ways is worse than a webpage like HN that is 100% consistent but sometimes times out. Predictability for me is more important than pay-it-later responsiveness. E.g. when I press the Pay button I'm OK with waiting 20 seconds for the payment to process. What's unacceptable is the button standing still, not doing anything, until a few seconds later, by which time I've pressed it 10 times.
Because like other commenters that chimed in, I can notice most SPAs, and all SPAs I've used had annoying performance problems. By problems I don't mean a sudden request that takes half a second, but that every UI interaction feels subtly slow.
I don't know where people get these performance targets of "below 100ms and users won't notice". I notice. I notice if UI responses take longer than two-three animation frames, the same way I notice when a game is running at 20 FPS and not 60 FPS.
Fastmail is laggy in places, but mostly OK. But I'm surprised at you choosing Google Office as an example. Google's office suite is a poster child of slow, clunky SPAs.
>including gmail which is probably one of the most barebones.
And yet, it still uses 130 flipping megabytes of RAM, more than Youtube does, as well as more CPU than Youtube does at idle, as well as requiring several seconds to "start up" when first opening a tab. Ridiculous.
I think gmail is a bit of a parody of itself nowadays. For extra hilarity someday, try pressing "c" for compose as the site is loading. Literally seconds to get a compose window showing.
Most of these SPAs you speak about have loading spinners, grayed-out placeholders, all sorts of tricks to make it feel like you're in control while the fetch() roundtrip to populate the page data completes. The borders of your screen may not change, but the page filling with elements after a delay can be just as jarring.
If a non-SPA page loads near instantaneously because the client-side rendering isn't bloated, it can feel just as good as an SPA with spinners.
You can also use tools like turbolinks the author recommended, to turn your non-SPA into an SPA.
Displaying a loading spinner in under 50ms while waiting for a 500ms request to complete doesn't necessarily build a strong relationship.
SPAs can start off with the good intention of being low-latency and responsive. But when you're backed by a web server, you still have to account for it though optimistic rendering, prefetching, and loading states. Getting it right can be an excellent UX, but it's easy to get very wrong.
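A small sketch of what "getting it right" tends to involve in practice; the endpoint and selectors are placeholders:

```
async function optIn(button) {
  const spinner = document.querySelector('#spinner');
  const status = document.querySelector('#status');
  button.disabled = true;   // avoid double submits
  spinner.hidden = false;   // feedback well under the 50ms threshold

  try {
    const res = await fetch('/api/opt-in', { method: 'POST' });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    status.textContent = 'You are opted in.';
  } catch (err) {
    // The slow/failed path is where SPAs usually fall down: say what happened.
    status.textContent = 'Something went wrong, please try again.';
  } finally {
    spinner.hidden = true;
    button.disabled = false;
  }
}
```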
I've designed and programmed highly interactive SPAs for customers large and small, and the reaction is invariably "I can't believe this is a website, it feels like an app". Being non-technical, they can't put their finger on it, but I know that while the base design plays a part, so does the concrete UI implementation.
I wonder how many of the people crapping on SPAs are actual front-end or full-stack app devs, or otherwise people developing very close to the UX.
I'm sure there are some disgruntled UX devs who are running back to SSR, but when I come to the bimonthly "SPA sucks" thread on HN, for the most part it feels to me as though the critiques are coming from people for whom UX and front-end development are secondary concerns.
> Being non-technical, they can't put their finger on it
This is an interesting perspective. To me, SPAs blur the line between what's happening locally and server-side, so my impatience with slow functions on an SPA page is much greater than the same thing happening on a SSR page. Maybe it's just my conditioning to expect things on a single page to work much quicker as opposed to moving between pages.
IMO, well-designed SPAs should make use of the context-switching function of a complete page fetch in situations where the user might need to wait a relatively longer time for something to load instead of loading bars/spinners.
It's a crap threshold. 100ms is 10 FPS. You can most definitely tell when something is updating at 10 FPS, vs. 20 FPS (50ms). Movement stops being jarring at ~30 FPS (33.3ms), but you can most definitely tell apart that and 60FPS (16.6ms).
> When anything takes more than 50ms to react, it becomes mentally jarring to the user
Yes, but you often pay a huge penalty on the first page load, which for many use cases is the most important. I'd be so happy with a turbolinks version of GMail.
> they also wholeheartedly embrace the thing everyone tries to avoid: mutable state on the server.
For most apps in Phoenix LiveView, it's probably best to think of this as a "relatively smart caching layer". Keeping state on the client invokes distributed-state concerns anyway, and in the case of Phoenix, the VM is at least well equipped with the relevant primitives to make distributed state easy. Basically the only cost you have to pay is latency: you are making a trip back and forth to the server, which could be 100s of milliseconds or more in bad situations, or really bad ones, like driving under a tunnel or being in some stretches of subway.
> How do they do this? Well, a lot of WebSockets, in the case of Reflex and LiveView, as well as very tightly coupled server interactions. As you can see in the LiveView demo, which I highly recommend, these frameworks tend to operate sort of like reactive DOM libraries on the front end – in which the framework figures out minimal steps to transform from one state to another - except those steps are computed on the server side and then generically applied on the client side. They also do a lot more data storage & state management on the server-side, because a lot of those interactions which wouldn’t be persisted to the server are now at least communicated to the server.
Reminds me of Wicket's AJAX support, which is still the best web development experience I've ever had. You keep the state in the server-side session, when the user pushes a button you update that state and re-render those parts of the DOM that changed because of it. It dovetails nicely with the whole framework being properly component-oriented - rather than the page-oriented MVC style, you make encapsulated, reusable widgets that know what their own state is and how to render themselves based on it, and the actual top-level page becomes almost an afterthought.
With all due respect, I don't think the question is "how do we simplify SPAs?"; I think the question is "why do we need to run any app in a browser?" Look at the work associated with the "Next Billion Users" project [1]. Most of our assumptions about how-and-why are merely based on luck-and-whim. We didn't arrive at SPAs through some elaborate Grand Design. I just don't understand what a web application offers that cannot be had with a straight client-server application. Are walled gardens and app stores really tyranny when the garden is limited to a certain company/domain? For example, if my bank offers a client-server app that is walled off from the rest of the Internet, should I consider this stifling? If I was working for this bank and developing their client-server app, should I consider this a step back in my career?
My perspective on this question is that it's convenience. If the app is something relatively simple, I just want to use it in the browser and not have to go through the steps of installing software that I will probably only use once. Also, using a webapp is less dangerous than installing a program, in the sense that it can't read your files, install malware, etc. I can relatively safely use an unknown webapp with confidence, but I would only install a program on my actual computer if I trust the author.
I've been building a site with Svelte/Apollo/Hasura. It's been pretty amazing so far. I think if you can get away with not needing SEO it's hard to beat.
That said, something that requires a lot of thought is not just constantly displaying spinners. When I look at projects built by experienced FEs it seems like there's a lot of patterns to work around this that I'm not aware of.
> That said, something that requires a lot of thought is not just constantly displaying spinners. When I look at projects built by experienced FEs it seems like there's a lot of patterns to work around this that I'm not aware of
I'm not sure what patterns you're thinking of. To me this just sounds like implementing caching and background fetching.
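Roughly this shape, as a sketch (an in-memory Map stands in for a real cache, and the callback for whatever renders the view):

    // Stale-while-revalidate in miniature: paint whatever we already have,
    // then fetch fresh data in the background and update quietly.
    const cache = new Map<string, unknown>();

    async function loadWithCache<T>(
      url: string,
      render: (data: T, stale: boolean) => void
    ): Promise<void> {
      if (cache.has(url)) {
        render(cache.get(url) as T, true); // instant paint, possibly stale
      }
      const fresh = (await (await fetch(url)).json()) as T;
      cache.set(url, fresh);
      render(fresh, false); // background refresh, no spinner needed
    }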
Yeah, it's mostly that--generally also some design stuff to hide it. It's just so tedious, or you can buy into a big framework with all the downsides that implies (big bundle size or a lot of tooling for code splitting).
I found Blazor to be a good middle ground between SPAs and frameworks like Django or RoR.
The problem with frameworks like Django is that they have no real concept of UI components. They have glorified ways of copy-pasting HTML code. The cognitive burden when creating a fairly complex app, like a set of dashboards, is very high in Django.
The reason people flock to React is because, once they develop a component, it's easy to re-use. And that is the drug that fuels them. However, the learning curve is high.
I found Blazor to be a good middle ground, because it has the concept of Pages (Razor pages), which are linked to URLs, and those pages can contain re-usable components. The framework is set up for basic site structures with a lot of options for customization. For example, you have default layout pages that are included in every output. Great for putting a basic menu and footer system in place.
Those working on ASP.NET MVC and the like would find it a breeze to work on Blazor.
Couple that with the option of using C# for the whole application, and it is very enjoyable to develop with.
I have used Angular, React and Django. Angular, for me, is too much complication for simple stuff, but perhaps works well with complex apps. However, learning TS, as good as it is, was a turn-off for me. React is easy to start with, but as your app grows in complexity, it becomes hard to manage and reason about things like state management, routing, etc. Given the choice between React and Angular, I would definitely choose React. Django is very good for sites like blogs or info pages, basically wherever there is a large amount of static content pulled from a database. Of all the above, I found Blazor to be very productive. I can also mix and match different paradigms, and Dependency Injection is a godsend.
Especially since it works with every tech stack, with pretty much no code changes required to your existing app. The next iteration of Turbolinks will also work for form submissions without any server-side code, according to what's been publicly discussed by Basecamp employees. That, and partial page changes will be possible with Turbolinks Frames, which is a huge upgrade.
Just be mindful that Turbolinks alone isn't enough to replace a SPA. It's typically paired with Stimulus but you can pair it up with jQuery, Alpine or whatever you want when you want to do more than page transitions, form submissions and swapping content in a specific area of a page.
I guess it's still targeting the client-side SPA case, but I think Svelte[0] is worth a shoutout for taking a different approach than React/Vue/Angular/etc. The Svelte "framework" is really just a compiler that produces a small bundle of vanilla JS that updates the DOM directly (no virtual DOM).
Maybe the main issue here is that while SPAs make life so much better for frontenders - proper testing, immutability, one-way data bindings, speed of development... - they make it worse for absolutely everyone else: backenders who were able to hack a passable frontend or modify a proper one; end users who deal with bigger and slower webpages; devices struggling to render websites; backends that now need way more CPU to do SSR than they ever needed with traditional websites; beginners who were able to learn by looking at the HTML & CSS of a page...
SPAs obviously have some positive sides, e.g. state management, but that comes at the price of increased complexity, especially if server-side rendering is required.
In the majority of cases, especially when building tools like back offices, it's not what you want; you just want to be able to render forms and tables and save them in a convenient way.
I've found the turbolinks + stimulus.js combo to work surprisingly well. Actually, I use only one stimulus.js controller in the majority of cases - one that makes an ajax request and reloads the page (with the help of turbolinks) on success. If page load time is fast enough, the user experience is the same as if you had changed the DOM with js manually. Of course the requirement is that the reloaded page reflects the changes.
Other stimulus.js controllers are there for the cases that don't fit into the aforementioned pattern. That sounds primitive but can take you a really long way without turning your js into a monster.
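A minimal sketch of that one controller, assuming the pre-Hotwire Stimulus and Turbolinks packages (the controller name and markup are mine):

    // reload_controller.ts - submit the form via fetch, then let Turbolinks
    // re-render the whole page so the change shows up everywhere.
    import { Controller } from "stimulus";
    import Turbolinks from "turbolinks";

    export default class extends Controller {
      async submit(event: Event): Promise<void> {
        event.preventDefault();
        const form = this.element as HTMLFormElement;
        const response = await fetch(form.action, {
          method: "POST",
          body: new FormData(form),
          headers: { "X-Requested-With": "XMLHttpRequest" },
        });
        if (response.ok) {
          // Reload the current page through Turbolinks so the updated HTML
          // is swapped in without a full browser refresh.
          Turbolinks.visit(window.location.href, { action: "replace" });
        }
      }
    }

Wired up with something like <form data-controller="reload" data-action="submit->reload#submit">, that covers the "ajax request, then reload" cases.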
In addition to that you don't need to care about the routing, html validation just works, you can wrap existing html components and even inject react apps here and there if you really need to.
[EDIT] Forgot to say - this approach does not force any language or architecture on the server-side
I'm really liking turbolinks+stimulus. While I'd probably still reach for Vue for more intricate work there is a lot of mileage you can get out of turbolinks with a small set of stimulus controllers. It's not just for Rails either: the Changelog people use Turbolinks with Elixir/Phoenix [1] for their site, including an audio player (although not with Stimulus), and I used it for a recent side project with Django [2].
I use ClojureScript with a React interface to create SPAs for data analytics UIs at work. It is much simpler and easier to manage. For me, applications developed in plain JavaScript tend to get complex over time as the app grows. State management is really simple in ClojureScript. I was able to create reusable components which can be ported over to new projects with little effort. I now find it much easier to develop the front end in SPA format with ClojureScript than to render the UI structure with a server-side stack. The app also tends to feel a lot faster as an SPA, as you can see in this video https://www.youtube.com/watch?v=islPRG2_1vU
The hydrogen client for matrix.org (1) has an interesting approach using vanilla JavaScript and indexeddb.
From a previous this week in matrix(2): "Hydrogen tries to be the lightest Element. It is written entirely in vanilla javascript (no React, no Webpack) for complete control, structured as an MVVM app, leveraging the raw performance of indexeddb."
This hasn't been my experience. 10 to 15 minutes of looking at React Router's documentation was enough to get started with it. The major version bumps, however, have been a little taxing. If it takes six major releases to get the paradigm right, you should've done more research before moving onto the implementation phase.
A while back I was faced with using whatever god-awful backwards-incompatible version of react-router, or instead going for a routing solution that wasn't as insane. I don't remember what I picked, but I think it was page.js. Something that's been around forever. It was astounding to me how much it simplified the whole project. I truly can't figure out what react-router actually brings to the table, considering its version history, compared to the myriad of alternatives.
The future is Next.js with SSR first and selective client side hydration. Ignore the jamstack, it's going to lose and go away. Requiring a JS download first to show a dynamic site is a recipe for bad performance.
I would also prefer something like Next.js with partial hydration. But what I want in that is probably a much bigger ask: I want the partial hydration to be performed automatically with static analysis. Something like Svelte (real DOM codegen, no need to ship a library to the browser) would be especially nice, but I don’t want to give up JSX. In the end, this is probably well outside the scope of a framework coupled directly with React, and I suspect that if I want it badly enough it’s something I’ll need to build myself.
I'm not sure if we're saying the same thing, by partial hydration I mean only wiring some specific client side components for interactivity. As in only shipping the react components over the wire that need to be rendered on the client for user interaction. Meaning the overall page is vanilla HTML, but some subtrees are React wired components. Complete with state management that can work between all of them. In which case I'm thinking it would be fine to mark components individually as server/client.
Yes, that was what I meant as well. But as I’ve read about the approaches currently available, all of which require manually opting in per component, I’ve had two thoughts:
1. Manually marking a component for hydration likely means shipping the whole structure below as JS, which may hit diminishing returns fairly quickly.
2. At least if you’re using TypeScript, enough is known (or could/should be) to determine which components are truly static. Next.js already has a rudimentary version of this. That kind of analysis could be a huge DX improvement.
I am currently working on projects with Django (jinja2), Laravel (blade), Node (ejs) and Next. I can't possibly understand how so many people in this thread prefer server-rendering frameworks. As soon as you have a non-hello-world application (= complex state shared across multiple components / pages), an SPA is so much easier and more productive. And Next brings this experience back to public-facing frontends because it supports prerendering, among other things.
Isn't SSR (or even better SSG) the M in JAM? It's a marketing term anyway and it doesn't really need to "lose", we can just talk about the technologies instead.
Next.js gives you a great toolbag to choose from for each challenge, it's definitely going to stay around and keep growing like crazy.
You can use much of the J and the A during the SSG stage that then delivers the M to the user. And very user-specific and/or dynamic things will have to remain some sort of generated shell and the content fetched on the client anyway.
It's interesting to me that in video games we talk less about frameworks than we do engines. Doing a few tutorials, I was shocked at how little code I had to write these days with something like Unity or Godot.
Breakout indie success stories are often led by artistic types, since it seems harder to teach a typical programmer to draw than to teach an artist enough code to get by. And I think the scene is richer for it.
Is there a good reason the web isn't like this? Engines, rather than frameworks, that have done the hard optimisations and made the best practice decisions for us?
A case of maintainability? Most games (although it's changing these days with loot boxes and micro-transactions) don't really change in large ways after the initial development.
A website, often these days, is an ever-evolving beast, forever changing, and adding features to it is (apparently) imperative to ensure that the gravy train keeps flowing. It's one of the reasons why I think there's an increased focus on developer productivity in web frameworks, libraries, and tooling rather than a focus on delivering a better solution.
How often do you deal with blitting textures into vram when working on your web applications? If not very often, that means you're building on top of a stack of software that includes a rendering engine, equivalent in many ways to Unity or Godot.
> you're building on top of a stack of software that includes a rendering engine
Yes that's pretty much a web browser, isn't it? There are many types of engines, such as a business logic engine. The distinction between engine and framework is a little vague but what I'm getting at is the low-code interacting-components design approach, rather than code first. I'm wondering why that hasn't taken off.
People use Godot to create GUI apps, which has problems with the final product but in terms of workflow it works to make something functional.
Games aren't low-code. Even with UE4, people end up learning C++ because there's only so much you can do with the Blueprint flowcharts before it becomes an unmaintainable mess that grinds your development speed to the ground. It's also pretty much a law of nature that every nontrivial game eventually develops a scripting language (unless it's already written in one, e.g. Python).
But it's absolutely true that you can teach artists to code - within the limited domain - much easier than you can teach programmers to draw or write stories.
A personal opinion: I'm not entirely sure that being entirely art-driven is necessarily a good direction for games either. You can build a pretty game out of default UE4/Unity/Godot building blocks, but it'll be rather quickly recognized as a cookie-cutter game with custom art. If you want to innovate on, or even carefully tune, the gameplay itself, you need to code your way out of engine defaults.
Point taken about the amount of code - my point was more about how far you can get with the core structure before having to dive into code. Whereas any React tutorial starts with code. You don't really start by laying things out and connecting the dots as such, you start by typing.
It wasn't so much about art-driven games (I found Gris super boring, which would be a prime example, though many loved it), but more about people from different backgrounds bringing new ideas. There are all kinds of programmers, of course, but as a whole I'd say artists or musicians or writers are into different things. As someone who has played games for a long time I usually appreciate something fresh or interesting. IIRC Stardew Valley and Undertale are two examples by people who didn't identify as programmers.
> But it's absolutely true that you can teach artists to code - within the limited domain
I think this is the promise of computing that we've failed to reach yet. Any person has a problem, and with a little effort, they can code up a fix. Coding as a skill like putting up some shelves, rather than engineering an entire house. Some tools get some of the way there (like Zapier, IFTTT, or AutoHotKey) but I'd love to see how far we can go.
> my point was more about how far you can get with the core structure before having to dive into code. Whereas any React tutorial starts with code. You don't really start by laying things out and connecting the dots as such, you start by typing.
I agree, this is a good point and a strong contrast.
Given how the React model is a relatively simple and specific one (DOM being a tree, data bindings forming a DAG), I'm somewhat surprised how little we seem to have in terms of UI builders. React seems to yield itself to visually constructing working, interactive sites out of pieces. And yet, as you note, everything about it starts with code.
(I guess we may be in the "startup phase", where all GUI builders are SaaS tools made by companies, who try to vendor lock you.)
> I think this is the promise of computing that we've failed to reach yet. Any person has a problem, and with a little effort, they can code up a fix. Coding as a skill like putting up some shelves, rather than engineering an entire house.
100% agree. That's why I cheer people who solve their personal and professional problems with an Excel sheet, or a half-baked mix of scripts. That's why I love tools like AutoHotkey (Windows) and Tasker (Android). Coding is a specific mindset, but that doesn't mean that you have to be either helpless about computers, or a software professional. Much as a carpenter or a remodeler would cringe at the way I fix up things around the house with judicious application of duct tape and power drill, but as long as they work, are safe, and my wife doesn't complain about aesthetics, I'm happy (and get to save money).
First thing is to identify whether you're creating a web app or a web site. The latter is characterized by content-driven approaches and workflows around HTML fragments received from editors, aggregators, syndicates, product catalogs, or other third-party sites. These sites tend to be PHP-heavy, but don't have to be; a competent markup processor or "isomorphic" web composition processor (running both server-side and in the browser, such as mine, based on SGML no less) is specifically designed for this purpose, with straightforward and sophisticated HTML-aware type checking, composition, templating, and escaping.

For the former category of highly interactive web apps, my recommendation would be React or Vue, based on the mindshare of these frameworks. Don't let your devs use these frameworks for the sake of padding their resumes, and try to pin down your requirements and the necessary skill profiles for your team. It might not make sense for an internal app to be created using React when this will split your team into frontenders and backenders with completely disjoint stacks, increased coordination, and loss of agile job rotation.
This recent hype piece I wrote about ReactiveRails is at around 30k views and counting. In the right hands (experienced full-stack dev like yours truly) the productivity gains are off the charts.
This stack is very appealing though I'm not sure it's quite fully ready for primary use. Has StimulusReflex-CableReady figured out pushing new data to the client outside of the request cycle? Last I looked there was no way to trigger a client page update from a background job for example.
This is exactly what CableReady was built for... to update the DOM from any Ruby process (in the request cycle or not). It was designed from the beginning to update the DOM from out-of-band non-request based workflows like background jobs.
I have been using Ruby on Rails with Vue.js + Inertia.js and honestly my productivity has skyrocketed. I don't worry about writing APIs for the frontend, no need to maintain separate router on frontend and no state management required.
Using this stack I'm able to build highly reactive apps with minimal effort.
I saw this SPA performance issue when interacting with the Azure and GCP consoles. The AWS console, on the other hand, is mostly non-SPA and responds super fast. It didn't feel jarring and loaded very quickly. It had a questionable UI, but I feel that AWS might have noticed that an SPA would ruin the UX of navigating consoles. It is a small thing, but a big reason why I prefer developing with AWS. GCP comes a close second in terms of performance for an SPA. Azure just shits the bed and seems like a pile of mess. What has your experience been like?
The missing link, IMHO, is the lack of client-side SQL and a sync mechanism. IndexedDB is okay, but nobody is going to use it server-side. So you kind of always have to end up writing things twice.
Meteor.js, maybe. I'm not sure of its focus nowadays, but back when I played around with it, you basically had a MongoDB that you could query and mutate on the client; everything was then synced via a pub/sub system with the server.
> I think the idea is that IndexedDB acts as a more lower level store that you can build higher level abstractions over, for example PouchDB.
AFAIK, the problem was that vendors couldn't agree on which version of SQL/Sqlite they would have to support and nobody wanted to write a SQL spec. MS wanted to use SQL Compact, but it's mainly Mozilla's fault if the spec was dropped. The same Mozilla that dragged its feet for years when it comes to implementing some aspects of web components...
But it was a terrible decision IMHO, because IndexedDB doesn't do what a relational database does, and it considerably hurt the development of complex mobile web apps; and now Safari on iOS has, AFAIK, removed support for WebSQL. WebSQL was a fantastic tool for web apps that could be entirely cached on a mobile device (SQL can do a lot).
There is no realistic replacement. Even using SQLite compiled to WASM has a lot of issues (mainly performance and data persistence).
To this day I don't know of a single efficient and performant RDBMS equivalent to SQLite built on top of IndexedDB. Mozilla certainly didn't build one.
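To the earlier point about IndexedDB being a lower-level store you build abstractions over: even a minimal promise-based key-value wrapper takes this much ceremony (a sketch; the database and store names are arbitrary):

    // A tiny promise-based key-value layer over raw IndexedDB, the kind of
    // thing libraries like PouchDB build much further on.
    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("app-cache", 1);
        req.onupgradeneeded = () => req.result.createObjectStore("kv");
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    export async function put(key: string, value: unknown): Promise<void> {
      const db = await openDb();
      return new Promise((resolve, reject) => {
        const tx = db.transaction("kv", "readwrite");
        tx.objectStore("kv").put(value, key); // out-of-line keys: value first, then key
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }

    export async function get<T>(key: string): Promise<T | undefined> {
      const db = await openDb();
      return new Promise((resolve, reject) => {
        const req = db.transaction("kv").objectStore("kv").get(key);
        req.onsuccess = () => resolve(req.result as T | undefined);
        req.onerror = () => reject(req.error);
      });
    }

And that's still just key-value; queries and indexes are extra layers on top, which is part of what WebSQL used to provide directly.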
The article mentions using frameworks with React to get features like full stack data loading. If you're interested in doing this without going all-in on a framework, you should check out React Frontload [0]
It's still opinionated, but only on the data loading part! It's just a small library that will slot into any stack. I'm the author.
It's worth pointing out MarkoJS: it was basically made for this case, for eBay's eCommerce platform. It predates the Next.js frameworks of the world but offers a similar experience, except with an MPA-first mentality, a seamless isomorphic experience, and fully working partial hydration for years.
There is nothing wrong with reloading a page. Hackernews is doing it.
On the other hand a page not working because webdevs deemed your browser outdated really is problematic.
This is something I really struggle to understand about the modern web dev mentality. What is exactly so bad about reloading a page? You get visual cues that the browser is doing work, and the page you requested is being downloaded. On the other hand, some SPAs literally provide zero cue when a route has been changed, which is just awful UX to me.
Agreed. For the most part I don't really care, the target audience of those services is probably people who put up with that kind of bullshit. But there are cases when functionality and compatibility should trump everything e.g. banking software or e-governance.
I'm kinda surprised that apparently we've moved on from SPAs.
I've recently started a new project with preact and an express backend, and I'm in love with the smoothness of the dev experience.
Seems like all my problems have been encountered already, there's dependencies and heuristics for everything. I'm just really productive and my app looks great.
Surely, all of that wouldn't be the case, had I tried some random new technology...
I'd say this is perhaps an unknown unknown situation. I've had a wonderful time working with Preact and Express, but I wouldn't go back to it after becoming comfortable with the LiveView paradigm.
Having mostly used Angular and React with APIs, I recently wrote a small app with server-side templating and a very small amount of jquery. It was an absolute joy. Especially with webpack it feels like the front-end has become needlessly complex and I'm not sure it has improved quality for users.
I've seen a lot of SPA vs. "JavaScript sprinkles" arguments lately and I think it is a false dichotomy. You can add JavaScript interactions to a page, to existing HTML, (take a look at Vue for example) without committing to JavaScript taking over the entire page.
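A concrete example of the sprinkles approach with Vue 2: the server renders the page as usual and Vue only takes over one widget inside it (the element id, endpoint and fields here are made up):

    // The server-rendered page already contains, say:
    //   <div id="vote-widget">
    //     <button @click="vote">Vote</button>
    //     <span>{{ count }}</span>
    //   </div>
    // Vue mounts onto just that element; the rest of the page stays plain HTML.
    import Vue from "vue";

    new Vue({
      el: "#vote-widget",
      data: { count: 0 },
      methods: {
        async vote() {
          const res = await fetch("/api/vote", { method: "POST" });
          this.count = (await res.json()).count;
        },
      },
    });

No router, no store; you could even drop the import and load Vue from a script tag if you want true zero-build sprinkles.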
Our app (https://domestica.app/login) is a pretty good (IMO) example of a blazing fast SPA. It uses Mithril and copious amounts of chunking to make the bundle size extremely small.
On my browser (Firefox, on OSX) it loads /loading then /login. When I go back, that takes me back to /loading, which forwards me to /login again immediately.
Going back requires holding down the back button so I can skip that history record. Illustrated: https://imgur.com/a/zFaGw4a
(Quite a few sites screw this up. It's very annoying.)
This is great, thank you for sharing. We primarily test in Chrome which doesn't seem to display this for whatever reason. Should be an easy fix to remove the history for the loading page.
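For reference, the usual fix is to replace the interstitial's history entry rather than push a new one; roughly (a sketch, router method names vary):

    // When bouncing from /loading to /login, replace the current history
    // entry so Back skips the interstitial instead of looping through it.
    history.replaceState(history.state, "", "/login");

    // Most SPA routers expose the same idea as an option, along the lines of:
    //   router.replace("/login");              // vue-router style
    //   navigate("/login", { replace: true }); // react-router v6 style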
This is one of the worst SPAs I've ever seen, simply because it causes an infinite redirect loop when clicking the back button and/or tampers with the browser history to make using the back button impossible.
Just tried the app and immediately found the typical SPA bugs and quirks:
- tried to add a recipe and nothing happened for so long (3-5 seconds) that I thought it was broken. Then suddenly the add recipe form finally appeared. Blazing fast! On your dev laptop maybe.
- click payee, nothing happens. URL changes though. God I hate SPAs
- adding an item to your shopping list causes an 'item added' popup to appear. Over the add item button, so you can't add another item without dismissing the popup. It's probably only an issue on mobile, but this popup is only necessary because it's a SPA
- I'm on the 'create shopping category page'. Try and click the 3 dot thing in the top right. The new menu appears underneath the body of the page...
- on the recipe page, you can click the M? button and it displays a link about 'markdown'. Click the link, it takes you to a blank page, which then redirects back to the homepage after a second or two. Bye-bye all your existing input
- this is more of a bug, but if you try and use markdown in the directions text box for recipes, you can't. Start with a # and all your input just disappears (Firefox mobile)
A few other comments I'd add:
- Get a designer, or maybe buy an off-the-shelf design. That green is, err, not nice. The design looks like a developer made it. I'm not great either, but you can get a decent design, based on Bootstrap or whatever, for like $50 and just adapt it. There are even free designs like AdminLTE
- why are the top corners of the input boxes rounded, but the bottom corners square?
- Every time I go to the login screen, the logo pops in later than the rest of the screen. And then the Google login pops in even later. I'm on my mobile so can't check, but that screams bad browser cache settings on your images/static assets
- why are the + buttons slightly elliptical instead of round? Looks odd
I generally like the idea behind the app, though I won't use it as it's all in $s. I wonder how much traction you're going to get, though, as it's going to be really intensive setting it all up
To add: clicking + on the wiki page does nothing and inexplicably calls out to the https://domestica.app/api/v1/budget/payees endpoint, resulting in an HTTP error - whilst the UI stays silent and displays nothing.
Thank you for the feedback! We'll take a look at these, especially the recipe form. For the shopping list alert issue, you should be able to dismiss the alert but I see where you're coming from. Regarding the colors, you can set whatever color scheme you want under your account settings. The images have always given us trouble, they should be cached but the painting seems like it occurs after load for some reason (I think it has to do with auto height/width?). It's annoying, and will be prioritized soon.
Could you comment on how a cursory use of your 'blazing fast SPA' apparently has multiple serious issues, and whether this is enough to warrant a paradigm shift in your belief in how awesome SPAs are? All this almost feels like a parody an anti-SPA advocate would write.
A certain fortune 500 finance company did their entire website this way using java 6 EE and serving up the data layer with XML -> Hibernate -> DB2 mainframe queries about 10 years ago. Probably still vestiges of that now.
(Jokes aside, I tried this years ago hoping for it to be a magic bullet. Gzipping meant that the bandwidth savings weren’t anywhere near what I hoped and XSLT is a PITA to deal with)
Micro front-ends and packaged business capabilities. Build encapsulated domains (views, events, apis, data store) with their own dependencies (not shared with umbrella app) and pluggable with meta data.
Micro front-ends is the concept of breaking apart a front-end monolith. SPA's are usually built in one framework like Angular or React. Over time, this can get very convoluted and turn into a bad monolith (there are good ones).
The second part is that instead of building screens, you design features (like invoices or customers or catalog) and develop the set of views that encapsulate the targeted domain. This is all built in a separate project with a well-defined set of meta data. You deploy the feature-set into your integrated environment and the managing app is constantly on the lookout for new features-sets.
Then in some admin portion of your umbrella app, you provide access to API's to other feature-sets, and to users.
One of the key elements is to keep front-end dependencies separate. So the manager app might be Angular 9, the customer feature-set might be in Angular 10 with its own dependencies. Nothing is shared between the manager app and each feature-set and nothing is shared between feature-sets.
This provides a front-end that is malleable and reduces dependency creep.
This can also benefit from Domain-Driven Design, where you've segregated your domains and their API's, Events, and Data Storage.
So you should be able to "publish" a feature-set without any dependency on a central database, API mesh, or Event manager.
The key is to define your boundaries well and that is no small thing.
Based on this comment you do kind of sound like one. Could you elaborate on what you think 'Domain Driven Design' is, and what you think 'boundaries' are in the context of development? It would help dispel the impression I get that you're a clueless CTO who knows enough to be dangerous.
(apologies for being a bit rude perhaps; consider it a comment slightly in bad faith, but very curious to hear you prove me wrong)
DDD is a set of principles where you have conversations with your business partners, model scenarios, determine a ubiquitous language, and build software that mirrors those models and conversations. It means determining your bounded contexts, their relationships to other bounded contexts, and where sub-domains belong. Though this can be confusing, since you could have an "order" domain in several bounded contexts with different purposes. It also tends to move away from traditional OO modeling, since we're pulling things apart rather than abstracting them, in order to decouple. I took Eric Evans' 5-day class two years ago, but I've been professionally utilizing DDD for about five years, including helping build Accenture's new performance management system.
I think the problem is you didn't concretely explain what your method entails and how it is any different from doing a plain HTML website or a JavaScript one. Technically, there is a server and a client, so what exactly do you do on the server and the client, and how does it differ from other methods?
It's like someone asking you how AJAX works and you start discussing functional programming. It doesn't explain what AJAX is or how it differs from plain server-side rendering.
Thanks for the in-good-faith response! I'd say that covers DDD pretty well.
I do still feel that the link between that and your suggestions in the initial comment are still tenuous and vague 'management-speak', but still, thanks for the response :). Obviously I don't know the exact details or how you're running things, so the best I can do is poke at it (in good faith).
You're not the only one. Sounds a lot like a former CTO of mine who couldn't write a line of code to save his life, but made sure to randomly toss out buzzwords and make sure you were aware of what he recently read in what I assume was an edition of Programming for Dummies.
My company talked about using micro front-ends for a while. Basically there was one microservice that actually served the web page itself, and all of the controls and widgets on the web page were pulled from other microservices via API calls, each handling their own internal state.
After 50 years of software development people still a) are surprised that a framework/approach fails to simplify the software development and b) suggest a new framework/approach.
Maybe this is more suited to Stack Overflow, but how do you load with turbolinks when the two pages use different assets? Let's say each page has different js script tags.
There are very few SPAs that are tolerable. Even Google can't make a good one, their admin console for GSuite is terrible and has latency out of this world.
The Phoenix LiveView pattern is one that I thought was the near-ideal interactive page architecture 20 years ago, long before Elixir and even before XHR was standardised. (We had other methods for AJAX and server-sent events to achieve it in those days.)
It surprises me that there are so few implementations using this pattern today.
The Meteor pattern is another good one for the user, if you like things better optimised for user-visible latency (as user interactions happen at the client first), but it hasn't ended up with much mindshare now. Perhaps because it's more complicated to use, and perhaps because JavaScript isn't ideal for this.
Svelte's precompiled diffs and minimal client-side code is another pattern I like.
For an application I worked on, I developed a combination of those three patterns. It had a nice and very efficient combination of server-side rendering and client-side immediate interaction, without needing custom JavaScript. This made it fast, user-friendly and pleasant to develop in.
Like LiveView, the client would send user events to the server, which would update affected component model state (which could be shared among different clients too). Server would re-render any visual components which used the model data (using automatic dependency tracking from previous renders), and then send an async event to each affected client containing an efficient diff of all rendered changes. If there was much network latency or too much queued, multiple updates would be batched together, merging their diffs efficiently. The result was the update could be worst-case the size of a page replacement for any number of updates, and at best the size of a few DOM edits in place.
But, a little bit like Meteor (but without custom JS), the server could also push "client event-handling hints" in the rendered model to the client. If an event pattern-matched one of these hints, the update could be applied immediately on the client as well as the event sent to the server. The server knew what update would have been applied, and could account for it in the diff. So if the server's new rendering matched the client's immediate rendering, the diff was empty and didn't need to be sent! But if the server got a different result, for example the event caused an operation which failed, the diff caused the client to receive an appropriate correction.
The diffs were calculated efficiently too, not by rendering components and comparing trees, but by precalculating diff trails from state change patterns where those could be recognised. For example when one model value changed, if the diff would always be just a string change in the DOM, it already knew and emitted that diff directly. This meant diff calculation time was as fast as possible in simple cases (event to output in O(1)), and reduced to a reasonable diff time in the most complex cases (such as merged diffs).
Keeping it always perfectly in sync and always efficient, throughout network outages, reordered requests, lost events etc was quite technical, but it wasn't application specific. Only the framework needed to deal with the technicality.
This was lovely, because it always stayed in sync, it was always efficient for any combination of initial loads, reloads, and sync events, both on slow and fast networks, it worked without JavaScript if needed, had client-side immediate updates for simple logic of a general-purpose nature specified by the server-side, could share model state among different clients and pages, and still didn't need any custom JavaScript for the application.
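To illustrate the general shape of that pattern (this is only a browser-side sketch in TypeScript, not the framework described above; the wire format and names are invented):

    // Client half of a server-driven diff pattern, in miniature: forward user
    // events to the server, apply whatever DOM patches it pushes back.
    type Patch = { selector: string; op: "text" | "html"; value: string };

    const socket = new WebSocket("wss://example.test/live");

    socket.onmessage = (msg: MessageEvent<string>) => {
      const patches: Patch[] = JSON.parse(msg.data);
      for (const p of patches) {
        const el = document.querySelector(p.selector);
        if (!el) continue;
        if (p.op === "text") el.textContent = p.value;
        else el.innerHTML = p.value;
      }
    };

    // Every annotated user event goes to the server, which re-renders the
    // affected components and answers with a (possibly empty) diff.
    document.addEventListener("click", (e) => {
      const target = (e.target as HTMLElement).closest("[data-event]");
      if (!target) return;
      socket.send(JSON.stringify({ event: target.getAttribute("data-event") }));
    });

The interesting parts described above - batched and merged diffs, client-side hints with server correction, surviving outages and reordering - all sit behind that simple apply-patches loop.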
As the server side was written in async coroutines and Perl, it didn't seem worth the effort of packaging and publishing that implementation as Perl's popularity declined (and indeed coroutines seem disfavoured there as well, even though they work brilliantly).
I still think that's a lovely interactive webpage model, possibly the best. Nicer than client-side SPAs and nicer than server-side rendering. It's pleasant to develop for, pleasant as a user as well, very fast in many different scenarios including poor network connectivity, and great for sharing live state that appears in multiple pages or documents.
I'd like to implement that again in a modern language, but I haven't had any compelling reason to work on a webapp lately.
This was super-interesting for me to read. I developed in Meteor for about two years. I started out a major fan, but was quite disappointed in the framework by the end. I'm now excited about Elixir/Phoenix/LiveView/BEAM and am about to spend a few months getting myself up to speed with that whole ecosystem.
Same boat, I got into Meteor for a while years ago (even built a side project that ran in production and made real money for a couple of years).
For the last year I have been writing Elixir/Phoenix full-time at work on a greenfield project. We have gone "all in" on LiveView and are using it in a significant portion of our system. I have been very impressed with it so far. When pairing LiveView with Phoenix PubSub, one can achieve Meteor-like 'reactivity' with remarkable ease.
The basic issue was that the coroutine package was segfaulty and maintained by a man so lovely to interact with that the node.js core team eventually invented libuv (to replace his ev library) primarily so they never had to talk to him again.
Perl now has http://p3rl.org/Future::AsyncAwait which is (like any async/await system) a bit more restricted than a full coroutine but works beautifully.
If you threw a github repo up somewhere I can think of a few people who might be interested in trying to update it to more recent+reliable perl tech.
But I've found Coro highly reliable and effective, including under stressed and complex loads, for about 6 years. Never found it segfaulty. It also performed well, and the API design always impressed me with its cleanliness and good documentation.
Also I make extensive use of coroutine-local variables (like `thread_local` in C11) which operate at the same speed as normal `my` lexicals. I.e. as fast as possible. It's just a good way that ordinary non-async modules can be trivially made async-safe, without performance loss, and without becoming tied to async or not-async (they become equally flexible for both uses, which is valuable). (The module to make them run that fast isn't public, but the API it relies on to work is part of Coro. `Future::AsyncAwait` doesn't provide the necessary API at the moment.)
> Perl now has http://p3rl.org/Future::AsyncAwait which is (like any async/await system) a bit more restricted than a full coroutine but works beautifully
I'm not convinced. Future::AsyncAwait is a different paradigm, and doesn't do the things I find Coro useful for.
Async/await has the "function colour" problem - every module that might logically "block" has to be re-engineered with a new, async version of the same module. Same applies to any function which might call another to any depth in the call graph which then "blocks". People have argued that the "function colour" problem with async/await isn't really a problem. I'd argue that it's fine if you've designed around it from the start and you don't intend to use code in both async and non-async environments.
That re-engineering can be done, but it's a massive change to existing modules, and then you end up with something that can only be used in an async/await program.
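In async/await terms the colour split looks like this (TypeScript purely as an illustration, nothing Perl-specific): once anything deep in the chain awaits, every caller up the chain has to change colour too.

    import { readFileSync } from "fs";
    import { readFile } from "fs/promises";

    interface Config { [key: string]: unknown }

    // "Blue" version: callable from anywhere, but blocks while it reads.
    function loadConfigSync(path: string): Config {
      return JSON.parse(readFileSync(path, "utf8"));
    }

    // "Red" version: same logic, different colour - and every function that
    // calls it, directly or transitively, must itself become async.
    async function loadConfig(path: string): Promise<Config> {
      return JSON.parse(await readFile(path, "utf8"));
    }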
Whereas Coro is non-invasive. I use the same modules in an async server or in a classic process-forking server (without loading Coro and no concerns about async issues), as well as standalone processes (scripts). I choose appropriately depending on the service characteristics (e.g. async is poorly suited to some kinds of loads such as computationally heavy request processing). Also for some things it's good to have the confidence that comes from not loading any fancy stuff, e.g. in security code and standalone scripts.
For that dual-use functionality, as far as I can tell with Future::AsyncAwait I'd need to write two separate versions of most things.
It turns out a lot of things might call something that, via a deep and opaque call chain, might block somewhere. E.g. any code that loads a config file when it's first run. Anything that uses a template, and calls out to a template compiler. Anything that compiles and loads code lazily. Anything that does something as trivial as stat() on a file.
In practice for my web stuff that means nearly everything (as you can imagine from my GP comment). I also make heavy use of JSX-ish templates instead of custom Perl code. It's just cleaner, and they can be targeted to other languages, even compiled to C for speed. Template components end up inheriting the same async properties as the server they are running in. A page may reference subresources, microservice calls, data fetches or file reads. All fetched or calculated in parallel (calling other services or using multiple cores if necessary), which is good for latency.
There's also a software engineering impact. As soon as you get deep into complex logic like filesystems, and especially with memory allocation inside those, which triggers nested filesystem access, it's just too much work to engineer everything in an async state-machine way. That's why the Linux kernel was unable to do correct asynchronous I/O through the AIO functions for many years, and that's why everything ended up choosing threads instead of AIO in userspace too: the libuv you mentioned is a great example!
> update it to more recent+reliable perl tech
I think "reliable" is misleading. Coro is highly reliable (for me anyway), while Future::AsyncAwait has documented gotchas.
For "more recent", the Perl community has picked its path. In my view it's a less useful path, and I'm disappointed Perl core didn't choose to implement a coroutine mechanism. But it didn't.
With Coro all the modules just work without anything Coro-specific in them. (Just a few coro-local-data annotations, which are ignored when Coro isn't loaded.) It's neat!
As I use the same code in async and non-async contexts, I really don't see me using Future::AsyncAwait and writing two versions of each module.
> If you threw a github repo up somewhere I can think of a few people who might be interested
I never did get that framework to a point where I'm happy to publish, because it's entangled with commercial and private code. Cleaning up the separation seemed like a goal once, but I think I'd be wasting my time now. It's a great shame when you have a big personal library of Really Useful Modules, but sometimes you just have to start again. The ideas live on!
> But I've found Coro highly reliable and effective, including under stressed and complex loads, for about 6 years.
Had that been a more universal experience things might've been different.
I did once try and see if I could get a stripped down version to try and push into core but at the point where I'd deleted 90% of the code and the entire test suite still passed I realised that getting a reliable stripped down version was going to be a problem.
> Also I make extensive use of coroutine-local variables
I've been using Syntax::Keyword::Dynamically where I need that.
> For that dual-use functionality, as far as I can tell with Future::AsyncAwait I'd need to write two separate versions of most things.
I tend to write async by default and then for blocking code I call a blocking version of the API.
> I think "reliable" is misleading. Coro is highly reliable (for me anyway), while Future::AsyncAwait has documented gotchas.
I'll take documented gotchas over undocumented weird shit and an author who refuses to use a bugtracker and has a track record of deleting features if he doesn't like how people are using them.
> With Coro all the modules just work without anything Coro-specific in them.
Or at least they used to. The author no longer supports the past five years or so of perl releases so there's no combination of supported perl and supported Coro that exists anymore :(
Well, there is also React.NET, which can actually use Razor pages to include React components. Pretty neat. There is also Blazor, which works over WebSockets and pushes HTML to clients.
I think if Chrome's changes to the omnibox really click, the need for SPAs in general will dwindle. You cannot see the URL, so if it is ugly, who cares? And if the page refreshes fast enough, most end users will not be able to tell whether you are on an SPA or not.
SPA has nothing to do with URL. If your SPA uses a routing/history solution so deeplinks and history and the back button work properly (and it should), it's exactly, in terms of URLs, like what you'd naively expect from a non-SPA app.
What is displayed in the URL has nothing to do with SPAs. Go to Twitter. It is an SPA. The URL changes on every click, just like basically all modern SPAs.
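Concretely, the URL handling is just the History API underneath; a minimal sketch (render() is a stand-in for whatever view layer you use):

    // Why SPA URLs can look completely normal: navigation pushes a real
    // history entry, and back/forward re-render from the current path.
    function render(path: string): void {
      document.body.dataset.route = path; // stand-in for the real view layer
    }

    function navigate(path: string): void {
      history.pushState({}, "", path); // the address bar now shows /status/123 etc.
      render(path);
    }

    window.addEventListener("popstate", () => render(location.pathname));
    render(location.pathname); // initial route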
I love working in the SAFE stack since I'm a fan of F#. Being able to develop for the web using one programming language that gets transpiled to the needed JavaScript and uses the model view update pattern with elmish makes so much sense to me.
Granted, I am allergic to developing any JavaScript myself because of how many anti JavaScript comments I have read on hacker news ;)