> Browsers change. Relying on browser-specific behavior means you’re relying on that one browser at that one point in time. Code to the standard, and test everywhere.
I wish this were listed at the top of the list, in the middle, and at the end. It’s super annoying when a site or application isn’t “supported” simply because it was never tested in anything other than Chrome.
I know it’s not always easy, and there’s a fair amount of nuance (which the Chrome Compatibility post[0] touches on), but the web platform should be developed openly, with browser vendors working together to stay compatible with each other rather than introducing developer or user inconvenience. Otherwise you end up with a single vendor effectively owning the web, and a fragmented platform.
Totally understand that, I don't like it, but I understand it.
My view (looking at the picture a tad narrowly, since there are so many variables here) is that, when it comes strictly to rendering content, all browsers should behave the same. HTML/CSS/JS in one browser should produce identical displayed results and behavior in another. I'm all for the implementations being different, resulting in performance differences, differing external functionality and features, etc. But the core use case, displaying content, should not be broken just because you aren't using the browser the developer of the site/app was using.
It’s good advice (like most of the list!) but I’m not sure expecting a beginner[1] to “code to the standard” is realistic. The standard (i.e. the spec) isn’t beginner-friendly. You learn by trying things out and seeing what happens.
1: Assuming they meant for the advice to apply to beginners rather than young developers - what age they are seems irrelevant.
Okay, this was long ago, but we had to roll out a new top-level website. I decided to be a complete hardass about it. Everything would validate, both for HTML and CSS. I would follow ADA standards as well as I could understand them. I dutifully tried things out in lynx. I even made some print style sheets (a new concept at the time). However, I had almost nothing to test on.
I got a lot of flak for being slow, we should hurry this up, and so on. And yet the emails would come in later from the higher ups, "Wow, this works on my Blackberry!" I had no access to such a device, but plodding adherence to various guidelines, as dull as they were, saved my bacon there. Even the disability folks I could find seemed pleased.
It isn't "move fast and break things" but you can reap a few benefits out of it.
It's so sad that it became "acceptable" to not test in Firefox (as estimated by the number of sites I randomly encounter that don't work in FF but do in Chrome) right around the time that Firefox Quantum happened and Firefox became good again :(
Agreed -- but FWIW a fair number of times I've encountered problems using Chrome which disappeared in FF. As a veteran of the original browser wars (circa 1998-2000) I'm grateful standards have come as far as they have -- and to support Moz at the heart of the open web.
> because Firefox on Linux was never actually that bad.
On Linux, I use Firefox primarily for browsing, but for development I use Chromium. The reason for that is because the JS debugger in Firefox is pretty damn buggy. Some things I've encountered (though they don't happen every time):
- On a breakpoint, type an expression in the console, hit Enter, and it just hangs there without giving you the result. The console will be unresponsive until you unpause.
- On a breakpoint in some part of the callstack, type an expression in the console and see that variables that should be in scope at that point in the callstack are not in scope for the console.
- Go to a different spot in the callstack and see that the place that Firefox tells you you're at is not correct. It might be off by a few lines.
This might be a very good reason why developers prefer to develop for Chromium/Chrome. Not because they prefer it for browsing or for its performance, but because its development tools actually work.
I made the switch recently, about a month ago, and I've been discovering little things that are just more pleasant when working in Chromium. For example, "Copy as cURL" output is formatted more neatly: Firefox puts all the curl options on a single line, but Chromium breaks the options across multiple lines.
I left Firefox in 2011 because it was unusably slow on my macbook. My recent experience has been that it is much faster and I am considering switching back.
At work we use CentOS, which only ships the ESR builds of Firefox, as that fits Red Hat's intentions. I started here a few months ago and found out that the studio was using Chrome as its default browser. Turns out they had made the switch during the 57 ESR branch and never bothered to try the browser again afterwards, completely unaware of the Quantum project. Since we were having Chrome issues (which were admittedly our fault) I suggested trying FF again (CentOS now ships with 68 ESR), as I couldn't replicate the issue there. A colleague tried FF 77 on a personal laptop and was caught completely off guard by the sheer performance difference. I'm excited for 78 ESR shipping on the 30th of this month; it'll be a fun time getting all of the improvements from the past year!
It's the snappiest browser at the moment and I love its render as text function and automatic ad blocking on mobile. I've made a permanent switch on mobile, but I tend to miss the Chrome dev tools whenever I use it on a computer.
It's also wonderful that the browser is truly privacy conscious.
Tried it again last week. Still slow, I'm afraid, despite Mozilla literally inventing a new kind of programming language to speed it up!
A lot of it has to do with design, though. I.e. when opening a new window, Chrome draws the window instantly and then fills in the UI, while Firefox waits until the window is completely built before displaying it. Even though they become usable at roughly the same time, Chrome responds instantly to the command while Firefox exhibits zero sign of life for hundreds of milliseconds.
Definitely not true on my system. There is a flash of some reorganization of the window's contents that Chromium doesn't show, but both windows open instantly.
And also: Come on, that wouldn't make a browser slow. Users open new windows once when starting the browser. The rest is about how fast they render pages, react to JS workloads, and maybe how fast they open tabs. FF is more than on par in all of this.
I open new windows constantly. Opening the actual browser though is a whole other story, where the difference is even more striking.
Curious about what you said regarding FF reorganizing the window contents; in my experience Chrome does that, not FF.
It's true JS performance is quite similar. But I've noticed (measured) differences in page load speed as well -- including latency again, the time until it actually responds to the enter key and initiates a network request. (In the network panel this is reported as Stalled.)
Double-check that your initial privacy excitement during setup isn't the culprit. The couple of times that happened to me (Paypal being one) lowering my "Browser Privacy" setting from "strict" or "custom" back to "standard" fixed the issue. I would be curious to see examples of broken sites if you have em.
I feel like I’m using IE in 1995 hearing this comment, but it’s valid. I feel like time is repeating itself. This is one of those tips that never really stop being relevant.
The modern equivalent is tuning uBlock Origin, uMatrix, Privacy Badger, or similar products, on top of correctly configuring your browser itself.
Some businesses are willing to (and sometimes should) take a 30% cut in traffic in order to ship a product more quickly. It's not a company/developer's fault that there are subtle differences between browsers.
Certainly not their fault. What's wrong is that it's possible to get the majority of traffic by developing for a single browser. They should not be sustainable doing that. It should be the case that in order to have sustainable levels of traffic they should develop for the standard. It should be the case that the cut is not 30% but rather 70% or more when compared to developing according to the standard.
At work we're forced to use a service for exchanging files where uploading a file is only possible in Chrome, but downloading a file is sometimes only possible in Firefox. Pinnacle of UX.
Safari on iOS is a bit special. Some features are missing compared to the Mac version. You also need quite a bit of Safari-on-iOS-specific code if you want to build a web app. But this is a bit broken, since Apple wants you to publish to the App Store rather than release web apps anymore.
You also need a Mac to debug Safari on iOS. I hate Safari.
You don't need a Mac to debug Safari on iOS. There is remotedebug-ios-webkit-adapter. You need a Mac to debug Safari on Mac. Maybe that's the reason why some people only support Safari on iOS.
Special is right. It's full of non-standard behavior I've had the pain of fixing because you need to target iOS, e.g. needing Safari-specific attributes to capture scrolling in a modal.
Yes but a number of features are disabled or buggy on third party browsers that use WKWebView or UIWebView (i.e. basically all of them). A particularly nasty one I ran into once was navigator.sendBeacon being completely broken with HTTPS urls: https://stackoverflow.com/questions/51844586/navigator-sendb...
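For anyone hitting the same thing, here is a minimal sketch of one possible workaround, not necessarily what that project did: feature-check sendBeacon and fall back to fetch with keepalive, which similarly survives page unload. The endpoint URL and payload are made up for illustration.

    // Hedged sketch: fall back to fetch({ keepalive: true }) when sendBeacon is
    // missing, throws, or refuses the payload (as on some embedded WebViews).
    function reportEvent(url: string, payload: object): void {
      const body = JSON.stringify(payload);
      let queued = false;
      try {
        if (typeof navigator.sendBeacon === "function") {
          queued = navigator.sendBeacon(url, body);
        }
      } catch {
        queued = false;
      }
      if (!queued) {
        // keepalive lets the request outlive the page, similar to sendBeacon
        void fetch(url, { method: "POST", body, keepalive: true });
      }
    }

    reportEvent("https://example.com/beacon", { event: "page_hidden" });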
Not entirely irrelevant, the thread is about browser support.
Also, I agree. I have a laptop with about 256 MB of working RAM, so I only use terminal apps on it. If I'm trying to look up documentation and the site doesn't work with lynx, it is not a happy day.
Looks like we are silent friends. The one thing I often get is just a message "JavaScript is not supported" and the other is wacky and long indices at the top. But, just having mywebsite.com/txt would not be a crazy idea...
Seems like a list of every web development ideal that gets upvoted on Hacker News.
Still, I think the list can pretty much be condensed down to one point:
Use whatever technology is appropriate for this site or web app.
Because a lot of developers seem to have a 'when all you have is a hammer' attitude towards web development. They learn React/Vue/Angular/whatever, then seemingly decide everything they will ever build will use that framework, regardless of whether it's the right tool for the job.
A blog doesn't need to be an SPA. A static business site that never gets updated and doesn't do anything remotely interesting doesn't need to be an SPA. Your documentation doesn't need to be an SPA.
It's like that guy who was recreating CPanel as a WordPress plugin/install. Sure, you could do things that way, but may want to rethink whether this is really the best architecture to build a server control panel in.
> They learn React/Vue/Angular/whatever, then seemingly decide everything they will ever build will use that framework, regardless of whether it's the right tool for the job.
At least where I live, small businesses contract with agencies that develop React sites for them, and suddenly these non-IT companies are owners of expensive and hard to maintain websites that they don't, and shouldn't need to, know the first thing about maintaining. No one wants to work with the projects, because they were written in, say 2015 React, back when React used X feature that isn't cool anymore, so it gets rewritten by another agency, and so on.
Meanwhile, WordPress on a managed host or a static site would suit those businesses just fine. Then, they wouldn't have trouble hiring to maintain it, or could even maintain it themselves.
If you work for an agency or you're freelancing, please stop burdening small businesses like this.
Serious question: if the business just needs a static site, why are they paying tons of money for bespoke software engineering? Just use one of the millions of good-enough and free site builders out there.
They don't know any better, and when they turn to the experts, they get upsold. Some business owners think they're the exception to the rule and seek out agencies for bespoke work, but again, they don't know any better.
But, yes, I've seen countless businesses for which something like Wix is perfect.
1. Site builders are still hard to use for most marketing staff, if the site should look professional and you need a few small integrations.
2. They don't mind paying for a solution where someone else does the work.
I guess it's hard for marketing departments to pick an appropriate agency and to know the consequences of the technical choices the agency makes. They generally evaluate based on design and then have a few requirements like reasonable performance and support for a handful of features in the backlog.
> Meanwhile, WordPress on a managed host or a static site would suit those businesses just fine. Then, they wouldn't have trouble hiring to maintain it, or could even maintain it themselves.
> If you work for an agency or you're freelancing, please stop burdening small businesses like this.
There aren't many experiences more soul-destroying as a freelance developer than maintaining a neglected WordPress installation that was built by a maverick agency that took the money and disappeared.
That's why I suggested a managed WordPress host, so that they can outsource maintenance of their instance to professionals.
These projects get completely rewritten because they aren't fun to work with for most developers. What is being ignored is that there is an immense pool of labor that specializes in WordPress et al, and they're relatively cheap to hire. Even if these companies don't have dedicated staff to run their sites, many people are capable of using WordPress as long as its maintenance is outsourced to a managed host.
So much this. Bad plugins and outdated versions, cheap hosting with outdated PHP versions, cheap development with bad code quality. WordPress imho is not made to last without regular maintenance.
Honestly, 90% of the work out there seems to be maintaining some heap of shit, whether it's built in Python or WordPress or whatever. All the better-paid work, at least. I have a theory that if a company has been around long enough to make money, the software will be a mess.
I can't count the times I've started to write some basic internal tool with React only to rm -rf everything 1 hour later and replace it with 20 lines of jQuery/vanilla js
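For illustration, a rough sketch of what that "20 lines of vanilla JS" version of a small internal tool often looks like (the /api/items endpoint and element IDs are invented here):

    // Hypothetical internal tool: fetch a list once, then filter it as you type.
    const list = document.querySelector<HTMLUListElement>("#results")!;
    const search = document.querySelector<HTMLInputElement>("#search")!;

    async function main(): Promise<void> {
      const items: string[] = await (await fetch("/api/items")).json();
      const render = (filter: string) => {
        list.innerHTML = items
          .filter((item) => item.toLowerCase().includes(filter.toLowerCase()))
          .map((item) => `<li>${item}</li>`)
          .join("");
      };
      search.addEventListener("input", () => render(search.value));
      render("");
    }

    void main();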
I think there's some validity to "you're the most productive with whichever tools you're most comfortable/familiar with", but I think people don't realize the levels of abstraction that come with web development.
( My learning path was: Angular 2 -> 7 + sucking at CSS, Angular + Tachyons (learning how to style), -> Angular + Custom CSS for everything, to Vue + Custom css -> Plain HTML + Plain CSS. )
It's like the more you learn, the more you understand the boilerplate of libs/frameworks, but also understand what these large tools accomplish for you, and then you learn to appreciate how powerful & simple the "vanilla" web can be if you open your mind to it.
I can't count the number of times I had to write an HTML template for a small component with just jQuery, and it was horrible every single time. Something like htm+preact [0] would clearly do the job much better and only adds 5kB of JavaScript to your site instead of leaving an unmaintainable mess.
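Roughly what that looks like, as a hedged sketch (the component name and data are made up): htm's tagged templates replace the string concatenation, and preact handles re-rendering.

    // Hedged sketch of the htm + preact approach: no build step required,
    // and the template is a tagged template literal rather than string glue.
    import { h, render } from "preact";
    import { useState } from "preact/hooks";
    import htm from "htm";

    const html = htm.bind(h);

    function ItemList({ items }: { items: string[] }) {
      const [filter, setFilter] = useState("");
      const visible = items.filter((item) => item.includes(filter));
      return html`
        <div>
          <input value=${filter} onInput=${(e: any) => setFilter(e.currentTarget.value)} />
          <ul>${visible.map((item) => html`<li key=${item}>${item}</li>`)}</ul>
        </div>
      `;
    }

    render(html`<${ItemList} items=${["alpha", "beta", "gamma"]} />`, document.body);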
When I started at my current company I was shocked that they were using Wordpress for 90% of client sites. Now 2yrs in, I’m thankful we are using Wordpress for these sites. It keeps things maintainable and simple. And for 90% of businesses, that’s exactly what they need.
Even as you are strawmanning our community[0], you outline well why it is such a great community ;-)
[0]: most of us are not saying browsers shouldn't be allowed to run code, only that there's no reason why anyone's static documents should need to run code on my machine.
It's HTML; it's designed to link between URLs. Whether the URL is a file on your filesystem or on the web is irrelevant. If you keep all your resources in the same set of subfolders, you can use the same HTML/images/CSS as the web and use relative links.
It's literally just a folder full of perfectly normal HTML files. You don't need Javascript or a SPA to double-click one of those and look at it in a browser.
Even better, if your operating system has full-text file indexing, it'll be able to search them normally.
Edit: incidentally, a common way for this to work is that the documentation just comes with the software itself, so it's just sitting there on your computer. If the software updates, the documentation updates.
Websites like https://devdocs.io/ do this exact thing through desktop Progressive Web Apps.
Basically you keep your markup & source as static as possible, and allow a service worker to cache those files locally. This can also be done with an SPA, but if we're talking docs, you get the benefit of only having to update a specific page when its raw HTML changes.
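As a rough sketch of that setup (cache name and file list are made up), the service worker just pre-caches the static doc pages and answers cache-first afterwards:

    // Hedged service worker sketch: pre-cache static pages at install time,
    // then serve them cache-first, falling back to the network.
    const CACHE = "docs-v1";
    const PAGES = ["/", "/docs/index.html", "/docs/api.html", "/docs/style.css"];

    self.addEventListener("install", (event: any) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PAGES)));
    });

    self.addEventListener("fetch", (event: any) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });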
I was thinking (assuming this is one's own stuff), use rsync to download the files to avoid transferring lots of unchanged stuff. But if this is for an end user, yes, there would only be html access.
It amazes me that anyone asks this question. I have trouble seeing why you would want to put documentation in an SPA. Search results perhaps, but the rest of it?
I think it's a part of modern web developer culture. Where I work I started on a project which was already 3-4 months down the road and they were fully dedicated to react frontend and Koa on the backend. The whole stack was based on a react boilerplate the company built, as far as I can tell the idea of doing something other than JS on the server was never tabled. Likewise, the idea of rendering HTML on the server was never brought up. Everything is pure client side JS SPA even in places where it might make sense to just render things on the server or load a separate page.
Many SPA examples will try to load bits from the server and fail. And a non-SPA site can easily be provided as a download too, or use a service worker to provide offline access in the browser.
As a web developer, I frequently see these posts about how we should write (or at least transpile to) HTML because it's so much simpler, and the browsers can render it so fast, and it's the right thing to do etc etc.
But they all miss the point: writing with client-side JS-based frameworks (React, Vue, whatever) is easier, faster, and more versatile. If you have the option of writing an SPA, it's simpler and quicker to build this way, and so easy to deploy.
Sure, it renders slower. I've just signed my 25th professional client, and I have never yet had a client who even mentioned rendering speed, let alone was willing to make any trade-off for it.
Server-side rendering or writing raw HTML are over-rated.
This is like a post from an upside-down world to me.
I suspect that perhaps you've learned primarily front-end frameworks for the majority of your career (an assumption, sorry if that's not right), which will make it seem as though frontend is easy compared to all this mystery stuff you're not used to working with.
It's the same for me, but the other direction. I've always worked with server-rendered applications, so to me all that stuff is super simple and working with frontend technologies involved learning a swath of new paradigms and tooling, so it seems like the more complicated one.
I think the learning point for me is simply that you're probably not going to care about what my arguments for server-side languages are, because it's never going to map to your experience of them. Just like I don't agree with any of your general points in the second paragraph.
No, not exactly. I learnt JavaScript in the very early days, when we called it "Dynamic HTML". I mostly built very raw non-frameworked sites until a couple of years ago. I actually found learning Vue (~2 years ago) kind of difficult because of how inside out everything was, and I was still trying to picture what click handlers were being attached where, etc etc.
I was halfway through building a site when I switched to Vue. It ended up reducing the number of lines of code by 30-40%, and made the interactions much simpler to reason about.
Now that I know Vue, I think it's a vastly better way to develop than without it.
> Sure, it renders slower. I've just signed my 25th professional client, and I have never yet had a client who even mentioned rendering speed, let alone was willing to make any trade-off for it.
That's because understanding rendering speed is your job, not the client's job. What the client will be saying is things like "Why are my conversions so low?", "My site doesn't appear in search results." and "My friend's neighbour's dog-sitter told me my website sucks because it takes ages to load." It's your job to translate those comments into meaningful actions, like optimizing rendering speed.
I guess so. Fortunately, none of my clients are in those kinds of businesses.
Which is sort of a weird point. So much of the web development blogosphere assumes that the reader is working on a site that sells some product direct to the public, who are very fickle, and will abandon your site quickly. Undoubtedly that industry does exist, but I've just never been anywhere near it.
This misses the part where with an SPA I have to figure out what data and access patterns my frontend code is going to want, define an API+schema for the frontend-backend interactions, and (often) write data validation on both the frontend and the backend - all this in addition to the data definition I would have to do on the backend in any case.
If I'm able to render things from the backend, I just have to query the data and slot it into the page. No need to design an API that's flexible enough for application use but still inflexible enough to prevent security holes.
Also, if I decide later to update how the data is represented (a common activity in early-stage projects), I only have to edit one thing instead of two.
Curious what you mean by the API flexibility vs. security point. To me, the API for an SPA is just another view over the exact same data that would be accessible in the HTML output of a server render (which, indeed, is the 4th point in the article). Your access to that data is gated by a cookie with a token in it in both cases.
On the code level, it's really not all that different - the operations you perform 'directly' through function calls in a pure server render model you simply put behind another HTTP interface.
I get that this mapping work might be a bit cumbersome and take some extra work, but I'm unconvinced that it introduces some kind of security tradeoff - the client ultimately has the ability to call through to exactly the same operations, just getting the data back in a different format, right?
By security I mean pretty basic things like "client should only be able to query objects the user account has access to" and "client should only be able to perform certain kinds of transactions and mutations". On the back end that's not too tedious - just make sure I only query for what I want. But with a SPA I can't just assume the client will make only the "right" queries, otherwise I could just expose a SQL query endpoint to the client and never have to think about APIs again.
Instead I have to figure out what data access patterns my frontend will need and implement those, along with whatever visibility restrictions are required. I'm forced to do the former in the backend (as well as the frontend) because the latter can only be done in the backend.
(Aside: I realize that back in the day people did indeed use database mechanisms like DB users and stored procedures to do what I describe, and then client apps connected directly to the database. But this practice seems to have faded and isn't readily doable on the web anyway).
> (Aside: I realize that back in the day people did indeed use database mechanisms like DB users and stored procedures to do what I describe, and then client apps connected directly to the database. But this practice seems to have faded and isn't readily doable on the web anyway).
This is actually exactly what postgREST does for you. So in that case, you actually can eliminate the backend entirely and run the entire application in the frontend, with the security provided by stored procedures and row-level security. I think Firebase, which is somewhat more widely used, also enables a similar architecture.
That said, I actually agree with your preference for server-side rendered applications. The problem is that from the browser, you can't really do anything that is not SQL -- you can't, for example, call into a C library, query a legacy API, and so on. Of course, all of that can be solved by deploying more microservices, but that not only means you have to think about API design again, it also increases operational costs. Unless you need lots of fancy live-updating things, server-side rendering is still fine, and even when you do need to update dynamically, there's Blazor Server now.
> "client should only be able to query objects the user account has access to"
> "client should only be able to perform certain kinds of transactions and mutations"
The solution to these is equal for server rendered HTML containing {data} or an API just getting {data} in raw form - the request contains an access token which identifies and authorises the user - same goes for mutations on {data}.
The answer to the question 'how do you know who the user is when they request a server rendered page containing some sensitive data?' is literally the copy/paste solution to how to do authentication on your API - as in, it's basically the same code that must be implemented on the backend in either case :-)
The point is it's the same data and mutations, just a different interface over them. There is no security tradeoff of using one over the other.
> it's basically the same code that must be implemented on the backend in either case :-)
No, it's not. If I expose an API, e.g. GET /widgets, I need to ensure that there is no combination of parameters that will ever return a widget that the user should not see (e.g. because it belongs to some other department).
Whereas when I render a web page server side, I just need to write a single, correct, SQL statement that pulls in the widgets that this specific web page requires.
I respectfully disagree: it's the same code, moved around a bit of course, but literally the same logic making whatever conditional statements or database calls necessary to do permissions :-)
It's the same data, with the same auth needed. If it's accessible to anyone over the network you need auth for everyone in either model. The auth code is the same code in either model, only the interface changes.
Looking at your example to illustrate: you have user A and user B, who have permission to see different types of widgets depending on what department they belong to.
--
Scenario 1: filtering per-resource
There is a server-rendered /widgets page that lists all widgets, filtering based on what types of widgets the requesting user can see.
API implementation: /widgets endpoint that returns a list of all widgets for the client to render. It must filter based on what type of widgets the requesting user can see, so that only widgets they can see are included in the list. Same logic as above.
--
Scenario 2: grouping resources and authenticating at entry
There are different server rendered pages for each department, listing all widgets for that department: /widgets/foo-dept, /widgets/bar-dept. You must check the permissions on the requesting user before rendering the page, to see if they have access. If user A can't see foo widgets, you give them some kind of permissions error when they request the /widgets/foo-dept page.
API implementation: Add a dept param to the /widgets endpoint that lists only widgets in the specified department. If dept param is populated, check that the requesting user has permissions to see widgets in that department. If they do, proceed, if not, throw a permissions error and deal with it on the client to produce the exact same UI as in the server rendered example. Same logic as above.
--
If you have resources accessible over the network, and you do auth on them based on user attributes, you have to do that auth either way, regardless of if you're server rendering the data into HTML pages or returning it as JSON or whatever else and rendering the HTML on the client.
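To make the "same logic, different interface" point concrete, here is a minimal sketch of Scenario 1, with Express assumed and the session lookup and data stubbed out: one permission filter backs both the server-rendered page and the JSON endpoint.

    // Hedged sketch: the same session check and the same department filter
    // serve both the HTML page and the JSON API. Helpers are stand-ins.
    import express, { NextFunction, Request, Response } from "express";

    type User = { id: string; department: string };
    type Widget = { name: string; department: string };

    const widgets: Widget[] = [{ name: "gizmo", department: "foo-dept" }];

    // Stand-in for a real session lookup.
    function currentUser(req: Request): User | null {
      return req.headers.cookie ? { id: "42", department: "foo-dept" } : null;
    }

    function requireUser(req: Request, res: Response, next: NextFunction) {
      const user = currentUser(req);
      if (!user) {
        res.status(401).end();
        return;
      }
      (req as any).user = user;
      next();
    }

    // One query, one permission rule, shared by both routes below.
    const visibleTo = (user: User) =>
      widgets.filter((w) => w.department === user.department);

    const app = express();

    app.get("/widgets", requireUser, (req, res) => {
      const items = visibleTo((req as any).user)
        .map((w) => `<li>${w.name}</li>`)
        .join("");
      res.send(`<ul>${items}</ul>`); // server-rendered HTML
    });

    app.get("/api/widgets", requireUser, (req, res) => {
      res.json(visibleTo((req as any).user)); // same data, as JSON
    });

    app.listen(3000);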
I'm really struggling in this thread because it seems like folks are either missing one half of my point or the other half. The idea was that I have to either a) duplicate effort in both my frontend and backend (that's extra work), or b) have no security at all.
The goal of both applications is to display HTML. The duplicated work in an SPA is in the DB API -> JSON API and JSON API -> page steps. These steps involve telling my backend server about how my data are structured and how to interpret them, translating them to JSON (or something else), then telling my frontend the same things. Every SPA I have seen involves doing this work, although some (like Meteor) are designed to reduce it as much as possible. That's the standard work involved in an SPA.
The second half of my point is that you could remove this duplicate work but doing that gets rid of any security. Why is this? Let's look at my SQL API example again. In this example, I have effectively eliminated the duplicate work of making my app server aware of the data's schema. It just runs a query, serializes the blob that comes back, and returns it. So why not do that? Obviously because now I'm trusting the client too much. In a non-SPA world it's fine to trust the entity making queries (for the most part...let's not get derailed). In the SPA world it's not, so we inject the app server in between to do mediation, and we pay the cost.
Very true, but in an evolving application you often pay the cost of that approach because any small change to the data displayed or the behavior of the application will require a change to both frontend and backend (e.g. wiring up a new API), so in practice I think people tend to build in a certain amount of generality. It's a kind of trade-off between the time to design a more general API vs. the time to churn the existing API when we tweak the application.
You're exposing an API in both cases. In one case, your API returns HTML, in the other case, it returns (presumably) JSON. The format of the response doesn't have to have any bearing on what SQL queries you write or what permissions you check - you can implement the data fetching exactly the same way for both cases.
In an ideal world, probably, or using {insert technology here} that does that natively, yes.
In my experience, it hasn't been the case: designing a REST API (I know REST complicates the API compared to tailored RPC) that behaves properly for the client is an order of magnitude more work than rendering HTML with data server-side.
The main difference is switching the question from "what does this page need to render for this user" to "what are all the things the front-end might want to access through this API for this user, and how does the front-end plan on filtering". The former is obviously easier to answer, and as a result easier to code.
I agree 100%, and it all depends on what problem you're solving for. For the 80% case of having an API which just serves one or a few of your own clients with their UIs architected into broadly similar views, I basically just go for an additive model of providing a default RESTful API but then additional view-specific, and then if necessary even client-specific, endpoints on top where this makes sense.
Honestly most of the time you can even just cut out the REST layer except where it overlaps with views by coincidence. For instance in practice there are quite a lot of views which simply get a single resource, or list a single type of resource with paging.
For mutations I am a big fan of just doing it RPC-style. I know it's not to everyone's taste but I feel like abstracting mutations out behind a model of RESTful resources and verbs is, 99% of the time, a waste of time, the notable edge case being where you truly need a flexible API for 3rd parties (both apps with UIs and scripts) where organising functionality in this completely generic flexible way - with the considerable extra effort that you mention being the price to pay - has huge payoff.
> The answer to the question 'how do you know who the user is when they request a server rendered page containing some sensitive data?' is literally the copy/paste solution to how to do authentication on your API - as in, it's basically the same code that must be implemented on the backend in either case :-)
100% agree. But that (authentication) is not what this thread is about.
> The point is it's the same data and mutations, just a different interface over them. There is no security tradeoff of using one over the other.
100% agree again. A properly written application makes no security trade-off here by using an API. But that's because a properly written SPA stack does the work of interpreting the data and its relationships twice, on both the frontend and backend.
If you want site longevity, HTML is the way to go. If your clients aren't expecting to maintain their website for the next 10 years, that's fine; go ahead and use the JavaScript framework.
Tell me that React websites today will still be around in 10 years' time. With constant upkeep and maintenance, maybe. If a website is just a static website, not making it plain HTML is a disservice. The layers of abstraction will take their toll.
So much this. There is tremendous value in longevity. Writing content which is guaranteed to be unusable in a few years is anathema to me.
On my website I have pages which date back to 1993-94. Still readable just fine. I used to post those links to mailing lists back then and they still get regular hits from people in that community because I've also maintained the URLs constant.
In my personal life I serve HTML, CSS and native, gracefully degrading JS (if any), for the same reasons.
In my professional life though, I realised that 95% of the things I build will be changed in the next year or so. We often inherit projects that were only just finished, and our first task is to start changing it entirely.
So I aim for robustness of maintenance rather than robustness of the finished product. It's inevitable that it will change, but I can anticipate that in the same way a well-engineered car can anticipate its maintenance cycles.
I feel like all agencies should stay away from JavaScript frameworks unless they are willing to be open with their clients on how much time will go into maintaining the project.
My agency regularly takes over older JavaScript projects, built in whatever JS stack was trending at the moment. There are always big parts of the stack that are no longer available or in active development. Not to mention the enormous hassle of bringing dependencies up to date for a JS project that hasn't been updated in three years.
Depending on the circumstances we usually recommend moving it to a boring CMS like WordPress unless there are compelling reasons to double down on the JavaScript stack (if it really is more of a web application for example.)
As an industry, this is called job security. It is fascinating to be in an industry where systems requiring high levels of maintenance are preferred by the client.
This only applies to the use case of creating static websites with React. A lot of React websites are dynamic and comparable with websites built using PHP or other server-side frameworks. Even if you write html by hand while using PHP, you still need to maintain the server code and constant upkeep/maintenance will be required.
Amen. React is for experiences where there is interaction. A simple read only site (a la WordPress) is not where React belongs.
Large entities have the need and budget for their applications to evolve. The delta for websites is less intensive. For the latter, reliability and longevity are important.
This is the opposite of what I would expect. React (or basically all JS SPAs) uses components which are essentially HTML elements populated by JSON data. It then stitches them together so that they can communicate via JS.
I can explicitly tie together components/HTML client-side via JS (annoying). Or I can simply render the correct HTML (no need for JSON routes) while I have access to all of the data on the server. Rendering speed, as you mentioned (client or server), is basically never a consideration.
So why bother explicitly writing javascript to bind html components together when I could just simply render the correct server side HTML?
Ajax is fantastic, but you don't really need an SPA.
If you want to be wildly productive, minimize the javascript you write.
the efforts to write code for a javascript frontend framework vs a backend framework are exactly identical. there really is no difference. you save absolutely no effort by keeping html generation in the backend.
the differences are rather in the architecture.
you can win or lose a lot there, by choosing the right or wrong one.
by creating an SPA you win a much simpler backend, you separate data storage and processing from data presentation.
you win the ability to have multiple different frontends against the same backend, or run multiple backends against the same frontend.
you win the ability to upgrade/replace backend or frontend independently.
you win the ability to develop with multiple independent teams because the connection between frontend and backend is smaller and easier to define.
you win the ability to reuse generic backends over multiple projects...
these are a lot of advantages, if you need them.
but there are downsides.
you no longer have a single codebase.
you need to maintain an API
rendering is slower in the browser.
depending on what kind of application you create, you may need to be careful to not put any business logic into the frontend. (anything with business logic goes into the backend with an api to access it)
it should be clear by now that i am a strong supporter of this architecture.
i may miss some downsides of an SPA and i'd love to get more input on other potential downsides, however, loss of productivity from writing javascript is not one.
You are obviously writing more code for client side HTML generation based on JSON than regular server side rendering... I am saving effort if I need to write fewer files...
Your feature requires a JSON route, html rendering, and javascript that pulls everything together. Mine is just a route that returns HTML. Why is there absolutely no difference in effort? You have 3 files, I have one.
separating data from presentation is not new. It has nothing to do with SPAs
we won't be able to answer this without specifically comparing framework by framework. some frameworks make your code more complex, some help keep it simple.
i have used different backend and frontend frameworks, and i found that backend frameworks do not make things simpler in the sense that you are talking about, if you want to create clean and maintainable code.
sure you can put everything in one file, but the number of files is a meaningless metric. more useful is the number of functions and abstractions. an API is an abstraction. you can do away with that if you generate html in the backend, but the price you pay is with a much closer coupling of interface and business logic that will make future changes harder. in any sufficiently large site you can't afford that and you end up rewriting your site to create the same architectural complexity simply because you need it.
SPA frameworks query the server for JSON and then render HTML right?
Simpler in the sense of what? You prefer to not have all of the variables available when you render HTML? It's simpler to only render client side HTML and to query 4 different AJAX endpoints for resources?
Why are you trying to separate business and interface logic? Why are you displaying anything that the business logic doesn't understand?
The entire point of good code is that I can edit 1 file instead of editing 3.
Back end frameworks don't help you organize code? That's kind of their point?
Modern SPA frameworks (-> next.js) offer a similar 1 file code style and require no (manually written) api routes.
The end result looks very similar to the server side frameworks of old, so it's a bit of a circular evolution. I think only using one programming language (the one that is part of the web platform) and a solid, template free component framework make it incrementally better though.
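For readers who haven't seen it, a hedged sketch of that single-file Next.js style (the page name and data shape are invented here): the data fetching and the markup live in one module, and Next generates the request plumbing.

    // pages/widgets.tsx, roughly. getServerSideProps runs on the server per
    // request, so there is no hand-written JSON route to maintain.
    import type { GetServerSideProps } from "next";

    type Widget = { id: number; name: string };
    type Props = { widgets: Widget[] };

    export const getServerSideProps: GetServerSideProps<Props> = async () => {
      const widgets: Widget[] = [{ id: 1, name: "gizmo" }]; // imagine a DB query here
      return { props: { widgets } };
    };

    export default function WidgetsPage({ widgets }: Props) {
      return (
        <ul>
          {widgets.map((w) => (
            <li key={w.id}>{w.name}</li>
          ))}
        </ul>
      );
    }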
How are you "simply rendering the correct server side html"? Presumably with some kind of framework and/or template engine and/or programming language? I don't see how that's all that different.
> If you have the option of writing an SPA, it's simpler quicker to build this way, and so easy to deploy.
It's simpler if you only write the front-end, because it de-couples the work with the back-end.
If you are in charge of both front-end and back-end, server-side rendering is easier.
It's a case of Conway's Law. Knowing that, you may choose to have a back-end team and a front-end team or a team of full stack developers depending on the end result you want.
Personally I think it depends on whether the Web is just another client for you, which consumes APIs like native applications, or if your product is entirely web-based.
> It's simpler if you only write the front-end, because it de-couples the work with the back-end.
This is nonsense.
Both the frontend and the backend influence the design of each other(and for good reason). The notion that you can build one without the other and be done with it is just wishful thinking.
Rendering speed is not the only tradeoff: you're also required to use JavaScript, create an API to serve data (depending on the site this may be overkill), ship huge gobs of framework code to users, work with an ever-changing and complex landscape of tools which mostly demand you convert all your code to use that framework, put up with the slower speed, and run JS on the server side too for SEO.
There are very few upsides if you already serve pages server side and look at current client side js development.
> I have never yet had a client who even mentioned rendering speed
Because they rarely know about it. And why would they? It's a very technical choice. Factoring in things like render speed is your job when picking the most appropriate solution for the problem at hand. When you're doing this you are placing your own comfort and ease above that of anyone using the site. Google already factors site speed into rankings and with all the Web Core Vitals stuff they've been pushing lately it's only going to be more important. You're failing your clients.
"Nooo you can't just use an SPA, it's your job to know the most appropriate solution for the problem at hand! Google already factors site speed into rankings and with all the Web Core Vitals stuff they've been pushing lately it's only going to be more important. You're failing your clients!"
"I love how slow this site is," said no visitor. Ever.
If your clients are like mine, there are likely a lot of things your clients don't ask for. For instance, accessibility. Do we ignore that as well?
You're also not factoring in ongoing maintenance. What's easy for you to build and deploy might be less so for someone else. React, Vue, etc. are great tools - when they are the correct tool for the job - but they are still relatively niche. If you're not taking a "what if I get hit by a bus approach" then you're doing your clients a disservice.
You might want to try being a typical user for a month or so. Use a smaller monitor, an older mobile device, a slower internet connection, etc. Then take that lens and apply it to your SPA genius.
>If your clients are like mine, there are likely a lot of things your clients don't ask for. For instance, accessibility. Do we ignore that as well?
Well, some of my clients are government, so they do care about accessibility. For the rest, I put in some effort, because it's the right thing to do. (As opposed to rendering speed, which I really don't think matters, within reason.)
> Server-side rendering or writing raw HTML are over-rated.
Just to be sure: These two are not the same and server-side rendering does not necessitate writing raw HTML. In fact, I think that most advanced languages used on the server-side have means of not writing raw HTML. There are template engines and some languages offer SXML, which already provides you with a kind of template engine.
Personal opinion: I really don't like having to write raw HTML in the form of it being just a string, allowing for all kinds of markup issues, formatting issues, etc. A programming language or the framework/library used should be aware of the HTML tags and prevent mistakes. SXML prevents mistakes in nesting, for example. In Racket, and I think in other Scheme dialects as well, SXML prevents cross-site scripting by design too. What many, many people get wrong is how they split up their documents. They forget about composability and use only concatenation (looking at you, WordPress themes ...). This severely limits reuse of code. Good templating languages try to nudge you in the direction of making things composable.
It’s also fundamentally impossible for any website/app based around more complex ideas.
I get that most of the web and most of what web developers do these days are just regurgitations of the same basic website designs. But man there is some really powerful tech in JS and WASM just waiting for someone to make brilliant stuff from.
Any JSON request you make to the server in order to render HTML can be replaced by a single request to the server that returns replacement HTML. It's simpler to do the latter.
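A hedged sketch of the two styles being compared (the endpoints and element ID are made up): the first fetches JSON and rebuilds the markup on the client, the second fetches a ready-made HTML fragment and drops it in.

    // SPA style: fetch JSON, then rebuild the DOM on the client.
    async function loadResultsFromJson(query: string): Promise<void> {
      const rows: { title: string }[] =
        await (await fetch(`/api/search?q=${encodeURIComponent(query)}`)).json();
      document.querySelector("#results")!.innerHTML =
        rows.map((r) => `<li>${r.title}</li>`).join("");
    }

    // Fragment style: one request, the server returns ready-made HTML.
    async function loadResultsFromHtml(query: string): Promise<void> {
      const html = await (await fetch(`/search?q=${encodeURIComponent(query)}`)).text();
      document.querySelector("#results")!.innerHTML = html;
    }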
No it isn't. Now you need to keep client state on the server. Have you ever used JSF? It's doing what you describe and it's the worst framework I have ever used. Yes it's even worse than using PHP+Apache without a framework.
My third client was an SPA breaking down from its usage of Redux. They would pull chunks of search results from the backend in a loop (and this could be many hundred chunks of 100 objects each), indexing them in their reducers and causing heavy rerenders. But they "needed" to do that in order to build their filters. All in the frontend. The machine (2019 MacBook Pro 13) would go to full CPU load.
So yeah it happens.
Their old server side solution was near instant.
Definitely, the design flaw was to index it in the frontend in the first place instead of preparing that data somewhere in between so it could easily be consumed by the client, but it still paints a picture of how many people approach SPAs these days.
I think only developers care about rendering speed like that. Most people are happy if the page loads in a reasonable amount of time. UX people probably know what that means exactly.
Amazon, Google, Walmart, Mozilla, and Yahoo all have numbers that say otherwise.
"Amazon and others found that removing 100 milliseconds of latency improves sales by 1%."
I'm 100% sure there are people who care about 1% differences in sales, and the fact that this has been rediscovered independently by different organisations shows that "people" at least "aren't happy" with even very small increases in page load times, at least when making purchase decisions. (Whether that extends to people reading your latest blog post or endlessly scrolling your social media site's newsfeed is a reasonable question, which I'm not sure I've got a good enough handle on to have an opinion worth sharing.)
(From an earlier HN submission today: https://instant.page/ - click the [1] link on that page after the "Amazon and others found that removing 100 milliseconds of latency improves sales by 1%." sentence for sources.)
I don't think rerendering an entire webpage on the server side every time a new chat message arrives is a good idea.
There might be people that misuse SPAs when server side rendering is the better choice but SPAs clearly have use cases that are not possible with just HTML.
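The chat case is worth sketching, since it's the kind of thing plain page loads can't do well. Assuming a WebSocket endpoint and message shape invented for illustration, only one list item is appended per message, with no page re-render:

    // Hedged sketch: push-based chat updates append a single DOM node.
    const log = document.querySelector<HTMLUListElement>("#chat-log")!;
    const socket = new WebSocket("wss://example.com/chat"); // made-up endpoint

    socket.addEventListener("message", (event) => {
      const { author, text } = JSON.parse(event.data) as { author: string; text: string };
      const line = document.createElement("li");
      line.textContent = `${author}: ${text}`;
      log.appendChild(line);
    });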
> I don't think rerendering an entire webpage on the server side every time a new chat message arrives is a good idea.
Sure, but that's pretty much cherry picked whataboutism when the parent comment we're talking about said: "Most people are happy if the page loads in reasonable amount of time".
So apart from people writing chat messaging widgets for websites, do you have data to contradict the conclusions about webpage load times and rates of change of user behaviour that can be obviously drawn from the Google/Amazon/Walmart/Mozilla/Yahoo data?
Rule of thumb I’ve heard is, in order to feel “natural”, you should be at:
- 10ms response time for interactions (scroll, click, entering text)
- 100ms for in-page context switch (changing tabs in a dashboard, loading the next picture, etc.)
- 1000ms for page load
these are obviously rough guidelines — page load in particular is obviously gonna depend on your user’s internet speed if you have any kind of media — but they’re helpful when thinking about “reasonable”
If everyone has only shit sandwiches people won't say they don't like shit sandwiches, but just don't enjoy sandwiches in general. Then when they try a ham sandwich for the first time they will tell that this one is better and they prefer it to others.
Modern web is serving people mostly with nice looking shit sandwiches.
If any of your clients rely on "pay-per-click" advertising, your care-free attitude toward page load speed will simply cost them money. The difference between a subsecond load time and a 5-10 second load time is a lot of lost revenue. Not a lot of people will wait over 5 seconds for your fancy webapp SPA to load. They will click back or just exit.
It has nothing to do with standards, businesses don't care about standards in the long run, they care about making money.
If I write something in Django, I can let the forms be auto-generated to whatever degree I like. If I do it as an SPA, I essentially need to do twice as much work, as I have to write a backend to serve and consume JSON and a frontend to display and interact with it. Then you've got the nightmare of debugging the mess.
I am an old developer and I completely disagree with some of the advice. Specifically, a lot of older developers think we should be delivering thin-client applications, which was how the early web did it, but the early web did it for two important reasons. The first was bandwidth, and the second was that Sun had a big hand in the early web, and Sun saw the world as selling big iron that a bunch of dumb terminals with low processing power connected to. It was a dumb idea and Sun partly died because of it. It is a painful user experience and I would dread trying to build modern web-delivered applications via the old page-post model. If anything, I see today's SPAs as correcting a world gone wrong and finally killing the completely stupid idea of dumb terminals once and for all. They also got rid of Flash and a host of other browser plugins.
To sum it up, I was programming before the web existed, I hated the old page-post model, and I don't build web pages, I build large apps. I am happy that I can deliver them via the web and I am happy about the state of the art. I think it can be improved on quite a bit, but not by going backwards.
1) Nobody has any idea what they're doing
2) If you think you know more than your manager – you are absolutely right
3) HN is 3 years ahead of mainstream, but 10 years behind the edge
4) React was made by an OCaml programmer
5) If you want to be that good, learn emerging languages (all of them)
6) Don't optimize for money too soon; if you follow these instructions you will quadruple soon enough
7) Whatever your problem is that is holding you back – drinking, eating, whatever – fix it today
8) Learn Clojure
9) Learn a statically typed language with a good type system
Even if you end up preferring mainstream language X, learning a language that makes you think in expressions rather than operations, and a language that makes you think in contracts rather than knowing the runtime state in your head, is invaluable for learning how to design, structure and maintain a codebase
Agreed. If you grok a best-of-breed dynamic language (Clojure included), and also a best-of-breed statically typed language, you're in rarefied company. And I would add "Learn C or assembly" to grok mechanical sympathy.
What you say sounds interesting, but could you give an example of "thinking in contracts" compared to an example of "knowing the runtime state in your head"?
Personally I have never really seen the advantage of statically typed languages over, say, Python, but I do rely on the strict typing supplied by a database instead.
What I mean is that static typing makes interface boundaries (modules, function parameters and return types, data structures) explicit, and those boundaries are enforced automatically. Those boundaries become a contract which a provided interface and its call sites must agree on in order for the program to run. This is like a million little unit tests you get almost for free (obviously you have to write the types, but they're usually declarative and very expressive).
But it also serves as documentation of how any part of your program's state is structured at any point. To the degree a type system is sound, and to the degree your types are appropriately detailed, you can be assured the documentation is correct and up to date as long as the program compiles. This allows you to eliminate the question of "what does this data look like?" when you're reading code, and focus more on things like whether the logic is implemented correctly.
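A small TypeScript illustration of the "types as contracts" idea (the Invoice shape is made up): the data's structure is written down once, and every caller is checked against it when the code compiles.

    // The shape is the contract; callers that disagree with it don't compile.
    type Invoice = {
      id: string;
      issuedAt: Date;
      lines: { description: string; cents: number }[];
    };

    function total(invoice: Invoice): number {
      return invoice.lines.reduce((sum, line) => sum + line.cents, 0);
    }

    // total({ id: "a1", lines: [{ description: "x", cents: "12" }] })
    //   ^ rejected at compile time: `issuedAt` is missing and `cents` is not a
    //     number, so "what does this data look like?" is answered by the type.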
Since you mention it, I'll add that Python does have (optional, gradual) static typing, and a lot of work has gone into the type system in recent versions. It's worth spending a little time adding some static types in a project and seeing some of these benefits. When I last worked in Python, you had to run `mypy` to actually fail on static type errors, I don't know if this is still the case. But you could also use an IDE (like PyCharm, surely there are others) which will at least surface static type errors while you work.
I have been using Python type hinting recently, but I don't find it such a big game changer. Maybe I am just used to writing code without it and it will become clearer if I use it more. Like I say I don't like having an untyped database, and the benefits there aren't things that are obvious straight away.
When I last worked in Python, I also didn't get much benefit, but I wasn't using either mypy or an IDE that warns on type errors. While obviously a different language, it wasn't until I worked with TypeScript in VSCode that I really came to appreciate static typing. And after a few years, I don't think I could go back to working without it.
What is your reasoning behind the recommendation to learn Clojure? I am generally unfamiliar with it but saw it was a general purpose language. What makes it so valuable to be included in your list?
Clojure is indeed mind-expanding. Unfortunately its most compelling ideas don't translate to non-LISPs, so I must sadly disagree with the "learn Clojure" suggestion. It's wild fun, but it's hard to apply the ideas to DayJob.
An example: threading macros. In a language like Haskell, partial application is optimized for left-to-right. `f a` substitutes the first argument; other orders are more awkward. Functions like `map` deliberately place the "most known" arguments first to aid in partial application. If they guess wrong the code gets twisty.
But Clojure's threading macros compose functions, while just letting you say where you want the argument to go:
(as-> [:foo :bar] v
(map name v)
(first v)
(.substring v 1))
Here `as->` creates `v` as a meta-variable which means "the result of the last function". The code seamlessly mixes the declarative and imperative. It's like nothing else.
Sadly no DayJob language has features like this, and Clojure has other weaknesses which become apparent rapidly. Clojure learning will leave you yearning.
> Clojure has other weaknesses which become apparent rapidly.
Clojure has lots of strengths which also become apparent the more you work with it: async, thread-first, thread-last, transducers, core.logic, (partial f a), flexible composition, expressiveness unrivaled in DayJob languages, hundreds of functions that just work on whatever data you're dealing with, tapping into Java "dayjob" libraries without having to deal in actual Java. The list goes on.
I've re-written a few applications & libraries from javascript (and other dayjob languages) -> clojure(script). In every case, the lines of code drastically dropped, the functionality expanded, the readability improved, and performance was always on par or better. It's not a silver bullet for everything, but its strengths far outweigh its weaknesses. Like any good mind-expanding drug/language.
> Clojure learning will leave you yearning.
It has been my experience with learning clojure that with each new problem, it has left me yearning ... to learn the more elegant, composable, flexible, robust solution ... in clojure! YMMV.
> Clojure is indeed mind-expanding. Unfortunately its most compelling ideas don't translate to non-LISPs, so I must sadly disagree with the "learn Clojure" suggestion. It's wild fun, but it's hard to apply the ideas to DayJob.
I must say, having learned Clojure (and ClojureScript) and used it in production in my day job (we even still have one ClojureScript project in production), I strongly disagree. I would probably not choose to use it again, mostly because I've come to prefer static typing, but quite a lot of what I learned from learning it (and learning its idioms) has greatly improved my work in other languages.
In particular, Clojure's approach to state is something you can (mostly) apply in most mainstream languages. Handling most runtime state as immutable values, with clear and explicit use of mutable state for the cases where it's either necessary or makes the code easier to understand/more maintainable, is a huge boon for developers working in imperative languages.
In my current job, I've seen the quality of projects improve drastically as I've led by example with that approach, and as I've asked for changes like it in code review. I've seen the volume and severity of bugs decrease, developer productivity improve, and morale trend upward.
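A minimal TypeScript sketch of that style, with hypothetical names: state is modelled as immutable values, and the one place mutation happens is explicit and easy to audit.

    // Sketch only: state as immutable values; updates return new values
    // instead of mutating in place.
    interface CartState {
      readonly items: readonly { readonly sku: string; readonly qty: number }[];
    }

    function addItem(state: CartState, sku: string, qty: number): CartState {
      // Produce a new value; the previous state stays untouched and can be
      // logged, diffed, or kept around for undo.
      return { items: [...state.items, { sku, qty }] };
    }

    // The single mutable reference is explicit and isolated.
    let currentCart: CartState = { items: [] };
    currentCart = addItem(currentCart, "sku-123", 2);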
Never used Lisp, but this sounds like a case of offering you loads of cool stuff when writing new code, which in turn will make debugging the code an absolute nightmare. This is based on my experience with mixed-paradigm languages like Python. You can do some clever stuff like generating functions dynamically and passing them around, but when you need to debug those bits of code they can be a nightmare. Does Lisp improve on this in any way?
I believe that macros can increase the complexity of debugging.
However, I believe they're much more debuggable than dynamically-generated functions in most languages. There's another layer of translation from your baseline code, but due to the nature of macros, you can usually just step right into them and see the expanded, generated macro code, if my rusty memories of debugging elisp are correct.
Here's one person reading about Common Lisp's debugging infrastructure, FWIW.
For me it's hard to separate it from reading SICP, which, as a self-taught developer, I feel brought me to a different level of understanding concepts like immutability, the shortcomings of OOP, streams, eval/apply, and so on.
I’ll second this. You don’t even have to go all in and spend a year programming it. Just dipping your toe into SICP/Lisp every once in a while is enough to shock you into realizing that there’s a much broader scope of possibilities in programming that has not been effectively exposed to you or the masses.
SICP really gives you a “full-stack” appreciation for software (from applications down to compilers and interpreters). It’s horizon-expanding.
>3) HN is 3 years ahead of mainstream, but 10 years behind the edge
I think you're using a weird definition of 'edge' here to mean "things that nobody else knows about" rather than the cutting edge of the industry/academia.
Academic publications occasionally float across the front page here that have only been public for a few days. That's pretty much the 'edge' of industry-wide knowledge.
If you want to include things that aren't public, that's super vague and sort of pointless.
Probably because it's easier to quickly grow a free product/platform and then add monetization on top, than have users pay for access or endure a worse ad-cramped experience.
I’ve only seen a couple of repos like this, but I’ve seen enough to agree that this is totally the right answer.
To people who doubt it, keep in mind that 10 years is a very very long time. Rust is not yet 10, Go is just barely 10. React is 7 and jQuery is 13. Cloudflare is 10, Stripe is 10, Hashicorp is 8 and Docker is 7.
Can you point to some worthy examples of this as existence proofs? And I mean something truly novel, not simply a different way to do something we can already do.
How many projects are there to automatically API-ify a database with auth boilerplate today? At least a few well funded ones, one recently funded by YC even.
They came up with it in 2015? 2016? and still have a thriving project going with it. Looks like less than 10 people have more than 10 commits in its history:
Having followed the PostgREST project for years, it’s been wild to see the recent explosion of similar sorts of projects (as you describe); I’d very much believed this was a “solved problem”.
And Solr in 2004, but a json store returning json is not really the same as building a highly performant SQL client that works over http with support for operations on ~45 to 50 data types.
>How many projects are there to automatically API-ify a database with auth boilerplate today?
That's a really vague statement but you described phpmyadmin, django, wordpress with a plugin, etc. Most successful CRM companies essentially have some framework doing this underneath as well.
Hi, if possible, could the link please be updated to point to the original post at https://beesbuzz.biz/blog/2934-Advice-to-young-web-developer... ? The Tumblr automatic crosspost isn't really intended to be the canonical version, and I wouldn't mind getting a server load test while we're here.
Sorry about that, Fluffy. Your Tumblr feed is listed first in my reader and I was moved to submit it almost immediately after reading it. Nice to see you, BTW. Since I never post about my life anywhere I end up just being a lurker in yours, but I'm glad to be.
> Always validate your data server-side; anything that comes from the client is suspect.
At least sanitize in a way that won't break the server but will throw an error. For internal applications and side projects it's ok to just respond with a 40X or a 50X and move on.
> To the developer, “isomorphic” code breaks down the barrier between client and server.
"Breaks down the barrier" sounds great, but it's actually has been rather detrimental. "Isomorphic" is confusing even to the senior developers. Having a very clear delimiter of what runs in the server and what runs in the client is essential, and it makes your application much simpler to reason about. Take `isomorphic-fetch` for example... a request from a server to another API server has very different requirements and nuances than a request from a browser to a server.
> At least sanitize in a way that won't break the server but will throw an error.
If you need to sanitize to avoid breaking the server, then the server is already broken. Also, never sanitize: validate on input and escape/encode on output. Sanitization (meaning removing or "cleaning" invalid input) is the wrong approach.
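A rough sketch of that split in TypeScript (hypothetical handler code, not from the article): reject invalid input with a 4xx, store the data as-is, and escape only when rendering output.

    // Validate on input: reject rather than "clean".
    function validateComment(body: unknown): { text: string } | null {
      if (typeof body !== "object" || body === null) return null;
      const text = (body as { text?: unknown }).text;
      if (typeof text !== "string" || text.length === 0 || text.length > 2000) return null;
      return { text }; // stored verbatim, not sanitized
    }

    // Escape on output, at the point where the data meets HTML.
    function escapeHtml(s: string): string {
      return s
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
    }

    // In a hypothetical Express-style handler:
    //   const comment = validateComment(req.body);
    //   if (!comment) return res.status(400).send("invalid comment");
    //   res.send(`<p>${escapeHtml(comment.text)}</p>`);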
OTOH, if you use Flow or Typescript, it is amazing to be able to share type-safe interfaces across the stack (not code). I suppose you could get this from any compiled-to-JS language too.
I've been using JSON Schema to define shared data structures like request/response data, and QuickType to generate Go structs (back-end) and Typescript interfaces (front-end), it's been great so far.
The realization that this permits end-to-end fullstack type safety and allows some awesome things (like surfacing breakage at compile time in response to changes to the data model) is a quietly brewing revolution.
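As a rough illustration (hypothetical names, not the commenters' actual setup), a single TypeScript interface can type both the server handler and the client fetch, so a change to the data model shows up as a compile error on both sides:

    // shared/api.ts: the contract both sides import
    export interface UserSummary {
      id: string;
      displayName: string;
      joinedAt: string; // ISO 8601
    }

    // server side: the handler's return value is checked against the contract
    export function getUserSummary(id: string): UserSummary {
      return { id, displayName: "Ada", joinedAt: new Date().toISOString() };
    }

    // client side: the same interface types the parsed response
    export async function fetchUserSummary(id: string): Promise<UserSummary> {
      const res = await fetch(`/api/users/${id}`);
      return (await res.json()) as UserSummary;
    }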
This is in constant tension with the point that GP is making though.
If you’re not _very_ careful with your software architecture, you can inextricably tie your frontend and backend applications together. Depending on the size of your product, or what the growth plans for that particular frontend/backend app end up being, this might not be a problem.
If it becomes one, though, then every bad technical design decision made in that `common` set of modules or packages ends up hurting you as it gets unspooled from (at least) two components that likely have very different architectural idioms.
When you really stop and unpack this, you can see that this is a non-issue. Adding an object or a field to the schema (the most common operations in a growing schema) never interferes with dependents and can be performed independently of client adoption.
The remaining coordination issues can be solved by documenting or by using e.g. a `@deprecated` annotation in TypeScript. And, of course, you can always just remove a field to see where the code has dependencies on it (where code breaks during compilation).
The concerns about shared code may apply in some cases, but the global schema is not one of them, in my experience. I do agree it takes a bit of extra thinking to do this correctly but it's really not that difficult.
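For instance, a small sketch (hypothetical field names) of the additive, deprecation-friendly change described above:

    // Shared schema module (hypothetical). Adding `avatarUrl` is
    // backwards-compatible: existing dependents simply ignore fields
    // they don't read.
    export interface UserProfile {
      id: string;
      displayName: string;
      /** @deprecated prefer `displayName`; remove once all clients migrate */
      username?: string;
      avatarUrl?: string; // newly added, optional so old data still type-checks
    }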
> To the developer, “isomorphic” code breaks down the barrier between client and server. To a malicious client, it means they have control over the server too.
Huh? I don't follow this. Sharing some helpful util functions between the frontend and backend doesn't allow malicious clients to control your server.
What I was trying to (clumsily) say is that when people develop "isomorphic" code they tend to forget that some of it runs on the client and some of it runs on the server and the interface between the two cannot be trusted.
Form validation is a common example of where things can break: someone injects malicious data that bypasses the client-side validation, and then the server assumes that the data has already been validated by the client.
And if you don't even know which code is running where, that makes it even more dangerous.
Hmm I guess since it's advice to young developers they might make that mistake but any slightly experienced developer knows not to trust anything client side. Not validating data from the client isn't exclusive to isomorphic code.
Think of transpilers like GWT, where you write java that's converted to in-browser Javascript; or the current trend of server-side Javascript.
(For what it's worth, both are really really cool technologies!)
Edit: But I also suspect this relates to a temptation to write browser-side JavaScript without thinking about the server side or the database; i.e., browser-side JavaScript that just loads objects willy-nilly via fetch/AJAX, creating lots of round trips.
Maybe they are thinking of services like firebase where much of the traditional server logic is now handled in the same js libraries that the frontend uses? But I'm also not sure what exactly the author is referring to.
> Infinite scrolls are inhumane. People need to be able to reach “the end.” There are forms of eternal torment described in religious texts that are less mean.
Infinite scrolls are ugly, but I'm not sure artificially "paginated" text is better. The worst thing about infinite scrolls is that you cannot search for a word in the whole document (just in your visible part of the screen, and a screenful below that).
Why does infinite scroll have to suck on the web, yet you never see pagination in mobile and desktop software?
It makes me think that it's not inherently bad, it's the technology that's lacking.
If the file explorer split a 1000-file listing into pages of 10 files each, it would drive anyone insane. The "Artists" list on my phone's music player can probably be measured in meters, yet it's a joy to use because momentum scrolling is very good.
Your phone’s music player’s list of artists does not use infinite scroll, which explains why you like it. If you scroll to the bottom of your list of artists on your phone, you will see “ZZ Top” and nothing special will happen – you’ve reached the end. File explorers, which you mentioned, also do not use infinite scroll.
“Infinite scroll” means you can never reach the end, because every time you get close to the end, more content is downloaded and your scroll bar readjusts. There is theoretically some end to the content, but most users will never reach it. Examples are Twitter and Facebook feeds.
I really hate the trend of having infinite scroll on blogs/news sites. I just scroll to the bottom looking for the comment section and suddenly I'm on a completely new, unrelated article? It's like forcing information into my brain, or someone suddenly placing a newspaper in my face.
the worst is when the "About Us", "Company", "Help", "Contact Us" page links are at the "bottom" and you want to click on them... momentarily appearing and disappearing...
Ironically, it uses tumblr, so the damn back button is captured (try getting back here).
This is my fave:
> Infinite scrolls are inhumane. People need to be able to reach “the end.” There are forms of eternal torment described in religious texts that are less mean.
I do appreciate the author's emphasis on accessibility! However, as you read more bullet points, you start to realize that the author is really just venting about how much they hate modern javascript development (SPAs, VDOMs, client-side rendering, etc.). It gets pretty obvious about 10 bullet points down that this is the overwhelming theme of the post.
Not that any of this advice is bad - a lot of it I agree whole-heartedly with - but it overestimates the amount of influence developers, let alone junior developers, have on the final design of a corporate site. If management/design want infinite scrolling then that's what they get, whatever the developers say.
Advice: don't make everything slow as shit please. Note, this most likely means writing your own trim left function and not depending on zillions of libraries. If you can't write something as simple as this, please don't! Just output a static web-page that loads ultra fast. Thanks young web developers!
> Even if you need to preserve client state between page loads (for e.g. music or video playback) you can let the browser do most of the heavy lifting by fetch()ing a new page and replacing your content container at the DOM level.
Does anyone know of an easy-to-learn example of such DOM replacement?
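A minimal sketch of the idea (hypothetical container id; a real implementation also needs to handle history, script re-execution, focus, and errors):

    // Fetch the next page's HTML and swap only the main content container.
    async function loadPage(url: string): Promise<void> {
      const res = await fetch(url);
      const html = await res.text();
      const next = new DOMParser().parseFromString(html, "text/html");
      const incoming = next.querySelector("#content");
      const current = document.querySelector("#content");
      if (incoming && current) {
        current.replaceWith(incoming);   // the node is adopted into the current document
        document.title = next.title;
        history.pushState({}, "", url);  // keep the address bar and back button sane
      } else {
        window.location.href = url;      // fall back to a normal navigation
      }
    }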
Window load events fail to fire this way, among other likely problems. You'd have to specifically design around doing it, or do something else, like putting the whole page in an iframe with your video/music player floating outside it, or putting the player back inside once the page has loaded.
1) Prevent reloading of the page every time the user clicks a link, reducing the overhead of fetching common content between pages multiple times (headers, menus, etc) and providing a more seamless experience to the user.
2) Better emulate the feel of mobile applications, for users who spend most of their time on their phones and rarely or never interact with a computer. Yes, those exist.
3) Improved network performance (by sacrificing initial load time). Once the browser downloads and caches the bundle on initial load, the user can revisit the page regardless of their connection stability or speed.
This isn't to say SPAs are a perfect choice in all cases (they also have just as clear drawbacks), but saying "it's what everyone does" is the only reason they exist is downright false.
> Prevent reloading of the page every time the user clicks a link,
You don't need a SPA for that. And page load/render times are much shorter when HTML is delivered straight to the browser, for most applications.
> Better emulate the feel of mobile applications,
That's a pretty weak defense.
> Once the browser downloads and caches the bundle on initial load, the user can revisit the page
Given we're now in the world of Continuous Deployment, I doubt the cache lasts long.
All 3 of those reasons sound very reaching to me.
And here's the kicker - doing a SPA well enough to be seamless and performant is hard enough that most sites suck if they're delivering a SPA.
Also, if, for example, you do your payments or other sensitive pages in a SPA, you're still running 3rd party scripts on those pages for no reason. Let me know when unloading scripts(&their in mem code) is doable.
> Also, if, for example, you do your payments or other sensitive pages in a SPA, you're still running 3rd party scripts on those pages for no reason. Let me know when unloading scripts(&their in mem code) is doable.
Are you legitimately worried about not being able to unload third-party libraries from memory in the browser??
Few things annoy me more in this world than when someone "responds" to arguments by quoting half of them to construct and attack a strawman instead of the argument they're supposedly discussing.
Please don't dilute conversations like this; it doesn't do anyone any favors.
The way in which we consume and deliver via internet has moved on from the early days of the web. It's no longer just text, images and links. There's much more to it now, and SPAs are a reasonable step forward to providing richer experiences online with more functionality.
Not sure you could ever call them an anti-pattern.
I'd say a big part of the web is gaming, chatting, interacting. A lot of that content updates every few seconds. Without JS, websockets, ajax requests etc, do you suggest users load the entire page again whenever they want to see an updated view?!
Or is your opinion that it should only be possible to do those things in native, purpose-built apps? Because in that case, how do you justify using an operating system? Perhaps we should exclusively use purpose-built assembly code that you flash to some writable media every time you want to run a particular program!
Threading could be clearer. Less data would need to be loaded on each page load. I can imagine that a lightweight SPA to handle a few standard views (e.g. listItems, itemDetails, profile, changePassword, submit) would cut down data transfer by a significant margin (20-30%, maybe more). The UI could also be much better on mobile, and we could give users the option to cache certain pieces of content offline.
I don't think printing is a viable solution for most users. I'm guessing most would want to download the comment threads to their phone so they could read them later in situations where connectivity is patchy (eg train/plane etc).
Right. You can do this now by loading the page in a background tab, leaving the tab open and then going offline. Works great.
Or you could reimplement newsgroups (but with less interoperability) as an SPA, which in theory would work great (once you work out the kinks) but in practice would suffer from all kinds of little issues in the from-scratch reimplementation of everything. For example, printing would probably forever be a wishlist item in the backlog.
What you’re suggesting is heresy on HN. It’s true, a little bit of interactivity, fonts/padding/spacing would improve HN. To this day my thumb covers both the upvote and downvote buttons.
Yes, the UX on mobile leaves a lot to be desired in my opinion. The two things that are going for HN in its current form is the fact that it's lightweight and fast.
A lot of people think that moving to an SPA would mean killing both those aspects. But the truth is, you CAN have an SPA that is both lightweight and fast - the developer just needs to know what they're doing.
I also love when i visit a website with a simple markup and minimal js. Not everything should be a react/graphql isomorphic SPA, but have you checked NotionHQ[0]? The whole thing is a single page react app. People use it to build wikis, manage projects and even create blogs and job postings. All that using client-side rendering. It doesn't load as fast as hackernews but most people seem to love it either way.
* That is the design pattern dictated by your large framework.
* Maintaining state is absurdly simple, but it requires original code if not using a big framework.
* The browser provides a simple standard API for interacting with HTML, but your framework provides abstractions you didn’t know you could live without.
That’s it. Developers twist themselves in knots trying to qualify their opinions as anything more valid than what sounds like incompetence, but with any level of informed discussion it’s clearly about competence (or insecurity).
I'll offer up another reason, which I feel is a little more accurate:
* Your web app requires heavily interactive bits, such as smart forms, smart tables, previews, content editing, etc. To deliver such functionality in a maintainable way, you use a framework. (As a side note, yes it is possible to deliver this functionality in vanilla JS, but to make it maintainable one must essentially build an ad hoc framework.) Because most frameworks push users towards a SPA, you end up following the path of least resistance and developing a SPA.
I would argue most websites never require any of those. When any of those are required they don’t need a giant framework to achieve maintainability. All that’s needed to achieve maintainability is a competent developer who values both simplicity and written documentation.
This sentiment frustrates me. The majority of web developers today are application developers. I've spent the past 5 years delivering business applications via the web as a platform. The web has a lot of advantage as a platform: no install, cross platform, networked by default. But the web was not designed as an application platform, so making it into one is complex. Business applications have inherent complexity as well, and managing all that complexity is a non trivial task.
Posts like this frustrate me because you seem to be suggesting complexity in the web domain is largely incidental.
I would qualify this:
The web is simple when one uses it as intended. (hyperlinked documents that may or may not have interactivity beyond the hyperlinking and embedded media or animations etc.)
The web is complex when one attempts to do "GUI by RPC".
If one is OK with page reloads, this async remote folder browser HyperCard thingy is fine.
If one seeks to replicate the event-loop desktop "window"-based application one will have the use-case for large frameworks as one will literally be re-inventing all the wheels, which they tend to do, as this is not the default behavior of the application we call a web browser.
Complexity in web products is largely the result of developers who don’t care because they need to ship a product only if that product is written in an extremely familiar way. That’s a product of people who don’t know what they are doing opposed to inherent technological impediments.
I form this opinion as a professional web developer with 20 years experience.
I respect your experience, but that does not match up with what I've seen, and what I've seen by proxy networking with other developers in my area. I do personally think simplicity is a virtue. I always try to collaborate with our business team to deliver only what is needed, in the most straightforward way. But if the customer needs a 20 field form, with logical dependencies and validation between fields, as well as an interface for searching/filtering/editing entries, there is only so much that can be simplified. And yes I know all of that can be done with vanilla JS, but I've done it both ways and I prefer the framework.
Just to chime in here. To be clear, I think dropping a vdom implementation into a page is a good middle ground between full blown “framework” and “roll your own js”. I’m not familiar with vue, but I’ve used React (a library!) to great effect on pages that require that extra bump of behavior.
It’s such a shame that many of these tools are pushed as large SPA solutions when they can just as easily be included ad-hoc over a CDN when necessary.
* And I agree that it’s just madness to wire up your own components once the use case becomes anything more than the most trivial behaviors.
I always enjoy a good faith discussion, but if you aren't going to respond to what I actually wrote there's no need to continue this. I _know_ the functionality I mentioned can be shipped without a framework. I _have_ shipped web apps without a framework. I've personally found the overhead of designing and enforcing an ad hoc framework to be not worth the effort.
...almost... i'll accept "an SPA on one server-side route for that one complicated page that's full of bells and whistles."
and then I'm still going to say "yeah, I'm roughly equally verbose and spaghetti-like in even the best, most prescient framework from the gods themselves" and merely wish for the means of modularization: breaking my server-rendered page that my javascript "unfolds from" into further little bits that can each have their own smaller, less-spaghetti-like world without having to try to dynamically load and include javascript on the client.
Break it up. Divide et impera...
Another big reason is that developers want to deliver a specific user experience that is simply not possible with the default page loading functionality of most browsers.
You mean the specific experience when back buttons aren't working? Or when URLs don't make sense? Or when you can't share the content by copying the URL because it no longer identifies the content? Or when you can't tell if something is loading, missing, or broken?
This is my user experience with SPAs regarding navigation.
PHP wasn't good, fine maybe, though I understand it's better now.
It was kind of slow, all the functions were inconsistent, and it had a lot of foot-guns. It was easier to write more robust code and model-view-controller code in other languages (not that it was impossible in PHP).
It’s hard to make the claim that PHP is a bad language and also claim that very similar complaints don’t also apply to Javascript. They’re both languages defined by their popularity in spite of how poor the language actually is.
That’s arguably the best reason to develop an SPA. But I don’t think the linked article is arguing against that as much as arguing against using “SPA think” in places where the user experience really doesn’t require it. There are web sites which are basically magazines/blogs whose reading functionality relies on JavaScript. Building a web site like it’s an app may give some advantages to the developers, but it’s almost always a slower, and sometimes more fragile, experience compared to just delivering HTML.
Agree we should deliver the HTML directly to the browser in most cases. SPAs are often abused, and some 'webapp' sites need to load for more than 10 seconds just to download the JS files...
On the other hand, we want to deliver a reactive experience.
LiveView is an interesting approach. It returns the HTML with all content at the initial GET request, then applies partial UI updates from the server. It processes the user interactions and application logic on the server. This way you don't need to explicitly maintain a web API at all, because the render and update functions can directly access the data repository on the server.
And we don't need to do validation twice.
LiveView is implemented in Elixir and TypeScript; I've also seen similar implementations in PHP and Python.
Excellent list. I would add: sometimes it's tempting to rewrite an entire site or application to get away from a world of problems, but you could just as easily be walking into another set of issues at the expense of lost time and existing code value if you aren't careful.
As someone who uses a SPA (React) for webapps - I find it very useful. I run into issues that wouldn't happen if building purely html/css/vanilla js and when that happens I fix the issue.
The biggest benefit of using a framework IMO is that you're given a fairly strict structure to work within, which makes organization a lot easier. For someone who isn't a master, having some rules "baked in", helps a lot.
That being said, I wouldn't use React to build a static page/site, that makes no sense.
People are motivated to learn and build what inspires them. Some started web dev because they wanted to make the type of projects they saw in chrome experiments that only run on the latest version of Chrome. Some people see very complex SPA apps and are inspired to make complex tools that can't just be done with HTML. Some get into web dev because they had a passion for browser and flash games. This list isn't wrong but also doesn't take into account the huge variety of web developers.
Advice to Young Frontend Web Developers: Use React. HTML is not a rich enough framework to do client work.
Second article in a row here advocating for HTML purism. That is where I jump in and talk about how React changed my career in 2015. Since then, it has had a measurable impact on my life because of how easy building apps became. There is no app where I would consider not using React, including landing pages.
Unless you need your stuff to work on older devices. HTML is really really good at what it does, and often it's good enough. Yeah, don't be a purist, but html and css is just fine for landing pages and static sites.
Yes, and even with React, one can use a static site generator to turn it into HTML, CSS, and minimal JS, so I don't see any reason not to use React, beyond the oft touted ones.
> Use polyfills to support browsers that don’t yet support the standard you’re using.
Unless your target users can't switch their browser (it happens), I tend to feel this is slowing new browser adoption. Developers keep coddling users by making sure everything works, and users have no reason to upgrade. Make their obsolete browser function like an obsolete browser and they might just update.
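For reference, the pattern the quoted advice describes usually amounts to feature detection plus conditional loading; a rough sketch (the polyfill path is hypothetical):

    // Only pull in the polyfill when the browser lacks the feature.
    if (!("IntersectionObserver" in window)) {
      const s = document.createElement("script");
      s.src = "/vendor/intersection-observer-polyfill.js"; // hypothetical path
      document.head.appendChild(s);
    }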
This is a good list. Not great, and not applicable when you're at a crappy organization with bad traditions--likely many young web dev's position. This is much more useful to me now that I'm in a good position at a great company with a good culture.
Grr... I read through the EmberJS tutorial and thought everything was very cool; only to find out that the whole thing is rendered with in-browser Javascript.
Infinite content can still be paginated, grouped, or whatever you like. For example think of Google search results.
Pagination gives mental boundaries, navigation markers and shortcuts to items.
Reading through infinite scrolling is psychologically addictive for a few reasons.
Another complaint about infinite scrolling is more technical: Implementations tend to forget your location, and/or significantly change the data when you return to the page. So if you leave an infinite-scrolling page and come back to it, you often can't continue from where you branched off.
In addition to not remembering the position and state, sometimes scrolling back down to where you were before takes a very long time. I've had occasions where I've had to scroll for ~20 minutes to reach an item I knew was there because I'd just seen it a few minutes prior, and forgot to use the "open in new tab" workaround when clicking on it.
My bank's phone app is bad for this. Scrolling through my transaction history takes ages. If I want to look at last month, no problem. If I want to check a transaction from last year, I'll have to scroll down and wait for a bit more to load, over and over and over.... Then if I click the transaction to view details, then return back to the history to look at more, I have to do the whole annoying scroll-and-repeatedly-wait dance all over again. For every transaction I want to look at.
And that's a mobile app not a browser, so no option to "open in a new tab". Mobile apps with all these flaws are horrible.
(Thank goodness browsers provide the "open in new tab" workaround for webapps that have linkable items. Pity it's broken by badly written webapps that don't make items linkable.)
The infinite scroll problem that I feel comes up most is that if there was a useful piece of information in the middle of an infinite scroll, it is likely impossible for you to return there with any sort of reliability. You can't save the URL because, in most cases, it probably doesn't encode anything useful about where you are. You can't logically organize 'where' this information is because its location is pure happenstance. It is pointlessly ephemeral. I can't start on page 5, I can't start on page 100. I have to scroll and just pray I get lucky, which is the function of a slot machine, not someone who's trying to present me information. If I _really_ wanted that much more content, I am more than happy to click the next button.
You don't dislike infinite scroll, you dislike "Social Media Feeds".
Imagine you are on a blog and can scroll back through all the posts chronologically in an infinite scroll. The location of that data isn't happenstance. You can always address a particular post or scroll to a certain date.
"Social Media Feeds" are bad not because they're infinite but because the sourcing algorithm is obscured from you likely to drive "increased engagement" or something nefarious like that. Pagination isn't going to fix social media feeds.
My take is: just because you can doesn't mean you should. It seems like there is no such thing as truly infinite content, and if you feel like you are in a case where there is, adding constraints will probably improve your product/UX.
I didn't get that sense from the post. I took that they were advocating for basic HTML/JS when appropriate. A reminder that it's still a valid way of doing things.
If you're building an app, yeah use React.
If you want to put some contact info on the internet, there's no reason it can't be vanilla HTML.
Looking over the article again, I'd say you're right. It was the statement "The web is built around server-side rendering." that made me feel that way. As a whole, you're right.
However, I didn't think everyone was out making landing pages with React and VueJS. Maybe they are.
You could also use Gatsby to generate static sites in that case -- IMO React offers enough developer quality-of-life features that it's pretty hard to justify not using it.
Why are infinite scrolls so demonized? I like them, especially if they update the url so you can go back to the same spot. Everyone loved it when Reddit Enhancement Suite did it way back in the day, but now Facebook does it so it's evil.
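That URL-updating behavior is cheap to add; a rough sketch (assumes items carry a hypothetical data-item-id attribute): watch which item is in view and rewrite the address without creating new history entries.

    // As items scroll into view, record the visible one in the URL fragment
    // so reloading or sharing the link returns to roughly the same spot.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const id = (entry.target as HTMLElement).dataset.itemId;
          if (id) history.replaceState(null, "", `#item-${id}`);
        }
      }
    });
    document.querySelectorAll("[data-item-id]").forEach((el) => observer.observe(el));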
That's the React definition of "server side rendering", where HTML gets generated on the server. Real "server side rendering" would mean generating an image server side and shipping that.
"Give people consistent but random stimulus and you will be habit-forming. Getting people hooked on your product might seem like a good idea, but the tobacco industry feels the same way."
DO NOT Listen to this. Make your site as addictive as possible. It can only help you win at life. More users = more impact = more money = better life. So long as it's not porn, your website won't be as bad as tobacco.
I think this is a little tone deaf to the target audience. Not all developers want to aspire to extracting the maximum amount of resources from users. Some, especially beginners who want to code, just want to build something and build it well.
[0] https://news.ycombinator.com/item?id=23563525