It's fantastic to see people share their knowledge, and turn theory into practice. It's very easy to spout on about best practices and good ideas without demonstrating how they can be implemented in the real world. And this application does demonstrate some good ideas.
...But some bad ideas also. I think one problem we have in the frontend world is that developers of very small, very simple sites are pulling in solutions entirely unsuited to their scale of problems. They think they are being careful and investing in insurance against rapid expansion, but what they're actually doing is overspending on the wrong investments, making things expensive that don't need to be expensive, and betting against agility. This seems like a mistake.
I encounter too many teams who are building web applications by bundling a cornucopia of technologies they don't understand and allowing "best practice" examples on GitHub and Medium to do their decision-making for them. They disengage critically from their technology choices, which has the short-term benefit of getting them moving faster, but the long-term cost of never developing their research and evaluation skills.
This is not your fault, of course. People always want a silver bullet, a framework prefabricated and ready for drop-in domain code. And to be fair, most websites are very like each other. I just wish software developers chose their tech a little more critically.
That's just how we learn to apply things correctly. If the first time you use a tool is to build a scale-appropriate project, you will stumble and likely fail.
Imagine you wanted to sail across the Atlantic and, to acclimate to the necessary tasks, you started small: sailing down the Thames, then sailing to Calais, then sailing farther out into the deeper ocean, before deciding to take the big trip. Now imagine you posted a blog post about this, and the first thing someone pointed out was that you could take the ferry to Calais.
> That's just how we learn to apply things correctly.
That's fine when we're writing personal projects, but don't professional gigs require a greater standard of care?
Imagine you need to transport water to your home. You start by consulting best practices and blogs and conclude that the only way to take water somewhere is by helitanker: a dual-rotor helicopter with an enormous hold. You consult a building planner to construct a helipad on your roof. You seek out an expert helicopter pilot. You spend several weeks poring over different models of helicopter.
You can do a lot of iteration and eventually reach the right result, but that still won't stop your house burning down in this analogy.
As a teacher, I appreciate projects like this. The folks in my classes are "amateurs" learning to code. I can, and do, teach them quick ways to build websites, but that won't help them get jobs in the industry. Many people can build sites with those tools. Companies that hire developers are looking for more advanced skills, and examples like this help (hard working) students learn how to use them. It takes my students at least two quarters to get productive with a tool like React, but once they know it, they have a valuable skill and have hopefully learned some important lessons about the proper way to create a robust architecture.
It's easier to espouse the virtues of code simplicity than it is to actually practice it. However, it doesn't help that "Less is more" isn't one of the core tenets of Javascript programming.
Be that as it may, praise the author for creating a fully-functioning case study from which audiences could learn and discuss.
For me, the technology used in projects like these is far more interesting than the end result itself (I'll probably never actually run it, but I took a good look at the source code).
I think part of it is that the newer generation of developers seems to think in terms of frameworks and the like as building blocks. Most of the time the solution is orders of magnitude simpler.
EDIT: Not sure why my comment is being downvoted for pointing out that this exact hobby project has been continually posted in the last couple of months.
re 1) The assumed unworthiness is reading too much into it. Plenty of people point out previous posts. These are good both for reading previous comments and for assessing the spamminess of the post/poster -- how much so is up to the readers themselves to decide. I'm glad if someone has done the research for me already.
Online amnesia seems to be a problem with the design of karma-based forums. I used to frequent a London bike forum where folks were notorious for answering common questions with "UTFS" (use the fucking search). It was refreshing to be among more "responsible" forum folks.
Tbh I upvote these comments all the time because it's interesting to follow the previous threads. The only difference between this comment and others is that they said "duplicate of:" instead of "previous discussion:".
I understand the subtle difference between the two deliveries, but I wouldn't be so dismissive.
The original HN is one of the fastest loading sites I regularly visit. You fixed that pretty good.
Disclaimer: I'm on terrible internet connections most of the time and it really makes you hate all this modern web crud. Often "dynamic" sites don't load at all because while the initial HTTP request eventually gets through, one or more of the following XMLHttpRequests or whatever fail, and a lot of the time there is no handler for retries, or no error handler at all, or moronic timeouts (either far too long or far too short).
I'm guessing nobody tests for this, because everyone develops on fast, reliable internet connections and it's not super easy to simulate, although there are tools, dummynet probably being one of the oldest: https://gamedev.stackexchange.com/questions/61483/how-can-i-...
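If you want to script it yourself, here's a rough sketch (this assumes Puppeteer; the latency and throughput numbers are invented for illustration) that throttles everything a page loads via Chrome's DevTools protocol:

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Open a raw DevTools protocol session for this page and throttle it.
      const client = await page.target().createCDPSession();
      await client.send('Network.emulateNetworkConditions', {
        offline: false,
        latency: 500,                       // added round-trip latency in ms
        downloadThroughput: 50 * 1024 / 8,  // ~50 kbit/s down
        uploadThroughput: 20 * 1024 / 8,    // ~20 kbit/s up
      });

      await page.goto('https://example.com', { waitUntil: 'networkidle2' });
      await browser.close();
    })();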
Is it able to simulate spotty connections, where the connection keeps dropping in and out? Because I find that tends to be the thing that websites and applications most often don't factor for properly.
To be fair, it does seem to be pre-loading the pages the links at the top of the page lead to... Except then it also makes a request which ends up being ~7-8kb in size when you switch to a tab for the first time.
This is a pretty good example of older websites vs 'web 2.0' I guess.
Is this because it's a quickly made demo vs a refined production-ready site, or is it a direct result of the way the sites are made? Can you use GraphQL, React, Express, etc. and make a very fast website?
Handling timeouts and refetching failed resources is a job for the browser, which is quite good at it nowadays. There is no need to badly replicate this tedious logic (with extra bugs) in JavaScript for each and every website.
You might need to retrieve your document content from a database. The CDN can phone home every so often to check its caches are still valid. This is basically how we served The Guardian.
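Roughly, as a sketch (Express; the data layer and template helper here are hypothetical stand-ins, and this is not the actual Guardian setup):

    const express = require('express');
    const app = express();

    // Hypothetical stand-ins for a real data layer and a server-side template.
    const db = { stories: { findById: async id => ({ id, title: 'Example story' }) } };
    const renderStoryPage = story => `<html><body><h1>${story.title}</h1></body></html>`;

    app.get('/item/:id', async (req, res) => {
      const story = await db.stories.findById(req.params.id);

      // Let the CDN cache for 60s, and keep serving a stale copy for up to
      // 5 more minutes while it revalidates against the origin in the background.
      res.set('Cache-Control', 'public, s-maxage=60, stale-while-revalidate=300');
      res.send(renderStoryPage(story));
    });

    app.listen(3000);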
There are some oddly bad tempered responses here. The readme doesn’t suggest it’s necessarily the best way to implement HN. It’s not an opinion piece. It just lists a bunch of commonly used libraries, and documents rather nicely how to put them together.
I can see some value in describing alternative or better ways to do what this web app does, or in critiquing the particular way the components are used. But rants about hating the JS ecosystem or this HN implementation just make it seem like you've had a horrible day at work. OK if that's the case, we're sorry. Go out for a run or a drink.
This is a nice example of the state of the art of a React & Friends stack. Thank you for building it and sharing it. Is it supposed to be completely functional? Login doesn't seem to work for me, at least.
After 1.1 MB transferred, I experience subjectively sluggish page transitions and less-subjective FOUC on a recent MacBook Pro and Chrome over fast wifi. The features in this stack are impressive and appealing, but IMO they are not worth the general performance cost weighed by hundreds of kilobytes of JS abstractions.
I really dislike the tendency to couple GraphQL queries with components. Is this idiomatic? Even if you are comfortable with side effects in your presentation layer, I'd much rather manage data at a higher level and have a component request the data it needs from that, as opposed to actually making HTTP requests.
What happens when two components used in two totally different areas of a single page need common data? Relying on a cache feels like a hack.
I really love React but really dislike the way I see it being used these days. I think of it as a polyfill for a UI component that should be stateless and free of side effects. Not an end to end controller/fetcher/renderer.
Apollo manages your data store automatically. It uses Redux internally. Because Apollo wraps the queries and mutations, it smartly caches data and only fetches the minimal amount necessary to build the page. Apollo reduces the amount of Redux CRUD boilerplate dramatically (by as much as 2/3 or more).
There are no traps there, everything is configurable. The documentation is really clean. Highly recommend Apollo.
Once you have this stack wired up, it empowers you to be quite nimble with your development.
Arguably that's its own trap. Convention over configuration has saved millions of man hours in e.g. Rails projects, but a lot of JS land involves sinking a lot of time into the opposite.
You can just stick with the defaults until you run into issues or need advanced features like optimistic updates or real-time communication via websockets. Apollo also makes it easy to long poll if you're having consistency issues for some odd reason.
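For example, something like this (a sketch using react-apollo's HOC API; the query, mutation, and field names are all invented):

    import React from 'react';
    import gql from 'graphql-tag';
    import { graphql, compose } from 'react-apollo';

    const STORIES_QUERY = gql`{ stories { id title upvoted } }`;
    const UPVOTE = gql`mutation Upvote($id: ID!) { upvote(id: $id) { id upvoted } }`;

    const StoryList = ({ data, upvote }) =>
      data.loading ? null : (
        <ul>
          {data.stories.map(s => (
            <li key={s.id}>
              {s.title} <button onClick={() => upvote(s.id)}>upvote</button>
            </li>
          ))}
        </ul>
      );

    export default compose(
      // Re-run the query every 5 seconds -- the "easy long poll" option.
      graphql(STORIES_QUERY, { options: { pollInterval: 5000 } }),
      // Optimistic update: the cache is patched before the server responds.
      graphql(UPVOTE, {
        props: ({ mutate }) => ({
          upvote: id => mutate({
            variables: { id },
            optimisticResponse: { upvote: { __typename: 'Story', id, upvoted: true } },
          }),
        }),
      })
    )(StoryList);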
Declaring what data your UI binds to is not a side effect, so co-locating GraphQL queries with your component doesn't necessarily introduce side effects. You don't have to co-locate GraphQL mutation queries to your component, you can just have your event handlers (or other mutation-creating events) handled elsewhere in your app.
> What happens when two components used in two totally different areas of a single page need common data? Relying on a cache feels like a hack.
When declaring what data you rely on is as simple as a GraphQL query, it's basically like declaring what data goes in your component. It's NOT like building ORM layers directly into your UI components.
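A minimal sketch of what that declaration looks like (the schema and field names are invented); the component itself stays a pure render of its props and performs no I/O:

    import gql from 'graphql-tag';
    import { graphql } from 'react-apollo';
    import CommentList from './CommentList'; // hypothetical presentational component

    // This just describes the data the component binds to; Apollo decides
    // when and how to fetch it.
    const COMMENTS_QUERY = gql`
      query Comments($postId: ID!) {
        post(id: $postId) {
          comments { id text by }
        }
      }
    `;

    export default graphql(COMMENTS_QUERY, {
      options: ({ postId }) => ({ variables: { postId } }),
    })(CommentList);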
If you include enough in your definition of side effect, everything must have a side effect. Otherwise you wouldn’t be doing it. A pure function that adds two numbers and returns the result has left your system in a different state than it was in before it ran.
Data binding has side effects and the implementation of the thing doing the binding may cause things to happen, may cause some state to change. But I still wouldn’t consider the declarative data binding itself to have side effects.
The components are coupled to the structure of the data, not the graphql query. Where would you rather make the http requests? Somewhere in the redux layer? That's basically what Apollo client does anyways, it just reduces boilerplate.
Although I think coupling components with data structures is the smaller benefit. The real benefit of graphQL is not the client architecture, the real benefit is decoupling frontends from backends, unifying disjointed backends, and efficiency.
In the name of efficiency though, GraphQL is projecting a graph into a tree, which involves denormalizing and duplicating data in the response. This is where Netflix's Falcor is cleaner. Check out my little GraphQL demo repo / readme https://github.com/joshribakoff/graphql-demo where I write more about the benefits of using GraphQL, and also the downsides.
So basically you prefer something more along the lines of Model View Controller. Me too. When React was first released it was advertised as “the ‘V’ in MVC”. Unfortunately the developer community took it in a different direction.
It's not really coupling queries with components, only fragments. You can build a query with fragments so the query is definitely decoupled.
If two components used in different areas needed common data that would get pieced together through the fragments. Just depends how you architect your app.
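A rough sketch of the fragment approach (the schema and field names are made up): each component owns a fragment, the page-level query composes them, and both components read the shared Story data out of the same normalized cache.

    import gql from 'graphql-tag';

    const storyRowFragment = gql`
      fragment StoryRow_story on Story { id title url }
    `;

    const storyScoreFragment = gql`
      fragment StoryScore_story on Story { id score }
    `;

    // One query for the whole page; the two components' common data is
    // pieced together from their fragments instead of being fetched twice.
    export const FRONT_PAGE_QUERY = gql`
      query FrontPage {
        topStories {
          ...StoryRow_story
          ...StoryScore_story
        }
      }
      ${storyRowFragment}
      ${storyScoreFragment}
    `;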
GraphQL is a very nice spec for building APIs that have many sources of data and then building a modern frontend with proper caching etc. on top of it.
The ugliest part of it, or the one requiring the most hacky solutions, is optimising queries. For example, there are some nasty N+1 traps or "oops-I-joined-your-entire-database" problems when using SQL.
JoinMonster helps with this, but it is a bit too much of a "full solution" or framework to just quickly use. You'll also lose a lot of control over your queries which was a big reason for me to move from Django/Rails/etc. style tools to trying out GraphQL.
This project seems to avoid databases entirely and it seems to have an N+1 queries problem when fetching comments. Each comment is fetched with a separate API call and a single HN Post can easily have hundreds of comments... even if cached, this is a problem.
The SQL "WHERE IN (list of IDs)" query combined with smart caching and batching (see: Dataloader by Facebook, JoinMonster) is a decent solution but requires some amount of good old manual work.
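A rough sketch with DataLoader (assuming node-postgres as the SQL client; the table and field names are invented):

    const DataLoader = require('dataloader');
    const { Pool } = require('pg');   // node-postgres; connection settings via env vars
    const db = new Pool();

    const commentLoader = new DataLoader(async ids => {
      // One round trip: a single WHERE id IN (...) for every comment
      // requested in this tick of the event loop.
      const { rows } = await db.query(
        'SELECT * FROM comments WHERE id = ANY($1)',
        [ids]
      );
      const byId = new Map(rows.map(r => [r.id, r]));
      // DataLoader requires results in the same order as the input keys.
      return ids.map(id => byId.get(id) || null);
    });

    // In a resolver, hundreds of comment fetches collapse into one query per batch:
    //   Post: { comments: post => commentLoader.loadMany(post.commentIds) }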
tldr; GraphQL is nice but has its own set of problems. Still no magic bullets.
Actually, my point was about losing control over queries when using JoinMonster (say, you wanna do something fancy like Postgres' to_json, array_agg that JoinMonster doesn't support).
However, I totally understand the confusion in my sentence and actually this thing you're talking about is also useful to me :)
Can't you translate GraphQL queries to JoinMonster/DataLoader/Haxl calls? It's up to the implementor of the GraphQL <-> backends layer to build these optimisations, and you could reuse those libraries for that.
Haxl seems particularly suited for this (I think it's the same thing as Dataloader, both from facebook?) as it can efficiently batch and cache queries to multiple sources at the same time. It seems like a good layer between GraphQL and your N data sources.
Don't have tons of experience with GraphQL, but doesn't one write the resolver methods themselves? GraphQL just works with the data you've fetched, and as always it's up to the developer to know how to use the tool properly.
The way in which Apollo + GraphQL + React work together is just awesome sauce. I love that you can simply declare your data dependencies with a GraphQL query string and Apollo just fetches, wires up and injects the data into your component. I had the pleasure of using this stack for a project once. The speed at which you can move once you have the stack in place and a full grasp of it is pretty mind blowing.
Do you know a good learning resource that got you there? Esp. Apollo and GraphQL. I feel a little overwhelmed, similar to how I felt when I encountered flux/redux the first time. I figured out it was easier than I thought after a while, but the first step was hard.
I think this is really nicely done. It's probably a little over-engineered for a project of this size, but it shows the power of using all of these libraries together, and I'm sure it was a great learning experience.
That assumes the boilerplate is indeed necessary. Many comments point out exactly that: this very page shows that the boilerplate for a really fast page is just an opening and a closing html tag.
The latency of clicking on the "comments" link is greater than actual HN (about 0.9s) for an example I tried, of which about half is spent in the POST network request, another half in JavaScript and layout/rendering/painting. For actual HN the whole process is well below 0.5s.
How did the web get so complicated. Just look at that diagram.
Separation of concerns went right out the window, didn't it? I'd like to see a Hacker News clone using 300 lines of server-side Python that works in browsers without JavaScript.
I don't know, it feels slower than the original, because when you click on comments, it first loads an empty page and the actual content loads 1-2 secs later, which looks very jarring.
I'm curious if we'll see a highly-opinionated framework arise that encapsulates all of the technologies used here under the umbrella of a single configurable library... or does that already exist?
For a fully integrated GraphQL option I looked into Gatsby.js, which started out as a static site generator. It is still this at its core, but the move to GraphQL for everything seems like a good one: https://react-etc.net/entry/gatsby-is-a-static-site-generato...
Impressive first load w/ < 1 sec to first render! The 400kb app.js takes ~10 sec to load on my connection but the page is functional during that time which is nice.
Is this served from a CDN?
Great job putting together a robust example application.
As someone who is trying to learn modern web development, things like this make me feel overwhelmed. So many tools to learn and integrate! Why these tools over other ones? So many tools to choose from on every level of the stack!
What the hell is the "right way" to build a web app nowadays? That's what I want to learn. I've figured out how to put up a static page pretty well, but this is so different. Is this site what is best practice now? Or is it one of many choices?
Awesome job. This is not a knock on JavaScript in particular, but the amount of brain bandwidth and dependencies to launch a modern web app is getting a bit absurd. I'm turning into that old grumpy hacker who prefers the simple good ole' days of LAMP. :-)
It took me a long time to realize that "launching the app" is not really what these tools are for. You can create perfectly fine awesome web apps with just HTML/CSS/ES5.
But as it gets more complex, or you add collaborators, or you feel how much easier/faster certain things would be if you could operate at a different level of abstraction... the world does tend to pull you towards these tools. None of them are necessary though, they just trade a little ramp at the beginning for a lot of saved headaches later on. I still don't use half of this stuff, but I recall recently debugging an issue that was a couple of clicks into an SPA (in an interview no less) and I sure wished I had jumped through the hoops to get hot module reloading on that one! Likewise, I'm kind of ready for tests now, to make sure the rendered HTML is what I expect in various scenarios, rather than walk through the app like a paranoid moron after every change.
I think the way to stay sane in modern front-end development is to not overthink the tools - if your project is already using them, great, use them the right amount for your team. If you don't use, say, Redux, but your state management is causing lots of pain, hey, maybe it's time for Redux now. If you already write only beautiful ES5 that runs correctly and is easy to reason about, who needs Babel.
My favorite situation is when somebody else picked the tools and the build process and I don't have to give a crap about evaluating all the options. Knowing and caring about all of the pros and cons IS a lot of bandwidth!
That list is about the same length as it would be in any other language; it's just that the ways to do it aren't built into the language. You could eschew a lot of these things, but you'd just end up building them yourself.
Flow? Unfortunate that it's a separate thing, but it gets you types, which will be a relief to most programmers.
Yarn? .NET has Nuget, Ruby has Gem, Go has dep, etc.
Passport? I sure as hell don't want to do authentication all on my own.
Jest? I mean, you don't have to test but everyone should.
without commenting on the dependencies specifically, and simply evaluating the result sent to the client:
- 1.1 MB of assets (pretty much all of it scripts).
- 300ms of JS execution (on fast desktop in Chrome 63)
- document.querySelectorAll("*").length = 742
in terms of perf, i'm sorry, but this is nothing to be proud of. even a fast vdom lib without SSR can do this in 10% the payload (or less) and 20% the scripting time (or less). why people are amazed at the speed of this impl is rather odd to me. i suppose it works well to demonstrate a large stack, but what you end up paying for that stack is plainly evident here.
I see these kind of posts often. What's a reasonable benchmark? What useful projects have only a handful of dependencies? You could say the same about a project in any language -- it's just more noticeable in the JavaScript ecosystem because best practices are still developing.
Definitely a lot of stuff going on. I take that as a sign of maturity rather than a problem. It's not really any different from C++ or Java or Python. You've got to learn all the frameworks and tooling as well. There's an equivalent for many of those packages in other languages.
You are partially right but you have to remember that each of these is addressing a generally large pain point. So by embracing a standard solution, you can abstract that point away to some degree. This tradeoff is called progress.
Just from looking at the list of libraries/frameworks/tech used, it seems like it was over-engineered. It was probably a fun project to work on, or maybe it was a good learning experience for the developer, but it seems like it's a wee bit too over-complicated. If someone wanted to release something like this to actual customers, I don't think they'd want to build it like that.
This is coming from someone who started out in the front-end with JS and SPAs. Why add in the responsiveness (and overhead) of a SPA when your users won't really notice the difference?
It's just one way to do things, used to demonstrate interesting new tech. It's meant as a helpful pointer similar to TodoMVC and not as a demonstration on how to build Hacker News "better".
I find you need different tools for different jobs.
I personally build 80% of my apps using Rails or Django, with vanilla coffeescript or javascript + a few libraries for graphs. That works for most business applications, and pretty much any information app (which IMO are usually the most valuable by dollars)
Rails is easier for most people to develop in (even if they are familiar with JS or Django) within a week of starting. Usually after a 5 minute discussion I can show them why it's amazing and everyone switches over.
However, at a point, react and the like becomes easier when you want a prettier customer facing application, or when you want only part of the web page to update in real time (Personally, I can do this in Rails, but many find react easier). Point being, in those cases a framework that's literally built for that might be easier, and that's when it should be used.
Honestly, I think this whole JS craze is annoying, and I try to run no javascript. Because of that, I might be more sensitive than most and I design websites accordingly (only using JS as necessary)
I understand that HN is a popular web property, but these projects amuse me because Hacker News is probably the worst example of something to be improved / accelerated by these JavaScript technologies.
Best examples are pages with complex & persistent state (like a chat window open on Facebook as you navigate around).
Hacker News is a really nice example when learning a new technology. It's simple enough that you spend very little time explaining the domain but complex enough that you can demonstrate how to deal with some of the more thorny issues that the technology is designed to address.
https://www.howtographql.com/ is a very comprehensive tutorial demonstrating how to use GraphQL with different frontend (Vue, Ember, React) and backend (js, java, elixir, python, ruby, graphcool) technologies.
The end result is not as feature-complete as this entry though as the main focus is on learning the technologies.
This technology stack really feels like the future, awesome work! React & GraphQL go so well together, I love Relay's and Apollo's approaches to couple the data requirements of a component with the component itself! This is such an awesome workflow for working with an API.
Also, a shameless plug: If you want to get started with GraphQL, check out https://www.howtographql.com. There's a basic introduction to GraphQL and many hands-on tutorials, e.g. for React with Apollo or Relay (including videos).
I went down the path of learning Relay instead of Apollo. I'm regretting this decision. Relay is too restrictive, and the relay compiler is far too sensitive for most server side endpoints.
What put the nail in the coffin for Relay, for me, was watching a talk by the dev team where they said they don't want to create proper documentation because they feel like that's creating a contract with the community to support the API. It's somewhere on YouTube; sorry, I can't find it now.
Interesting. I've worked with Relay almost since its release, more recently on the modern branch, and haven't run into anything that's been too restrictive. You have an element of thinking about things in "Relay" terms (in the sense of how to structure them), but once you've figured that out, things just work (TM).
I'm curious about the second part of your statement, the compiler being too sensitive for your server side endpoints, can you elaborate on that a bit?
Sure. The first issue I came across was that the relay compiler expects a strict specification when mutating data, one that wasn't compatible with the server library I was using. Expect to write your own server side scaffolder to get around this.
The second big issue was that relay modern decoupled the networking logic, so having a series of chained queries is something you either have to handle yourself or write your own networking logic for. And because creating dynamic queries can't be done with relay it's a bit annoying that this logic isn't handled automatically for you.
My biggest gripe happens to be the lack of flexibility around dynamic queries. You can't define the data you want back at run time, which feels like it defeats a major point of GraphQL.
Apollo supports dynamic queries with higher-order components. It might be a case of the grass looks greener on the other side, but I felt as soon as you try to do something bigger than a todo app with Relay you start to feel the restrictions.
I've been working with a React+GraphQL+Apollo+PostgraphQL stack as of late and it has been an absolute joy. Apollo is super robust and has excellent documentation, and PostgraphQL _just worked_ out of the box against an existing Postgres database. Fetching non-trivial amounts of data and displaying them for the user has never been this easy (nor this fun).
Fantastic resource! The key point of this is it provides working example code to start a project from. Everything is MIT Licensed so that means you get to download it and just start hacking.
I would love to see some accompanying video tutorials!
Of course people are also free to post their own examples using their own favorite technologies. I would love for 'HN Clone' to become a benchmarking standard for comparing stacks. It seems like a bit too much work for such a purpose, but that's the point! :-)
I don't think authentication is actually working in the demo, by the way. Don't be tempted to actually log in to your own account! But I tried creating a new account and it failed.
I really like the Apollo stack, but if you're thinking of using it in production right now, you should wait a little!
There's a massive bug where if your graphql outputs 1 small error, the whole component does not get loaded.
Imagine a component that lists a bunch of items. If 19 items return correctly but 1 has an error, the "data" prop doesn't get passed; all you receive back is the error prop. It's pretty shitty for that.
The kind of query where it fails (and where it especially hurts) is one like `getLatestNotifications`. It would be a lot of cruft added to prevent this kind of bug.
So, yes, you know which one is bad, but it's not a query that's failing, it's something inside the result.
One of the things that interests me about client-side approaches like this is performance as perceived by the user.
Even though I suspect the times taken between real hacker news and the clone to load comments are similar, real HN feels a lot faster because the page only loads once the content is ready, rather than loading the page skeleton and then blinking the actual content when loaded.
Excellent. So we can submit links there as well? Actually, here is another one which collects HN data but presents it in another manner (it uses AngularJS for quick modelling). http://www.pxlet.com/
Question: I see that redux is advertised for state management but I don’t see any calls to dispatch or import statements for redux. Is redux actually used in this project?
I understand this has a certain cool factor but I have to wonder why we need all of this over simpler approaches like server side rendering and templates.
4) Few people care what your web app is built with. Your differentiating feature should be how it's used, not how it's put together. In this case, it works exactly like the original, just with different internals. I don't see the point.
Given that this is meant to be boilerplate for building apps, it makes sense to install it with devDependencies. However, let's go along and install it with the `--production` flag. It doesn't get much rosier. Close to 1M SLOC.
- TypeScript, CoffeeScript, C/C++ headers/source files, basically any language other than JavaScript, are probably part of that module's source, and not the compiled JS output that is used.
The algorithm is pretty simple - points from votes or flags, but weighted by time, so that new stuff floats to the top for a while. You can use a remarkably simple version of it and it's still very effective.
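For illustration, the commonly cited approximation (the exact constants and penalty factors used in production aren't fully public, so treat this as a sketch):

    // score = (points - 1) / (ageInHours + 2) ^ gravity
    function rank(points, ageHours, gravity = 1.8) {
      return (points - 1) / Math.pow(ageHours + 2, gravity);
    }

    // New stuff floats up, then decays: a 3-hour-old story with 50 points
    // outranks a 10-hour-old story with 100.
    // rank(50, 3)   ~= 2.7
    // rank(100, 10) ~= 1.1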
I agree the community (with all its flaws) is far more important than any tech used.
Clicking on comments results in a weird flicker effect where the page appears empty for a split second before being filled with content.
Also, the comment collapse button doesn't seem to work.
Is there really any benefit to implementing HN as a JS-driven web application? IMO, Hacker News is simple enough that implementing it as a JS-driven app actually makes it slower than the original.
The weird flicker/initial empty page might be because all the components are being mounted synchronously on page load. The new version of React introduced async rendering of new components: https://medium.com/behind-the-screens/dont-load-all-your-web...
Are you sure it is not due to the fetching of content from the server rather than the UI synchronicity? I'm quite certain it is just latency for the download. The components might be fetching that data during their mounting if the architecture was set up that way, but the actual mounting shouldn't be the cause; React's UI is much faster than latency caused by that alone.
It would be better if the components showed a waiting indicator during the download, if the components are responsible for the download. Alternately, the page itself could show that loader before mounting its sub components. Actually, there are many "alternately"s because frankly this site's approach isn't ideal.
Yes, the benefit is that you can prefetch other page assets, then client-render new pages using only GraphQL queries. Download the whole site once and it runs more like an app afterwards.
Perhaps a good alternative to the flicker effect would be to put loading indicators in place of each UI component as it loads so you can see the structure.
HN is so simple that introducing such defects in the name of some theoretical abstract benefit shows a lack of care about the user experience and the product.
I see this attitude all the time and I find it unprofessional.
Hey, so, uh... all you need to clone HN is Postgres and some HTML/CSS and a smidgen of JavaScript. Good God, must we pull in the entire JavaScript ecosystem for a simple forum now?
It's called a 'reference architecture'. It's intended to be used as a teaching tool and learning tool for a stack that has tremendous benefits for applications more complex than HN. Since everyone is familiar with HN on HN, they can look at that code and understand how it fits in with the big picture without having to learn a new app PLUS a new architecture.
Good god look at all these over-engineered todo lists! All you need is a piece of paper and a pencil. Maybe a pen if you’re on the immutability bandwagon.
This is why the JS community is so fucked. They take what should be a simple website and needlessly complicate it by (poorly) reimplementing what web browsers can already do.
This really isn't why the JavaScript community is fucked. It's just a reference project for others to learn from. The spreading of information and ideas through code doesn't mean anyone's fucked. Why fucked? Are you reading into the fact that it's a young ecosystem with a turbulent flow at the moment? JS is just finding its groove. It's an always developing language that has the difficult job of being the language that runs on a fuck load of systems, within the enormities of the internet.
I'm on the fence for this one. I completely agree that the JS community is, as you say, "fucked": there are so many fast-moving framework redesigns that make for huge incompatibilities during these paradigm shifts. Super annoying community.
However, I love the advantage of exchanging the most minimal amount of data between a browser and the server for each additional page load (ie: JSON response).
Damn, those load times. This is on all my tests – fast internet or dialup, 2016 desktop or ancient android phone – significantly faster than the real HN.
This new site is around 1MB, regular HN is around 50KB. They're not even close. It's just that navigating to new pages is faster since it doesn't take a full page refresh.
They've implemented this as a SPA so all the pages are downloading at once, not just the home page. Probably could be optimized with some lazy loading. Still, vanilla HN obviously has no framework so no need to download React/Apollo, etc. I'm gonna go out on a limb and assume that they're using this as a learning tool and a teaching tool. Just like no one needs a framework to implement a to-do list, but that's the go-to example app.
That would be the case when the bandwidth was the limit, but it’s not.
The limit is in the immense latency for opening a new connection.
Opening a network socket with TLS can take over 4 seconds on mobile, which is why ideally you’d even use an open websocket for communication instead of new GET requests. Navigating to a page in this version causes one or two network requests, in the real HN it takes far more just to check if the images have changed, the CSS has changed, etc.
Using websockets looks like overengineering because HTTP/2.0 allows a single connection to be used for multiple parallel requests.
Using React to build a simple (student-level) site like HN is overengineering too, in my opinion. If you don't want to reload the page when navigating, you can just reload HTML content with jQuery or fetch() without using client-side rendering, without draining the device battery, and without loading 1 MB of JavaScript that will take up much more space in RAM.
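For example, a rough sketch of that approach (the element id and data attribute are made up): fetch the next page's HTML and swap in just the content region.

    document.addEventListener('click', event => {
      const link = event.target.closest('a[data-nav]');
      if (!link) return;
      event.preventDefault();

      fetch(link.href)
        .then(response => response.text())
        .then(html => {
          const next = new DOMParser().parseFromString(html, 'text/html');
          // Replace only the content area; no client-side rendering framework needed.
          document.querySelector('#content')
            .replaceWith(next.querySelector('#content'));
          history.pushState({}, '', link.href);
        });
    });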
SPA approach should be used for interactive applications able to work in offline mode, able to provide rich experience on mobile devices etc.
Yeesh, thanks for the downvotes on an actually measured result.
I'm trying this in Firefox nightly (stylo enabled, webrender not).
Even on an old phone over throttled internet, that page is faster than real HN for page transitions, but the same happens on 100Mbps WiFi with my Nexus 5X, or on LAN with my desktop.
In all cases, that site is significantly faster than real HN in loading and rendering (I can see the real HN's icon slowly load on every page refresh — that page doesn't do that).
If you get different results, you're probably using Chrome, which is over-aggressively caching.
On a slow laptop on a reasonably fast connection (university WLAN) using FF 57 Beta the clone is slower. It is slightly faster than real HN at showing the orange bar and the tan content box (which I find quite jarring), but the text then pops in noticeably later. Some pages also appear to first show error pages that then get replaced with content. (This even happens on repeat visits to a page, where it seems to cheat and take the comments from cache instead of checking for new ones.)
While I don't want to belittle this accomplishment, the magic of Hacker News is the community that is on it. As a piece of technology, HN isn't all that impressive.
This kind of feels a little over-engineered.
Edit: To downvoters, care to explain why I am wrong?
The project is extremely well done. It is impressive in its documentation and execution. But in the real world, done, launched, and with traction is more important than being well engineered. The existing implementations of upvote-style sites work just fine.
When Digg.com -- one of the first upvote-style boards to go mainstream -- launched in 2004, dozens of copycats came up. Few succeeded. Why? Community.
Also, we are moving away from the decentralized web, unfortunately. For instance, few people run their own Wordpress blog now. Most use Blogger or Medium. I don't see how projects like this fit in in that world.
Now, if rather than being an HN clone it added the ability for people to create and moderate their own boards, that would get my upvote.