If you're interested in a reactive template library that doesn't require a compiler for non-standard JavaScript syntax, check out a library I've been working on for a little while now, lit-html: https://github.com/Polymer/lit-html
> Nearly identical. lit-html uses `<template>`s and cloning, so that it doesn't have to do any expensive VDOM diffs.
So… it's not doing reconciliations and is just replacing the entire tree on every render, losing things like cursor position and forcing the browser to re-render and re-layout the entire thing?
I really want a way to get a generated voice recognition transcript for all that content people really think needs to be in video form. 2x playback is still 14:45 too long in this instance.
They both go into how lit-html works and how it doesn't just throw away and recreate the DOM on re-renders, though I admit the docs could still be better.
2. Be careful about wording: React does effective DOM updates, but effective does not always mean fast. What I mean is that compiled templates are usually much faster than vdom diffing along with all of that destructuring and other unoptimizable stuff (Prepack might help, but it's far from being usable).
Statements without any proof are more valuable than "benchmarks with zero information value"?
> what I mean is that compiled templates are usually much faster than doing vdom diffing
If they are usually much faster, why is it so hard to make them perform faster than vdom libraries in this benchmark? Or in any other benchmark? Please just show me something that I can measure.
Template strings lose all the actual benefits of JSX, namely that you can have your editor parse it for validity and type hinting in the same way as if it were all createClass code (but more convenient).
It ultimately comes down to Amdahl's law: doing something in a browser requires updating the DOM. Since you always have to do the DOM processing, the only way adding the extra virtual DOM work will be a net win is if it makes it easier to avoid unnecessary updates or allows something like ordering updates to avoid triggering repeated layouts / reflows[1].
Since updating the DOM is relatively fast in modern browsers it's not particularly hard to find cases where the work the virtual DOM has to do cancels out any savings.
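For example, here's the kind of update ordering that matters (a hypothetical sketch, where `items` is an array of elements already in the document):

    // Interleaved reads and writes force a layout pass on every iteration:
    items.forEach(el => {
      const h = el.offsetHeight;          // read: forces layout
      el.style.height = (h * 2) + 'px';   // write: invalidates layout
    });

    // Batching all reads before all writes triggers layout only once:
    const heights = items.map(el => el.offsetHeight);
    items.forEach((el, i) => { el.style.height = (heights[i] * 2) + 'px'; });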
You have to do the diff, which is a thing. lit-html doesn't do the diff because it knows from the template what parts actually change. No need to diff the parts that never change.
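Roughly, the idea looks like this (a minimal sketch, not lifted from the docs):

    import { html, render } from 'lit-html';

    // The static markup is turned into a <template> and cloned once;
    // only the ${name} expression is tracked as a dynamic part.
    const greeting = name => html`<h1>Hello, ${name}!</h1>`;

    render(greeting('World'), document.body); // clones the template
    render(greeting('HN'), document.body);    // updates just the text node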
Then you need to send lit-html over to the client also. This would be a great alternative if one doesn't already use Babel, but what's the selling point otherwise? Is there an easy way to combine lit-html's render with the HyperApp one?
IMO, the problem of composing renderers is solved at the component level. Each component should be able to freely choose its rendering library and control its own encapsulated DOM without interfering with other components, or leaking its choice of template library to the outside.
Web Components and Shadow DOM make this possible. You can mix and match components that use lit-html, Polymer, Preact, etc., and most likely HyperApp.
I think within your own components it makes sense to stick with a single library setup for this reason, but at least if you want to use a third-party component you don't have to rule out components that use different rendering machinery. You just have to weigh that additional download penalty.
That's only because Polymer 1 and 2 installed themselves on the `window.Polymer` global, so they would conflict, and Polymer 1 used the old Web Components v0 spec. Polymer 3.0 doesn't write to any globals and can intermix with LitElement, other Web Components libraries, and future Polymer 4.0 and beyond.
That's a great step forward :) And how about using two Web Components that depend on a third component? For example, what if I'm using two third-party components that both depend on LitElement, but on different versions of LitElement? Will those still conflict?
We use Hyperapp to power our most complex UIs (decision trees, onboarding sequences, etc.). We didn't need any of the existing React ecosystem for that, and by removing React, we saw several benefits:
1. Smaller library for faster page loads
2. Simpler API, docs and library made it easy to get started and understand what's happening behind the scenes as well as debug any issues we faced
3. We aren't supporting a project run by Facebook, which I personally view as a good thing given Facebook's many previous issues. Facebook's patent clause (while it no longer exists) was a factor in our original decision.
I would choose Hyperapp again for my company, and I use it for personal projects as well.
It came down to preference. I find HyperApp's codebase simpler to read/reason about (it's a pretty straightforward single file), and I prefer its API over React/Preact's.
Because Preact and Hyperapp are solving slightly different problems.
Hyperapp is Elm-like state management (and soon effects and subscriptions in 2.0) on top of an ultra-lightweight virtual DOM diff engine.
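For anyone who hasn't seen it, the core API looks roughly like this (a minimal counter sketch, assuming the 1.x API):

    import { h, app } from 'hyperapp';

    const state = { count: 0 };

    // Actions return partial states; Hyperapp merges them and re-renders.
    const actions = {
      up: () => state => ({ count: state.count + 1 })
    };

    const view = (state, actions) =>
      h('div', {}, [
        h('h1', {}, state.count),
        h('button', { onclick: actions.up }, '+')
      ]);

    app(state, actions, view, document.body);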
Preact is a React 15 clone at best. Also check out https://github.com/NervJS/nerv for another React clone that is closer to React 16 (but still no fiber).
Hyperapp is not optimized to be the fastest framework at the expense of worse developer experience. Having said that, we're definitely working on improving our runtime performance (see https://github.com/hyperapp/hyperapp/issues/499). So much to do!
I want to point out that while these benchmarks are very useful to detect underlying, potentially serious runtime and memory performance issues in your algorithm/framework, the implicit idea that even the slowest framework according to this list (e.g. choo) is a poor choice or inadequate for frontend development is ridiculous (the js-framework-bench creates > 80,000 nodes).
Please don't do that to your users, regardless of the framework you are using. Even the most complex user interface will have < 10,000 nodes. Tables/grids may get you there faster, however.
Still, in the case of Hyperapp we're talking about 100 to 200 milliseconds slower in the worst test (i.e., partial update) for a worst-case scenario.
This is an old benchmark; we're in the same ballpark as React now. Hyperapp is also not just a virtual DOM, but a state management "all-in-one" kind of thing.
I don't even get why it is important that it's 1kb. Give me a library with a great API that's easy to use. Nobody cares if a library is 1kb or 100kb (minified).
Don't assume that everyone is happy with whatever level of mediocrity _you_ think is acceptable. It's because of developers prioritizing developer experience and other baggage over user experience that I despise my mobile web surfing experience.
Plenty of people do care about frontend performance (as evidenced by the plethora of efforts ranging from small alt vdom libs by solo devs to large corporate efforts like AMP or m.uber.com[1])
A large library is perfectly fine if it provides enough functionality to pull its own weight. But these days just about every site on the web has dozens if not hundreds of bloated libraries for the stupidest things, and large bundle sizes have become a pretty good indication of wrong priorities on the developer's part.
And I, as a user, am left waiting several seconds for pages larger than the original Doom executable to load. It's gotten so bad that my wife had to be selective about when to use her phone, because browsing normally without wifi from Starbucks etc. would get her over the plan limit by mid-month. I mean, how much browsing are you really supposed to be able to do when every page is several MBs of JS alone, and you have a 300MB/mo plan to work with? Not every country has cheap/good mobile plans.
Loading time doesn't always result in complaints (that depends on the user's expectations) but has a significant effect on likelihood of repeat visits, length of visits, and amount of interaction.
Some of those results refer to Hyper, the Electron-based terminal app. Other results refer to Hyperapp-like libraries (not Hyperapp). The most recent submission related to Hyperapp was 8 months ago, and Hyperapp has come a long way since then.
Here is another angle to this: go into React-type submissions and ask them to stop spamming HN with those?
Show HN: 1 KB JavaScript framework for building front-end applications
216 points jbucaran 8 months ago 42 comments (https://github.com/hyperapp/hyperapp)
Show HN: 1kb JavaScript library for building front end applications
187 points jbucaran a year ago 40 comments (https://github.com/hyperapp/hyperapp)
You can define components as simple functions and avoid passing things down through attributes. If you use TypeScript with it, the compiler will always make sure you pass the right arguments to the component, so it's very easy to compose your UI.
I also avoid keeping state on components. I know this is controversial and it requires a bit of overhead. However, the simplicity offered by having all state in a single object is worth it for me.
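The pattern looks something like this (a generic hyperscript-style sketch; `h` stands in for whichever framework's hyperscript function you use):

    // All state lives in one object; components are plain functions of it.
    const state = { user: { name: 'Ada', email: 'ada@example.com' } };

    const UserCard = user =>
      h('div', { class: 'card' }, [
        h('h2', {}, user.name),
        h('p', {}, user.email)
      ]);

    // Composition is just function calls, which TypeScript can check.
    const view = state => h('main', {}, [UserCard(state.user)]);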
True, but by passing the same object as an attribute to a pair of Mithril components, I can have them share state. (Can be restricted to the scope of a parent component if desired.) Is there an additional requirement for compound components?
There isn't a built-in way to detect the type of a component. It would be useful when you want a parent component to encapsulate certain logic/behavior but not hard-code the presentation. I know there's name(), but it only works if my component is a function. Otherwise I think I need to assign some special identifier, which sucks.
I love hyperapp! As someone who thinks Elm’s architecture is ideal for building webapps, it’s great to be able to almost replicate it in JS. It’s simple enough to get started quickly but still robust enough to build actual apps and not just toys.
Hyperapp is Elm for the rest of us. I wouldn't compare it to React as they are solving slightly different problems. Hyperapp's state management is built into the framework. In this way, Hyperapp is a tad more "high-level" (abstract) than React.
> I'm implying that Elm, while great, is not as user-friendly, intuitive or easy to use as Hyperapp
First, Elm is designed to be user friendly. The designers of the language have put effort into ensuring the error messages are understandable. The messages even include information on how to potentially resolve them! Elm's error messages are much more friendly than most Javascript error messages I've seen.
Second, there is nothing intuitive about programming. If there were we wouldn't need years of training to gain proficiency: you could simply give a human being a computer and they would be able to do it in the absence of conscious reasoning. Despite our best efforts and research there has yet to emerge a language that is intuitive.
> designed with extreme devotion to details, minimalism, and simplicity
I believe Elm is also designed with these concepts in mind.
It sounds like it's not the kind of simplicity you're used to and that may be why Hyperapp is _for you_.
I interpreted the phrase "for the rest of us" as drawing a false equivalence between all programmers who do not use Elm and programmers with the same opinions and needs as you. I think I understand your point better now, but it would have been clearer if you had left out that phrase and enumerated why it's better for people who need X instead.
React is a ghetto, basically, plus the vdom has its own size.
IMO things like hyperHTML or lit-html are the way forward, especially with the new templating proposals.
Not to mention React is not really interoperable with other solutions. I'd expect Web Components to win long term, since they are built into browsers.
Is there any particular reason why? I'm curious because it's my general understanding that creating a document fragment, attaching your vnodes to that, and then pushing to the DOM is more efficient, especially for diffing.
I have heard that as long as elements aren't actually in the DOM, they are as fast as fragments, so you can use document.createElement instead of fragments. No idea if that's true or not.
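In any case, here's roughly what both approaches look like (a hypothetical sketch; `list` is an existing <ul> in the page):

    // Both build the subtree off-document, so the page lays out only once,
    // at insertion time.
    const frag = document.createDocumentFragment();
    for (let i = 0; i < 100; i++) {
      const li = document.createElement('li');
      li.textContent = 'Item ' + i;
      frag.appendChild(li);
    }
    list.appendChild(frag); // one insertion; the fragment empties into the list

    // A detached element works the same way, except the wrapper element
    // itself is inserted along with its children:
    const ul = document.createElement('ul');
    // ...append children as above, then:
    document.body.appendChild(ul);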
Hyperapp is based on virtual DOM and yes, I guess you could say it's inspired by Hyperscript in the same way as all virtual DOM based libraries or frameworks.
I'd disagree. React is very much a "Shakespeare" of javascript frameworks. It has solved UI dev by making even the most complex types of UIs predictably programmable. (Note: not including redux in this, which no developer who owns their time would use)
>>It has solved UI dev by making even the most complex types of UIs predictably programmable.
That's what they say about each and every framework. IMHO there is no Shakespeare, and all JS frameworks will eventually die once WebAssembly is in place.
Side question: Isn't it grossly inefficient that in Redux, you have reducers that return an entirely new state object? Wouldn't it be better to return some kind of data that represents just the diff you intend to make, like {op: INCREMENT, arg: 1, key: "Foo"}?
It's doubtful that state changes are the bottleneck for application performance, for most applications. Rendering and managing the lifecycle of your components usually yields much more tangible perf. gains.
It depends. If you are making a shallow copy every time via `Object.assign` or the `{...obj}` syntax, then yes, it is rather inefficient. But a) for many (most?) apps it's Good Enough™, and b) you can always use a specialized library like Immutable.js to greatly reduce the overhead.
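A sketch of the shallow-copy pattern, borrowing the hypothetical INCREMENT action from upthread:

    // A typical reducer: the returned object is new, but untouched branches
    // of the old state are reused by reference, so the copy is cheap.
    function reducer(state = { foo: 0, big: { /* large subtree */ } }, action) {
      switch (action.type) {
        case 'INCREMENT':
          return { ...state, foo: state.foo + action.arg }; // `big` is shared, not copied
        default:
          return state;
      }
    }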
I'm not sure what the benefit of the scheme you are proposing would be. Are you proposing that the diff then gets applied directly to the state (`state.foo += 1`)? If so, you would remove a powerful assumption that Redux gives you: that a state object will never change from underneath you after being returned from the store. If, instead, you would make a shallow copy of every value affected by the diff, then you haven't gained anything over Redux, and in fact have made the API significantly more complicated for no benefit (aside from maybe slightly less boilerplate for deep paths).
Oh right, immutable data structures are good for exactly this situation. I've been in mutation land for a while now. :)
However, now I'm newly confused: Yes, I'm proposing that the diff then get applied directly to the state by the Redux infrastructure, and I don't understand why that would break any important assumptions provided by Redux. Is there an example you could give of how this might create a problem?
You would have to be much more disciplined with your approach. With Redux, if I store a reference to any part of the state, I am guaranteed that the value will never be changed by Redux itself (so, unless I manually mutate it, it is guaranteed to be frozen [0]). Because of this, I can do things like safely store a value off of the Redux state inside a component, and when the state changes, I can compare the new value against my stored value to see if it has changed. If you directly mutate state, then the reference would update in-place, so there is no way to compare changes locally without first making a copy.
It also goes the other way: I can't accidentally change the state by mutating the object I get back. With your scheme, any "accidental" mutation on any part of the state will actually change the state. This opens up a whole world of bugs, because any time you want to store part of the state locally (inside a component, for instance), you have to remember to make a copy if you want to guarantee that no unwanted side effects occur. (Plus, getting into the convention of "everything is immutable" means you can generally be much more confident about passing objects around without fear of them being mutated unexpectedly.)
Edit: Now that I think about it, what you are proposing sounds pretty similar to MobX.[1] You should check it out if you haven't already. I personally strongly prefer Redux for the reasons I mentioned, but it's a pretty solid library either way.
[0]: In development, it's very easy to enforce this with Redux by simply calling `Object.freeze` on the state in the root reducer, completely preventing the state object from being mutated.
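Something like this (a sketch; `rootReducer` and `deepFreeze` are placeholder names, not Redux APIs):

    // Development-only: freeze the state returned by the root reducer so
    // accidental mutations throw (in strict mode) instead of silently
    // corrupting the store.
    const devRootReducer = (state, action) => {
      const next = rootReducer(state, action);
      return process.env.NODE_ENV === 'production' ? next : deepFreeze(next);
    };

    function deepFreeze(obj) {
      Object.freeze(obj);
      for (const key of Object.keys(obj)) {
        const value = obj[key];
        if (value !== null && typeof value === 'object' && !Object.isFrozen(value)) {
          deepFreeze(value);
        }
      }
      return obj;
    }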
That is exactly what a Redux action returns. It's a representation of a diff against the current state. At some point though, that diff has to be applied in some form, either atomically (as currently) or through mutation. Actions trigger the reducer that applies the diff to the store.
Basically, I'm not certain exactly what you're getting at.
If just the diffs were returned, you'd need to constantly reapply them to recreate the latest state. By returning the entire state the previous reference can be discarded.
Well, I'm thinking that Redux would apply the diffs to the state destructively, so I'm not sure why we would need to "constantly" reapply them in order to recreate the latest state... we would simply have the latest state on-hand already. But if we're in a context where a lot of rewinding and fast-forwarding of state is happening for some reason, or where these state diffs can't easily be reversed/inverted, then I can see why this would be inefficient.
Sorry, no, it's my honest opinion. I am the kind who is more interested in ideas and patterns than in libraries or frameworks, so I had a look, but this looks like the typical work of a design-disabled programmer. I am not saying that it is garbage, but I like neither the execution nor the API design.
Redux can be complicated, but Redux is way simpler and easier to reason about than the Meiosis patterns.
> Think Redux, MobX, Cerebral, React Context, etc. but without having library imports everywhere and being tied to a framework API. Instead, just plain functions and objects.
Hmm, but reducers are supposed to return references to new state objects, right? So clearly something more substantial than a pointer has to be created/duplicated.
Wow, there's an overwhelming total of 2 comments in the source file. The first indicates a constant value. The second lacks all context. Looks like typical JavaScript code.
This comment violates both the site guidelines ("Please don't post shallow dismissals, especially of other people's work.") and the Show HN guidelines: see "In Comments" in
https://news.ycombinator.com/showhn.html.
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
Hi. I'm the author of Mithril (one of the virtual dom frameworks mentioned elsewhere in this thread), so I think I can answer that.
To be honest, there's nothing inherently "wrong" with it. There are various techniques to implement templating engines and they all have pros and cons.
Lately, VDOM performance in micro benchmarks has sort of plateaued, and recently non-vdom systems like Svelte (an AOT compilation system) and Surplus (a KVO system) have been making some splash as potential candidates to surpass vdom performance. One could argue that it would be "wrong" or "a waste of time" to try to one-up template performance by attempting to make a new vdom implementation because existing ones are pretty much as optimized as they can be. Since there hasn't been nearly as much effort put into alternative algorithms, it probably would be more fruitful to explore a non-vdom approach instead.
Do note though that I'm talking about R&D sort of stuff above. For people building actual apps, vdom performance is generally good enough for the vast majority of real-world use cases (evidenced by React's popularity), and one of its appeals is that it lends itself to manual optimization if you do end up with a ridiculously ginormous DOM.
> Since there hasn't been nearly as much effort put into alternative algorithms
The Angular 2+ team has actually done an awesome job of experimenting in this problem space. And they moved away from generating code similar to what Svelte does a long time ago.
People really enjoy pulling this up every time a JS framework is talked about, and it's really tiring.
I say this as someone whose entire job is writing "vanilla JS" without any sort of framework!
People do have a tendency to overdo tooling and lean too heavily on frameworks to get things done, but if you're building something without one, you end up creating a lot of abstractions and boilerplate yourself to do anything relatively advanced, and you end up with a micro-framework of sorts at the end of the day, not unlike what was posted here! There's a lot of value in taking something small like this and building off of it.
Just linking to that vanilla JS site is snarky and unproductive.
Preach! We on the vanilla JS train can speak to its performance and ease of deployment. With `<script type='module'>`, `<template>`, and template literals, your app can score in the 90s on Lighthouse, as if it were server-rendered.
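The basic move is cheap template cloning (a sketch; `#row`, `user`, and `list` are hypothetical):

    // Clone a <template> defined in the page markup and fill it in:
    const tpl = document.querySelector('#row');
    const node = tpl.content.cloneNode(true);
    node.querySelector('.name').textContent = `Hello, ${user.name}`;
    list.appendChild(node);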
Hyperapp and Web Components are a great combo, especially because they can help with the problem of maintaining local component state, which is not a feature of Hyperapp.
Where JSX would look like this:
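(To pick a trivial made-up example:)

    const Greeting = ({ name }) => <h1>Hello, {name}!</h1>;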
The lit-html would be:
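(Again, a made-up sketch:)

    const greeting = name => html`<h1>Hello, ${name}!</h1>`;

Nearly identical. lit-html uses `<template>`s and cloning, so that it doesn't have to do any expensive VDOM diffs.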