It was cool seeing a demo at the end made using Choo[1]. Choo is my favorite in the current sea of react clones. I like it because it's tiny (4k vs react's ~200k). Choo supports server-side rendering. And it's made out of smaller modules which are all independently useful (DOM generator, diff engine and router).
Instead of using JSX files, choo uses ES6 tagged template literals. As a result the code doesn't need a compilation step at all. But you can still actually compile the templates if you want for better performance + a smaller JS bundle in production.
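To make the "no compilation step" point concrete, here is a minimal sketch of the mechanics: a tagged template is just an ordinary function call that receives the literal string parts and the interpolated values at runtime. The `html` tag below is a toy that only concatenates strings; choo's real tag builds actual DOM nodes, so read this as illustration, not its implementation.

```js
// Toy `html` tag: interleave the static string parts with the dynamic values.
// Because the tag is an ordinary function, no build step (no JSX transform) is needed.
function html (strings, ...values) {
  return strings.reduce((out, str, i) =>
    out + str + (i < values.length ? String(values[i]) : ''), '');
}

const name = 'world';
console.log(html`<h1>oh hai ${name}</h1>`); // "<h1>oh hai world</h1>"
```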
Ugh, that is way harder to read than html`<h1>oh hai ${name}</h1>`. They both work without compilation, and they can both compile to the same code anyway. Maybe you can get used to hyper, but ES6 tagged template literals are much easier on the eyes.
For some people, but even then your editor can understand that it's a function, can tell you when you've mistyped it, can autocomplete it for you, and can give you hints on what arguments it takes (since it has TypeScript definitions).
And as a bonus for me, I also find the former easier to read than the latter, but that might just be getting used to it :)
Overall the slowdown is 3.3x, but in memory usage it places third and in startup time it's in the top 10, and both of these are only about 1.1x slower than the fastest. I wonder in how many cases the smaller, faster-to-load library would give a better UX.
It's not better UX for the end-user if the app is slower. It's not acceptable to make your life easier as a developer in trade for making a worse app. If the developer ergonomics lead to more features then it may be a good trade-off, but there are too many cases where people will trade performance to make their life easier.
0.1ms reaction time and 10ms reaction time would be instant for a human, so you definitely can make your life easier with reasonable performance trade-offs.
Under 16ms is one frame so anything less than that would be equivalent (modulo battery life), but that example is a factor of 100 slower. In reality this would look more like 0.1ms vs 0.4ms, or 1ms vs 4ms.
"slower" by what metric? It seems like you're ignoring the user's bandwidth, latency, and memory usage. We're talking about a library that is a factor of 100 smaller. That's 100x less code to transfer, parse, store, and optimize.
My claim is that for the user's benefit it may be better to choose a library that sacrifices some benchmark performance metrics like "Time to update the text of every 10th row for 10000 rows with 5 warmup iterations" in favor of startup time and memory. These are the exact same tradeoffs that Seth Thompson mentioned in the video; they can't just be dismissed wholesale.
My question is: how many real apps should make this tradeoff for the smaller library?
I would guess "many more than make it now." Developers are wooed by the columns with a lot of pretty green (myself included) and end up making their app worse because they're optimizing for the wrong thing.
This is where naming is key: specifically, web applications vs. web sites. Many confuse the two, but keeping them distinct is helpful with questions like this.
A web site is normally a public-facing collection of HTML and JS where the JS is primarily decorative and the site is primarily page-oriented. Navigation is done via physical pages using the normal browser mechanisms such as links.
A web application is normally a software service that more resembles a desktop application due to its "load once and let the JS take control" architecture. Navigation is normally not done via physical pages, but rather via hash navigation that moves between areas of the application. And you'll see a lot more application-oriented UI techniques like modal dialogs and more elaborate controls (treeviews, listviews, etc.).
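As a rough sketch of what "hash navigation" means here (a generic example, not tied to any particular framework): the page loads once, and route changes only swap views in place.

```js
// Generic hash-navigation sketch: the browser never does a full page load;
// the fragment (#/...) decides which view to render.
const routes = {
  '#/inbox':    () => { document.body.textContent = 'Inbox view'; },
  '#/settings': () => { document.body.textContent = 'Settings view'; }
};

function render () {
  (routes[location.hash] || routes['#/inbox'])();
}

window.addEventListener('hashchange', render);
render(); // initial view on first load
```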
There are, of course, applications that straddle the line between the two, but it's still helpful to make these distinctions because:
Web sites should optimize for load time first. Otherwise, you're going to have visitors bounce.
Web applications are different, and while they need to keep the load time down, they will most often already be in the browser cache. So the performance focus shifts more towards the actual run-time performance of the application, and that is where issues with raw DOM manipulation may become problematic without something like a virtual DOM or some sort of property caching.
Perhaps, but it does certainly help in determining what kind of performance one is seeking, because you often need to trade startup time for run time when it comes to DOM manipulation. You either have a large framework that takes longer to load, but keeps you away from direct DOM manipulation (for the most part), or you don't and suffer the performance effects of constantly reading/writing DOM property values that trigger layouts/repaints.
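To illustrate the read/write problem being described (a hypothetical page with some `.box` elements, not code from any framework): interleaving DOM reads and writes can force a layout per element, while batching the reads before the writes avoids that.

```js
const boxes = Array.from(document.querySelectorAll('.box'));

// Interleaved: each offsetWidth read that follows a style write can force a
// synchronous layout, so this loop may trigger a reflow per element.
boxes.forEach(box => {
  const width = box.offsetWidth;         // read
  box.style.width = (width / 2) + 'px';  // write (invalidates layout)
});

// Batched alternative: all reads first, then all writes, so the browser
// only has to recalculate layout once.
const widths = boxes.map(box => box.offsetWidth);
boxes.forEach((box, i) => {
  box.style.width = (widths[i] / 2) + 'px';
});
```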
The time from when you enter the URL to when you've landed in a usable state where you can use the site. For something you keep open all day, like email, general performance matters. But for things you use once or twice for 5 minutes, startup time matters a great deal more.
Demo showing how to debug node code using Chrome DevTools at 26:18[1]. Very cool if you don't already have a full blown IDE with debugging built in. Or even if you just want to inspect some random node library/code on the fly.
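For anyone who wants to try it without watching the demo, the basic flow looks something like this (assuming a node version that ships the `--inspect` flag, 6.3 or later; the file name is just an example):

```js
// debug-me.js
// Run with:   node --inspect debug-me.js
// then open chrome://inspect in Chrome (or the DevTools URL node prints),
// and execution will pause on the debugger statement below.
// For short-lived scripts, newer node versions also have --inspect-brk,
// which pauses before the first line so you have time to attach.
function add (a, b) {
  debugger;
  return a + b;
}

console.log(add(2, 3));
```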
I wish node also had a full-featured server-side debugger that worked in the terminal.
When I'm writing server-side code and popping `debugger;` into my unit test, I really don't want to have to open up a Chrome window and use my mouse to fiddle with the sizes of the panes to make debugging in Chrome work. It looks like this protocol could eliminate the problem of waiting for the connection to come through (it drops a lot of the time when you do it within unit tests), which is great.
If I wanted to get started adding this to node.js, does anyone have any suggestions for where I would start? Presumably by first getting a solid mental model of V8's internals, but has anyone else worked on debuggers that could give me some pointers?
Have you by chance found how to do that with babel-node?
It seems that after the built-in Chrome-protocol debugger was introduced, node-inspector was abandoned, and now I can't figure out how to use the built-in debugger with babel-node.
I couldn't help but notice that you are using my library Picnic CSS in the demo Github Repo Formatter (starting 32:50). Just wanted to say thank you to Seth Thompson and Google, it is an honor!
Octane benchmark retired because it benchmarks peak performance and doesn't account for other factors (e.g. startup time, memory usage). Speedometer2 coming to browserbench.org soon, benchmarks a much wider range of real-world frameworks.
It's great stuff. I'm a huge fan.
[1] https://github.com/yoshuawuyts/choo