Seems to use Flow instead of Typescript. I wish MS and FB just merged their projects already. Really unnecessary friction when their syntax is 90% the same AFAICT.
Yes. I wonder what the future of Flow is (even if it's still used and maintained). As I posted in another topic about TS:
> It's kind of crazy that Facebook has let TypeScript take so much market share.
> I've been using Flow for a long time now. The tech is good, and the integration with code editors is good too, but Flow has a lot of small-but-really-annoying bugs (2235 issues on GitHub right now), the type definitions for third-party libs are meh (you often have to update/fix the definitions yourself), the team seems extraordinarily understaffed for such an important project, they don't share the roadmap at all, the priorities are clearly internal-first (like improving performance instead of fixing bugs), and they don't seem to do much marketing or communication about it.
> I'd love to still recommend Flow, and as I said the tech is good and does the job, and they continue to release new versions, but I just don't see what benefit you would get by choosing Flow over TS at this point. Flow and TS do exactly the same job, with almost the same syntax (though with different ways to transform it into JS), but TS has a bigger community and MS seems to put more effort/resources into it. This has been true for 1 or 2 years now, so…
> I tried Flow and TypeScript for the first time a few months ago. Flow is absolutely abandoned and dying. TypeScript is taking over where Flow left off, and is far ahead of the game by now. I use TypeScript with create-react-app in my client work and it is invaluable and a wonderful experience, with no downsides as far as I can see.
To add to that, the only reason I can see to use Flow in 2018 instead of TypeScript is if you already have a large project that uses Flow and migrating to TypeScript might take a few days. But even then it's probably worth making the switch. I have noticed a much smoother experience and much better integration with TypeScript than with Flow. And there are many type errors that Flow never caught no matter how much I tried to configure it right. Existing code bases may have these too. Even more reason to switch sooner rather than later!
We are in the process of doing this on a moderately large product. We've found TypeScript itself has much less friction to write (quicker development), and has far fewer gotchas (probably because it restricts the language itself, unlike Flow), so porting the code has been a big task but a very positive one. We went with Flow originally because it would work with our untyped JS better, but in hindsight we should have bitten the bullet earlier and gone straight to TS.
We're still exporting Flow types generated from TS modules to be imported by older projects (which is... _okay_). I would recommend just choosing the better option from the outset. You'll save hours researching why Flow has a problem (and eventually just writing `// $FlowFixMe <https://github.com/some-flow-issue/666>` and moving on). I don't see any compelling reason to start a Flow project these days.
To me that's a feature and not a bug. A good codebase would have --noImplicitAny enabled, which would catch this. Relying on usage of a function to know its types is fragile, the function should be the one declaring the types it uses so that callers can adapt to changes, not the other way around.
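To illustrate the point about the function declaring its own contract, here's a minimal sketch (the `Rect`/`area` names are just made up for the example):

```typescript
// Under --noImplicitAny, a parameter with no annotation is a compile error
// instead of silently becoming `any`:
//
//   function area(rect) { ... }
//   //            ~~~~ error TS7006: Parameter 'rect' implicitly has an 'any' type.
//
// So the function is forced to declare the types it uses, and callers
// adapt to that contract rather than the other way around:
interface Rect {
  width: number;
  height: number;
}

function area(rect: Rect): number {
  return rect.width * rect.height;
}

console.log(area({ width: 3, height: 4 })); // 12
```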
To build upon your comment, I really like the way Rust does things (they do it for coherence and major-minor versioning changes not breaking inference) - you declare types in functions (you have to) & structs, and everywhere else is optional (unless the type checker can't infer the type).
--noImplicitAny isn't quite equivalent though. What type inference provides is the ability to define the type of something somewhere and propagate that contract throughout a complex system such as a DI container.
The playground examples are just an oversimplification to illustrate the core difference between TS and Flow, but the screenshot gives a better idea of how it actually plays out in practice. In our case, we tie the type to the DI token and a plugin receives something that is guaranteed to be of that type.
This means someone migrating doesn't need to explicitly add types to their code, but can still get better gradual type coverage than `any` or the break-the-world `--noImplicitAny` flag.
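A minimal sketch of what "tying the type to the token" can look like. This is a hypothetical toy container, not Fusion's actual API; the point is that the type is declared once, on the token, and propagates to every consumer with no further annotations:

```typescript
// A token carries its value's type as a phantom type parameter.
class Token<T> {
  constructor(public readonly name: string) {}
  // Phantom field so T participates in structural typing; never set at runtime.
  readonly _type!: T;
}

class Container {
  private values = new Map<Token<unknown>, unknown>();

  register<T>(token: Token<T>, value: T): void {
    this.values.set(token, value);
  }

  resolve<T>(token: Token<T>): T {
    return this.values.get(token) as T;
  }
}

interface Logger {
  log(msg: string): string;
}

// The type lives on the token; nothing downstream re-declares it.
const LoggerToken = new Token<Logger>("logger");

const container = new Container();
container.register(LoggerToken, { log: (msg) => `[app] ${msg}` });

// `logger` is inferred as Logger here, with no annotation at the call site.
const logger = container.resolve(LoggerToken);
console.log(logger.log("hello")); // "[app] hello"
```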
I can totally see an argument for that if you need gradual type coverage, which I guess is needed on large code bases. I have only worked on smallish ones of a few thousand lines of code, and it took maybe an hour to type everything, although most of it was really fun and exciting :) but in cases where that's not realistic, I could see using Flow to help transition.
Yes, with some caveats. If you're `export`ing something as a package, you need to expose its types so that the consumer knows what its signature looks like. Once that is done, calling a function that was defined in another package will get type inference, as seen in the screenshot I posted.
"For example, Flow uses structural typing for objects and functions, but nominal typing for classes." This statement also applies to TS.
Flow does have nominally typed Opaque Type Aliases[1], which are essentially newtypes from what I've gathered. However, you can build similar zero-cost newtypes in TypeScript using union types, casting and "unique symbol"[2].
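A sketch of the `unique symbol` approach mentioned above (the `UserId` example is made up for illustration): at runtime the value is still a plain string, but the phantom brand makes it a distinct type that an arbitrary string can't silently satisfy.

```typescript
// Declared only for its type; it has no runtime presence.
declare const userIdBrand: unique symbol;

// Zero-cost "newtype": a string intersected with a phantom brand.
type UserId = string & { readonly [userIdBrand]: true };

// The only sanctioned way to produce a UserId is through this constructor.
function toUserId(raw: string): UserId {
  return raw as UserId;
}

function fetchUser(id: UserId): string {
  return `user:${id}`;
}

const id = toUserId("abc123");
console.log(fetchUser(id)); // "user:abc123"
// fetchUser("abc123"); // compile error: string is not assignable to UserId
```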
But TypeScript is a language of its own. It claims to be a superset of JavaScript, but it includes features that aren't even certain to end up in JS (decorators). And their import system... Whereas Flow is proper JS. It's just so... nice!
That's not true. TypeScript spec is just the latest JavaScript spec. Decorators are a JavaScript (TC39 stage 2) proposal, and TypeScript only uses them if you use --experimentalDecorators. And TypeScript uses the ECMAScript module system. Whatever other features you think TypeScript has that aren't "proper JS" probably are either already JS, or are late-stage TC39 proposals.
Q: `import React from "react";` breaks in TypeScript
A: have you tried `import * as React from 'react';`
Me: Golly!
> Decorators are a JavaScript (TC39 stage 2) proposal, and TypeScript only uses them if you use --experimentalDecorators.
Oh, I didn’t know about the flag; I thought it was already a standard language feature, what with all the Angular apps relying on it. As for decorators being a TC39 proposal, I am aware of that, but they have remained a proposal for quite a long time now, haven’t they (about 4 years)? And I remember Babel renaming the decorator transformer to legacy-decorators-something, saying that the very semantics of decorators in JS were still debatable (some thought of them simply as functions accepting a function and returning another function; some in the more Angular-ish sense of injecting properties into methods). I hope I am not completely misrepresenting the rationale here.
Pretty sure `import * as React from 'react';` is the correct ECMAScript syntax, because React does not expose an ES-module-compatible `default` export. You're just used to the default-import form working anyway thanks to Babel accepting CommonJS modules, but TypeScript is the one that actually follows the standard here.
> A: have you tried `import * as React from 'react';`
> Me: Golly!
You're fundamentally misrepresenting that whole issue, and if you'd read just two or three comments down you'd see that there is a better fix, and part of the problem is a behaviour that's not even related to the ECMAScript standard.
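The thread doesn't spell out which fix is meant, but for context: TypeScript 2.7 added the `esModuleInterop` compiler option, which makes default-style imports of CommonJS modules (like React) type-check and behave the way Babel users expect. Whether that's the specific fix referred to here, I can't say; it's just the commonly cited one:

```json
{
  "compilerOptions": {
    "esModuleInterop": true
  }
}
```

With that in `tsconfig.json`, `import React from "react";` works as-is.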
And I think you're right about decorators. They are an iffy concept in the context of JS semantics, and I personally avoid using them and try to avoid libraries that use them. That's probably why TS has them disabled by default and calls them "experimental". I like the concept of decorators in and of themselves, but they don't fit cleanly into the JS world, and I am looking forward to seeing how the JS/TS world innovates alternative "cleaner" solutions that fit better into the existing model!
Similar, but harder to use safely and correctly in class methods using the new class syntax. There's more surface area for typos and other mismatch errors since you have to type the method names twice. See: https://mobx.js.org/best/decorators.html
I'll use anything that the author of this article (Leo Horie) releases or is attached to. He is one of the greatest minds in JavaScript of this generation. After Angular, Vue and React I stumbled upon his framework Mithril.js, and it was like stumbling across the Shroud of fucking Turin; that framework is true poetry.
I certainly don't speak for him, but I think it's a combination of things. First, Fusion is a group project and not his personal project. Second, React has a virtuous cycle where because there are so many jobs calling for React, more people learn it. People also seek out jobs that involve React because it will help their resumes. Demand keeps feeding supply and vice versa.
The Fusion plugin system is rather powerful and enables colocation of related/coupled code that would normally be spread across several places. For example, the Styletron plugin for Fusion (an integration for a CSS-in-JS implementation) will do several things:
* Wrap the application component tree in a React context provider component (which provides an instance that components will render styles into)
* On the server, extract rendered styles after SSR from provided instance and add necessary markup into the server-rendered page
* On the client, hydrate the provided instance from the server-rendered styles
* On the server and in development, set up a route handler that serves two assets, a web worker implementation and associated WebAssembly binary [1]
* On the client and in development, fetch and execute the web worker. Normally, this would be a somewhat difficult integration because of CSP-related issues with web workers, but because the plugin sets up its own route handlers, the requests will be same-origin, sidestepping most CSP issues that normally arise. Additionally, Fusion plugins can also modify response headers for requests, so if needed, CSP headers could also be set appropriately.
All the code to do this actually is related to a single concern, namely styling, but in a universal web app, such things typically require the involvement of many different parts of the application lifecycle and both server and client code. Fusion plugins allow you to slice up the independent parts of your application logic in this fashion, somewhat analogous to how colocating HTML/CSS/JS for individual components in CSS-in-JS is often much nicer than splitting apart component implementations across separate HTML/CSS/JS files.
[1]: This web worker generates debug CSS at runtime that maps rendered CSS to the source styled component definitions in JS using source maps, making it easier to reverse map the rendered CSS to the source CSS-in-JS when inspecting the DOM with the styles pane. https://github.com/rtsao/css-to-js-sourcemap
I've been working with the Fusion.js team for the last year now, and I think it's evolved into a really interesting and modern web framework. Internally we've started rolling this framework out to a few dozen web applications, and soon we will have hundreds of web apps running it. I think it's a great choice to use as a base for high-performance and complex web applications.
Correct about Fusion.js being based on Koa. We find that this fits really nicely within the plugin system. The diagram is complex, but we inject the render phase into the middleware stack. Everything before `await next()` is pre-render, and everything after `await next()` is post-render. It makes it very easy to reason about the lifecycle of a plugin, as everything is in one place.
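A hand-rolled sketch of that pattern (not Koa or Fusion itself, just the onion-style dispatch they use): each middleware's pre-render work sits before `await next()` and its post-render work after it, so both phases live in one function.

```typescript
type Next = () => Promise<void>;
type Middleware = (next: Next) => Promise<void>;

// Runs middlewares in order; the innermost `next` performs the render.
async function run(middlewares: Middleware[], render: () => void): Promise<void> {
  const dispatch = (i: number): Promise<void> => {
    if (i === middlewares.length) {
      render();
      return Promise.resolve();
    }
    return middlewares[i](() => dispatch(i + 1));
  };
  return dispatch(0);
}

const order: string[] = [];

const timing: Middleware = async (next) => {
  order.push("pre-render"); // everything before next() runs before the render
  await next();
  order.push("post-render"); // everything after next() runs after the render
};

run([timing], () => order.push("render")).then(() => {
  console.log(order.join(" -> ")); // "pre-render -> render -> post-render"
});
```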
Koa is an improvement over Express, but I think it would be neat if the JavaScript ecosystem adopted an even more functional middleware style, like Python's WSGI, Ruby's Rack, Clojure's Ring, etc.
Conceptually it's simple: middleware is a function that typically accepts a downstream "app", and returns a new "app" which accepts a "request" and returns a "response":
    const middleware = (app) =>
      async (request) => {
        // do stuff with request
        const response = await app(request);
        // do stuff with response
        return response;
      };
I worked on this idea way back in ~2009 (creatively called "JSGI" for the interface spec and "Jack" for an implementation) but promises weren't really a thing in JS, and async/await definitely wasn't a thing, so it was awkward.
More recently Michael Jackson had a project called Mach which was a similar idea, but it's no longer active either: https://github.com/mjackson/mach
As an aside, one of my favorite aspects of this style is having symmetric interfaces for HTTP servers and clients. You could do neat things like use a cache middleware for both a client and server, or write a simple HTTP proxy in a couple lines.
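A sketch of that symmetry (types and names are illustrative, not from any real framework): because both a "server" and a "client" are just async functions from request to response, the same cache middleware wraps either one unchanged.

```typescript
interface Request { url: string; }
interface Response { status: number; body: string; }

// An "app" is any async function from request to response:
// a request handler on the server, or a fetch wrapper on the client.
type App = (request: Request) => Promise<Response>;

const cache = (app: App): App => {
  const store = new Map<string, Response>();
  return async (request) => {
    const hit = store.get(request.url);
    if (hit) return hit; // cached: skip the downstream app entirely
    const response = await app(request);
    store.set(request.url, response);
    return response;
  };
};

// "Server": produces responses locally; `hits` counts origin calls.
let hits = 0;
const server: App = async (request) => {
  hits += 1;
  return { status: 200, body: `hello from ${request.url}` };
};

// The identical middleware would wrap a fetch-based client App the same way.
const cachedServer = cache(server);

cachedServer({ url: "/a" })
  .then(() => cachedServer({ url: "/a" })) // second call is a cache hit
  .then((res) => {
    console.log(res.body, "origin hits:", hits); // "hello from /a origin hits: 1"
  });
```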
Anyway, with the addition of promises, async/await, and async iterators to JS I'm starting to dust off these old ideas. prototype here https://github.com/tlrobinson/twosixonesix
It was fun, but hard to really make a convincing upgrade to Koa once you consider the rest of the ecosystem. For example, since Koa exposes Node's req/res, you can still use existing Node/Express middleware.
I thought your name sounded familiar! I played around with Mithril.js for a bit before settling on React.js and was really impressed with how you took the simplicity of JS and used it to your advantage with Mithril! Sad to say I couldn't replicate that cleverness in my own code, because I couldn't really wrap my head around the reasoning behind the code in your Mithril examples, but that's just more evidence that you're a smart guy. Thanks for making Mithril, which helped widen my view of what could be done with JavaScript frameworks! Also, is there any reason you chose Flow over TypeScript?
Funny you say that, as for me it was the exact opposite. I had tried multiple frameworks before discovering Mithril, and it was the first that clicked with me. I think what appealed to me the most was the lack of magic, its minimalism, and flexibility. Whereas with other frameworks, I felt I was always encouraged to `npm install <another-package-or-plugin>`, Mithril made me dig deeper and understand the nuances of JavaScript.
All of the people in that page are full time employees and part of the Web Platform team at Uber. The only minor technicality is that Swojit is interning with us.
The main difference is Fusion.js has more support for backend things. For example, we provide a GraphQL plugin, and plugins such as I18N are bundle-splitting-aware out of the box.
The plugin system is universal (meaning you can isolate concerns by what the library is responsible for, rather than whether the code is server code, browser code, React provider boilerplate, HTML hydration code, etc).
This plugin architecture has already proved to be very valuable on more than a few occasions. One example is a service worker implementation we've been working on. It needs a middleware, browser-side registration code, etc, but all of this complexity is encapsulated in a plugin that can be added to any app with one line of code.
> Does it only support react?
Many plugins have a `-react` version that allow them to auto-integrate with React, but the core itself is view library agnostic.
The idea behind this looks awesome - if I understand correctly, this is to encourage people to program more to interfaces provided by plugins so that implementations can be swapped more easily. This does not require a type system either to use, which lowers the barrier.
The one thing I think should change though is decoupling from requiring Node from the runtime. I think the broader JS ecosystem could benefit from some of the ideas this library seems to promote.
Fusion.js is a JavaScript framework meant to be run on Node. There's nothing stopping you from calling out to other backends (e.g. ASP.NET) if you stand up a Fusion.js frontend Node server.
It’s a library so either they need to be convinced to provide definitions or the community needs to be convinced to build them. But at the point that they are built it’s no different than anything else in your stack.
The dependency injection (DI) system is indeed inspired by Angular's. It has some major differences though:
- Fusion.js DI is token-based rather than string-based, so no naming collisions
- We support statically typing injectables (similar to Angular 2+)
- plugins are the only injectable entity type (whereas Angular is conceptually complex: e.g. modules, services, providers, factories, etc)
- plugins are isomorphic, whereas AngularJS injectables are not
I sorta already expected that people might get mixed feelings when seeing a DI system, but we spent a lot of time designing/tinkering with the plugin architecture to make it truly useful for managing complex library integrations and complex backends. I'd be happy to answer any questions about how we've been using it.
> lazy loading can happen in any component rather than only at "page" level
You can use dynamic imports anywhere in Next using `react-loadable`.
> more things are provided by the team via plugins
Next relies on its thorough `examples` dir for integrations. But it means a lot of manual coding.
I had a crack at a plugin system in the past (i.e. plugins for adding features like Apollo, Redux, etc., AOT-like stuff). I found that it obfuscated the code too much: it makes it hard to trace what is happening and to make adjustments. The closer the code is to the "Getting Started" examples of Redux, I18n, or Apollo, the easier it is to tweak and understand.
E.g. Sometimes it might be clearer to just wrap a Provider manually around the app root, than expose a plugin hook. Because you can see the React tree, whereas with plugins everything is dynamic so you must rely on logging.
I think devs are naturally drawn to DRYness and dividing code up by feature, which is what a plugin system offers, but there are big tradeoffs. I'm interested to dive into Fusion to learn more about their approach though.
> - better support for maintaining server-side complexity (e.g. by using Koa + DI system to make testing/mocking easier)
Great to see you adopting Koa! I use it myself, and feared that the community had stagnated, but now with Fusion, it may be reinvigorated.
> DI system to make testing/mocking easier
Very cautious of DI systems, for the same reasons as plugin systems: it's hard to see what is going on. And sometimes manually wiring up all the dependencies is actually not much code at all, and you can manage cyclic dependencies and ordering more easily.
> You can use dynamic imports anywhere in Next using `react-loadable`
I joined the team after the code splitting part was done, so I don't know off the top of my head whether this library was considered, but it does look interesting. From a glance, it seems to suffer from the issue of sprinkled configuration, but it might be something that we could potentially use to replace our current implementation. Does it support HOC-level async data fetching?
> Very cautious of DI systems. Same reasons as plugin systems: it's hard to see what is going on
Interestingly we had a completely opposite experience maintaining our old closed-source framework. Everything was built on top of express and it was really hard to reason about where code for any given thing was. For example, trying to debug some I18N thing meant digging through at least 3 files filled with unrelated concerns in completely unrelated packages.
FWIW, we spent a lot of time designing and redesigning the plugin system (like months). What is there today looks nothing like our first approach. I think we arrived at a pretty good solution, which is centered around colocating related concerns into one place.
> sometimes manually wiring up all the dependencies is actually not much code at all
Yes, wiring things up manually is relatively easy. The challenge we kept running into was taking things out (e.g. no-op-ing production-hitting tracing/metrics code in tests)
Having used Fusion a bit now (I'm an engineer at Uber, but not on the Web Platform team that produced Fusion), I'm really enjoying the DI system. Agreed other DI mechanisms (e.g.: Angular) are too magical; Fusion uses a token system which is very easy to reason about.
Having used other DI systems, I think the parts that most confused me were that the injector would usually be hidden away (meaning you couldn't see _what_ was being injected), and also the conflation with various initialization patterns (factories, providers, singletons).
In Fusion.js, the injectable registration is done in the entry file, and initialization patterns are the concern of the service API. I think these design choices simplify things a lot.