I really like this architecture - it's clearly based on lessons learned from the same types of pains that I myself encountered with non-trivial jQuery, and the unidirectional data flow makes it a lot easier to reason about the code. It's very similar to what I'm doing with my own micro-MVC framework Mithril ( http://lhorie.github.io/mithril ).
One thing that raised my eyebrows about this article, though, is how it says that Flux eschews MVC, and then goes on to say that it has stores that are "somewhat similar to a model", and that "the dispatcher exposes a method that allows a view to trigger a dispatch to the stores", which, as far as classic MVC goes, is the exact definition of a controller. What they call a controller-view is, imho, just an implementation detail within the view layer: in classic MVC, views were responsible for subscribing to model observables, or, as the article puts it, a view "listens for events that are broadcast by the stores that it depends on".
When talking about this architecture with the Mithril community, I find that referring back to the MVC pattern makes it even easier to reason about the flow of data: it's basically M -> V -> C -> M ...
It's unfortunate that the general understanding of the MVC pattern became muddled over the course of many frameworks. The whole point of design patterns should be that we can just name them for people to "get" it, rather than having to reinvent new ways of naming things and having everyone relearn the nomenclature.
Thanks for Mithril - we're experimenting with it vs react.js for production use. There's a lot of very smart design decisions there expressed in very terse code.
One thing that threw me off was startComputation() / endComputation(). It seems you have to be explicit about when properties are being updated for views to update. I worry this might be error-prone vs react.js - if you forget an endComputation(), or an exception occurs outside of a try/finally, your views will freeze forever, no?
Generally, you only need to manually call start/endComputation when integrating with 3rd party code that needs to update Mithril-managed bindings.
In the latest release, I tweaked the rendering aggressiveness to allow redraws after exceptions in event handlers, and I updated the docs wrt `try/finally` blocks in integration scenarios.
One other thing people could do that I don't think should go in Mithril core is to call m.redraw inside window.onerror.
Of course, I'm open to other suggestions, if you have any.
At least in my application, views are always bound to either an `m.prop` or an immutable value. What if you tracked when an `m.prop` was updated and queued up a redraw automatically? Redraws would be rate-limited via requestAnimationFrames/setTimeouts as usual.
I imagine this could have a significant performance impact, but don't know the mithril internals well enough to say.
EDIT: Thinking about it, the virtual DOM must be re-constructed on every redraw since the props aren't tied to a view. This probably wouldn't work well then.
The topic of side-effect incurring getter-setters was also brought up a few times in various forms. I wrote about my thoughts on it here ( https://github.com/lhorie/mithril.js/issues/78 )
As far as redrawing impact goes, Mithril does rate-limiting, so even attempting to subvert m.prop to force it to spam redraws should still perform ok.
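To sketch what I mean by rate-limiting (this is just an illustration of the coalescing idea, not Mithril's actual internals): many synchronous redraw requests collapse into a single render per flush. In a real implementation `flush` would be driven by requestAnimationFrame or setTimeout; here it's called by hand so the behavior is easy to see.

```javascript
// Illustrative redraw coalescing: repeated redraw() calls within one
// frame produce only a single render when the scheduler flushes.
var renders = 0;
var pending = false;

function render() { renders++; }          // the expensive part
function redraw() { pending = true; }     // cheap: just mark dirty
function flush() {                        // stands in for rAF/setTimeout firing
  if (pending) { pending = false; render(); }
}

redraw();
redraw();
redraw();
flush(); // three requests, one actual render
```

This is why spamming redraws through m.prop shouldn't hurt much: the cost is one render per frame, no matter how many times the setter fires.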
To address your second comment: One distinction between the dispatcher and controllers in the MVC framework is that the dispatcher generally doesn't contain any business logic. It's essentially a hub for passing messages through: all sources of changes get funneled through the dispatcher, and the stores listen for those events, but the dispatcher doesn't actually modify or initiate the events. In my talk, I described it as the traffic controller of the system, which might be a more appropriate description.
As things get more complicated, the dispatcher may do more things at the framework level (Bill refers to the dispatcher handling dependencies between stores), but it stays separate from the application logic. In other words, we use the same dispatcher for multiple apps, so it plays a different role from the controller.
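A minimal sketch of that hub role might look like this (names are illustrative, not the real Flux Dispatcher API): the dispatcher only forwards payloads to registered store callbacks, and never inspects or mutates them.

```javascript
// A dispatcher is just a message hub: it forwards every action payload
// to every registered store callback, and contains no business logic.
function Dispatcher() {
  var callbacks = [];
  this.register = function (callback) {
    callbacks.push(callback);
    return callbacks.length - 1; // a token, usable for ordering schemes
  };
  this.dispatch = function (payload) {
    callbacks.forEach(function (callback) { callback(payload); });
  };
}

// Stores subscribe; the dispatcher stays generic across apps.
var dispatcher = new Dispatcher();
var log = [];
dispatcher.register(function (payload) { log.push('storeA:' + payload.type); });
dispatcher.register(function (payload) { log.push('storeB:' + payload.type); });
dispatcher.dispatch({ type: 'ARTICLE_UPDATE' });
```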
Hope that clarifies a bit, I'll see if we can make this distinction clearer in the documentation :)
Agreed. At some point, in a well designed app, you've got a data store, an I/O layer, and something that coordinates amongst the components. There are a million ways to decide where the "business logic" should live, whether application state and external data should be treated differently, or exactly how all the pieces are glued together. At the end of the day though, it's all pretty much the same paradigm.
It sounds to me like Flux goes with a model layer that handles all state (internal to the app and not), pushes most binding between pieces of the UI to the view layer, and uses a thin controller layer to mediate the other two layers and coordinate parts of the model layer with overlapping concerns.
I once had the epiphany that an entire web app is pretty much a whole bunch of nested or concatenated MVCs. I think the confusion arrives when people assume that there is one-and-only-one MVC and try to shoehorn all the parts of a new workflow into that rigid structure. I don't think that the proliferation of acronyms purporting to be something different helps with understanding.
To add to the acronym soup, what you described (nested MVC) is known as HMVC, and I agree that it's what most frameworks end up as, regardless of what they end up calling it. It works very well in my opinion, too, to the point where I'm currently working on a framework like that myself (as a learning exercise)
From first-hand experience, I can say that React+Flux has scaled well to 8+ developers over 800+ JS files and ~60k lines of code in a large single page app here at Facebook. I'm happy to answer any questions! Some things that we've struggled with:
1. All data should be kept in stores. You may have some local component state, but it shouldn't be anything you want to persist if the component is unmounted. We have tried using state several times, and always go back to keeping it in singleton stores. It also works better with the action-dispatcher pattern.
2. Now all your data is in stores, but how do you get it into the specific component that needs it? We started with large top level components which pull all the data needed for their children, and pass it down through props. This leads to a lot of cruft and irrelevant code in the intermediate components. What we settled on, for the most part, is components declaring and fetching the data they need themselves, except for some small, more generic components. Since most of our data is fetched asynchronously and cached, we've created mixins that make it easy to declare which data your component needs, and hook the fetching and listening for updates into the lifecycle methods (componentWillMount, etc).
3. Actions don't have callbacks, as they are by design fire-and-forget. So if you need to be notified when some item has finished being created, for example, you need to listen for the follow-up action that the CREATE action fires (yeah, actions firing actions, a bit ugly). Even then, how do you know that the CREATE_COMPLETED action correlates to the CREATE that you fired, and not another? Well, actions also come with a payload, so what we ended up doing was passing a context object into the payload and plumbing it all the way down into the CREATE_COMPLETED and CREATE_FAILED actions. Being really strict about actions is a major reason why Flux has scaled well for us.
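To make point 3 concrete, here's a hypothetical sketch of the context plumbing (ItemActions, requestId etc. are made-up names, and the DAO call is faked as synchronous): the same context object rides along from CREATE into CREATE_COMPLETED, so a listener can correlate the two.

```javascript
// Thread a context object through the action lifecycle so that
// CREATE_COMPLETED can be correlated with the CREATE that caused it.
var listeners = [];
function dispatch(action) { listeners.forEach(function (l) { l(action); }); }

var ItemActions = {
  create: function (fields, context) {
    dispatch({ type: 'CREATE', fields: fields, context: context });
    // a stand-in for the async DAO call; on success, fire the follow-up
    // action carrying the same context object
    dispatch({ type: 'CREATE_COMPLETED', id: 42, context: context });
  }
};

var completedContexts = [];
listeners.push(function (action) {
  if (action.type === 'CREATE_COMPLETED') completedContexts.push(action.context);
});

ItemActions.create({ title: 'Hello' }, { requestId: 'req-1' });
```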
Could you explain how one might modify a Backbone+React application to follow the Flux model? One still needs models, no? Does one simply "wrap them in a store", which passes information to the UI, rather than having them come directly from the models? Or do you get rid of backbone entirely, and... then where do you store/manage/sync your data? What's the correct model for communicating with the server?
Good question. I haven't used backbone at all, and only know what I've read about it. Flux is a complete replacement for backbone as far as I can tell.
As you say, the models live in stores, so building on my other comment, you would have an ArticleStore that is responsible for providing access to and caching all of the Article objects. As a rule, if you want to mutate data, you do so by calling an action (ArticleActions.update, for example). See my other comment for how the update flow works: https://news.ycombinator.com/item?id=7721381
If you want to fetch data, you go to the store (ArticleStore.getByID, or ArticleStore.query). The ArticleStore will then call into the ArticleDAO (data access abstraction) to fetch data asynchronously, and when it returns the ArticleStore incorporates the data into its cache and "informs", which is basically a pub-sub push (the views/components subscribe to the stores they want to get data from).
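A rough sketch of that fetch-and-inform flow (ArticleStore/ArticleDAO are the names from above, but the implementation here is invented, and the DAO "async" call is faked as synchronous for brevity): a cache miss triggers a DAO fetch, the result lands in the cache, and subscribers are informed so they can re-query.

```javascript
// Store caches asynchronously fetched data and "informs" subscribers.
var ArticleDAO = {
  // stands in for a real async fetch; calls back immediately here
  fetch: function (id, onSuccess) { onSuccess({ id: id, title: 'A title' }); }
};

var ArticleStore = {
  cache: {},
  subscribers: [],
  subscribe: function (fn) { this.subscribers.push(fn); },
  inform: function () { this.subscribers.forEach(function (fn) { fn(); }); },
  getByID: function (id) {
    if (this.cache[id]) return this.cache[id]; // cache hit
    var self = this;
    ArticleDAO.fetch(id, function (article) {  // cache miss: fetch, then inform
      self.cache[id] = article;
      self.inform();
    });
    return null; // nothing yet; subscribers re-query after inform
  }
};

var informed = 0;
ArticleStore.subscribe(function () { informed++; });
ArticleStore.getByID(7);
```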
Is there a reason why data fetches (e.g. ArticleStore.getByID) don't just return a promise for the return data? I'm guessing the current implementation doesn't return anything, it just emits an event for informing data loaded.
The one plus of the event approach I can think of is that if one component causes new data to load on the client (an article is updated), none of the components that rely on that article will show stale data - that is to say, it's extraordinarily difficult for components looking at the same data to ever be out of sync.
> I haven't used backbone at all, and only know what I've read about it
I find it mind-blowing: do you only use tools made inside Facebook? Do you develop these frameworks or just use them? If you don't know Backbone, which frameworks of this kind did you learn? I'm curious.
I am another developer who has never used Backbone (until a couple of months ago, at least). One could assume that's because I develop trivial apps, but in reality the opposite is true.
Tools such as jQuery and Backbone seem ubiquitous on the web because they are a perfect fit for addressing common problems in the webpage/ajax/dynamic content area. However, there is a sizeable group of developers who work with rich applications, intranet portals, and line-of-business apps that require significantly more structure and skeleton than Backbone/Marionette provide. The importance of using an overarching "framework" rather than a "library" increases in proportion to team size and code surface area.
In my case, we started with YUI and then migrated to ExtJS. This was before Backbone existed, although it would not have made a difference. In recent years we have evaluated Angular and Ember but did not find compelling reasons to migrate (for a greenfield project the choice may be different, but migrating significant apps carries a significant cost).
Both YUI and ExtJS provided everything we needed under one roof, and there was no use for jQuery or Backbone&co. The downside of a mega-framework like ExtJS is the overhead: it is ill suited for a simple app. A couple of months ago I started using Backbone & co for small isolated mini-apps, but I cannot wait to find a suitable replacement because it feels clumsy and backwards.
To be clear, I am not saying it is impossible to build complex apps primarily driven by Backbone - I know people who have done that (often to their own peril). I am saying that it is entirely possible to be a Facebook-level engineer working on complex applications and have zero experience with Backbone, because it is great at solving problems you do not have.
I'm developing a SaaS application that is essentially a LoB app in wolf's clothing, and I went with Knockout for much the same reason. I'm not a strong JavaScript programmer, but I rapidly realised that if I stuck with just jQuery I was going to be in maintenance hell.
I've really enjoyed working with Knockout (so much so that I've even written "components" with it, e.g. an ajax file handler a la Gmail's but with previews etc.), and I've found it to have just enough structure for the stuff I need on the front side.
It surprises me that it's not more popular than it is but maybe there are reasons for that I simply don't know or understand.
I didn't develop React, I just use it at Facebook. I started Javascript development at Microsoft where we were using mostly our own homegrown stack on www.so.cl. When I moved to Facebook, the entire project I'm working on is in React. Since I haven't built many single page apps in my spare time, I have had very little exposure to Backbone. Rest assured the React devs certainly are familiar with most of the Javascript frameworks out there ;)
I've used Angular extensively, React and Spine.js, but I never used Backbone either. It's got nothing to do with where someone works, as far as I'm aware; I certainly don't work at Facebook, anyway.
Now you just write handlers to modify the collection and sync. setState takes care of triggering a render when your collection changes. As a bonus, you might want to render something different while in "loading" state.
If you don't want the component to own the collection (if you want to share a collection between multiple components), just pass it as a prop from a parent component; otherwise you can instantiate a new collection on getDefaultProps.
I believe the latest version of React does autobinding on component methods, so you shouldn't need .bind(this). There should be a warning letting you know this (unless it is different for the public version :/).
Thanks for the encouragement! I'm using React to build a new project at work and so far I've been very satisfied with how much it gets out of the way.
Perhaps promises would be a good solution to your action spaghetti? They can deliver progress, completed, and error messages targeted to the place that cares about them, rather than passing context through the entire rest of the system.
We had talked about using promises at a lower level, but for actions it is very much desirable for them to be fire-and-forget. The view (component) fires the action in response to some user interaction (e.g. clicking a button). The action is dispatched, and the stores listen for that action, update themselves, and then "inform", which notifies the component that its data has changed. In the case of data mutations, our action modules end up having a fair amount of logic in them:
1) User clicks a button to favorite an object, lets say an Article
2) View (component) listens for the click handler and calls ArticleActions.favorite(articleID)
3) ArticleActions.favorite fires a preliminary ARTICLE_UPDATE action, for any stores that want to update optimistically
4) ArticleActions.favorite calls into the ArticleDAO (the data access abstraction) via ArticleDAO.update, a function that takes an ID, some updates, and two callbacks, success and error.
5) In the success callback, ArticleActions.favorite fires off ARTICLE_UPDATE_COMPLETED, along with the updated article object
6) In the error callback, ArticleActions.favorite fires off ARTICLE_UPDATE_FAILED, along with the error
7) Stores listen for the COMPLETED and FAILED actions to know when to update their data, including stores that show error notices
As you can see, the action layer itself should not really be accepting callbacks/returning promises. It might be easier, however, if the DAO layer returned promises, and this is something we have talked about migrating to.
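Roughly, steps 3-7 look like this in code (a sketch using the names above, with the DAO stubbed out as an immediately succeeding call): the optimistic ARTICLE_UPDATE fires first, and the COMPLETED/FAILED follow-ups fire from the DAO callbacks.

```javascript
// Optimistic update flow: preliminary action, then a follow-up action
// from the success or error callback of the data-access call.
var listeners = [];
function dispatch(action) { listeners.forEach(function (l) { l(action); }); }

var ArticleDAO = {
  // stand-in for an async update that succeeds immediately
  update: function (id, updates, success, error) {
    success({ id: id, isFavorite: true });
  }
};

var ArticleActions = {
  favorite: function (articleID) {
    // preliminary action, for stores that want to update optimistically
    dispatch({ type: 'ARTICLE_UPDATE', id: articleID, updates: { isFavorite: true } });
    ArticleDAO.update(articleID, { isFavorite: true },
      function (article) { dispatch({ type: 'ARTICLE_UPDATE_COMPLETED', article: article }); },
      function (err) { dispatch({ type: 'ARTICLE_UPDATE_FAILED', error: err }); });
  }
};

var seen = [];
listeners.push(function (action) { seen.push(action.type); });
ArticleActions.favorite(7);
```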
From what I understand from your description, it seems that data access, server requests, etc. fall under the responsibility of Actions? Why not offload it to the Store?
e.g: ArticleActions.favorite fires off an ARTICLE_UPDATE action, the ArticleStore receives this and does the appropriate ArticleDAO asynchronous call and when done emits a change event to update any Views.
We split out data mutations from data access (see https://news.ycombinator.com/item?id=7721542). One reason is that multiple stores may be interested in the COMPLETE calls. One example is when you have a store that tracks which items in a list are selected; if one of the item is deleted, this separate store, say ArticleSelectionStore, needs to handle the ARTICLE_DELETE_COMPLETED event to unselect that article.
I.e: they update themselves (eagerly, hence the name) based on changes from the data-access layer. They're able to choose themselves which transformations to do on the data in order for views/components to query them efficiently.
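The multi-store point is easy to sketch (names like ArticleSelectionStore are from the comment above; the store internals here are invented): two independent stores react to the same COMPLETED action, one dropping the article and the other unselecting it.

```javascript
// Two stores listening for the same completion action.
var listeners = [];
function dispatch(action) { listeners.forEach(function (l) { l(action); }); }

// ArticleStore's data: drops deleted articles
var articles = { 1: 'first', 2: 'second' };
listeners.push(function (action) {
  if (action.type === 'ARTICLE_DELETE_COMPLETED') delete articles[action.id];
});

// ArticleSelectionStore's data: unselects deleted articles
var selected = { 2: true };
listeners.push(function (action) {
  if (action.type === 'ARTICLE_DELETE_COMPLETED') delete selected[action.id];
});

dispatch({ type: 'ARTICLE_DELETE_COMPLETED', id: 2 });
```

Neither store knows about the other; both just care about the action.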
It's unclear to me why it's important that actions be fire-and-forget.
Is there no scenario in which you would want to show (next to the Favorite button) that favoriting failed but not care about other article-update failures?
Sure, that is absolutely a valid scenario. This could be accomplished using a context object as I explained in my first comment, something like {changedFields: ['isFavorite'], error: 'some error message'}. You'd then have some store that listens for article update failures and saves the error message somewhere, and then your favorite button view would pull the data from there.
The importance of keeping actions fire-and-forget is that data must live in the store; if actions have callbacks, the views would be using state to keep themselves updated with data that should be in the store.
Om[1], a ClojureScript wrapper over React, solved problem 2 with Cursors[2]. Each component gets a reference into the data store, and can update its own data using the cursor almost transparently. In this case, it isn't that components declare what data they need, but their parents provide them with a cursor when instantiating them.
Don't cursors still suffer from the problem of intermediate components needing to pass on state they don't use? For example, a list of articles might have an author for each article. Then you have the following components: App > ArticleList > ArticleItem. So App will have to have a list of authors and a list of articles, and pass that down to ArticleList, which, for each Article, will instantiate an ArticleItem and also find the correct author to pass to it. Instead, what if ArticleItem just got an Article, and then asked the author store to give it the Author? ArticleList shouldn't really care about the authors.
The example is a bit contrived, but hopefully you see the problem? Especially consider if there are a couple more layers between App and whatever leaf component is rendering something.
In Om, components can still look at the global state without using a cursor if they want. You could have a top level "authors" key in the data store that the ArticleItems would have hardcoded into them. They could look up the author themselves as you describe by looking up the "authors" key in the global data state.
Alternately, you can pass multiple cursors[1] to a component. In your example, you could pass each ArticleItem a cursor pointing to the right article, and a more general cursor pointing to the author store. This would eliminate the hard coding of "authors", but it would require ArticleList to pass along some sort of "additional-data" cursor.
I use cursors in JavaScript with a flux architecture, and yes, prop passing is a problem for us. I've rationalized it as "there are more wires, but the wires are straight and bundled, not a rat's nest of spaghetti references".
How about passing a createArticleTag() function to ArticleList, so that access to the AuthorStore gets passed down via the closure rather than explicitly?
I'm currently using React with ClojureScript/Om and I really like working with it. However I'm never really sure how to model non-persistent ui changes. I.e. a user clicks a button that sets the display property of the target note to "block" so that more content is visible. I wouldn't want to keep this in the main store. So I'm pondering whether to store this in component local state and have the change appear with a React rerender, or whether it is fine to simply access the DOM and set a different style property? (i.e. getElementById...)
No! Never access the DOM directly if you can help it. I forgot to mention, display things like you said are one of the only places we really use local state. A dropdown for instance, might have a state called "isOpen". In your click handler, you'd call this.setState({isOpen: true}), and in render, you'd only render the dropdown if isOpen is true.
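The pattern, stripped of React itself (this is a plain-JS sketch, not real React - in React, setState would trigger the re-render for you): transient UI state lives on the component, and render derives its output from that state instead of touching the DOM.

```javascript
// Local component state driving conditional rendering, React-style.
var dropdown = {
  state: { isOpen: false },
  setState: function (partial) {
    // React would also schedule a re-render here; we call render by hand
    for (var key in partial) this.state[key] = partial[key];
  },
  render: function () {
    // only render the menu markup when open
    return this.state.isOpen ? '<ul class="menu">...</ul>' : '';
  }
};

var closed = dropdown.render();
dropdown.setState({ isOpen: true }); // i.e. what the click handler does
var open = dropdown.render();
```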
> What we settled on, for the most part, is components declaring and fetching the data they need themselves, except for some small, more generic components.
How does this interact with shouldComponentUpdate? Generally it seems that when moving data out of props/state, it's harder to take advantage of the performance hooks that React gives you because you don't have the old and new data to compare when rerendering.
I meant to mention this; the helper mixin I was referring to is called StateFromStore, and it fetches data from stores and shoves it into state via setState.
Answering my own question; you have actions call DAO objects directly, which make any async calls, which the action can then callback into and fire off another action.
Adding another question; do you try to prevent stores from knowing about each other at all?
do you just not use react state at all then? I have a similar sized app as you, we started off naively using state at various levels, over time we refactored the state higher and higher, and now we are at the point where all the state is kept only at the root of the view hierarchy (we use cursors). We're about one step away from lifting even that state out of react and into a store layer that has no react dependencies.
I'd love to hear some more about the design of your data store: are these cursors basically paths to data? Are they similar to Om's concept of cursors? https://github.com/swannodette/om/wiki/Cursors
We do still use React state, especially for view state like whether a piece of text is expanded or whether the viewer has toggled the grid or list view on a table. But as you said, most of our data is pulled into a store layer that doesn't have React dependencies.
The functional flux/react architectural style is truly excellent. Over the last few months, Andrey Popp implemented a declarative form engine using React and it's much simpler to reason about than our older JQuery equivalent. Having undo/redo emerge as an almost-free feature from this architecture is super useful.
I could figure out jQuery back in the day pretty easily.
Then we started using Angular, and though I tried and tried to understand all its concepts, my brain just couldn't keep track of the list: DOM transclusion, scopes, controllers, directives, services, dependency injection and so on. More experienced JS developers loved it and had no problem picking it up.
But after watching a few videos about React.js and working through tutorials, I really started to understand it better. I really like the concepts and how this library is put together. So far this is looking really good.
I like the approach to managing information flow that is outlined here, but it's over-stating the case to say that lack of such control is a failing of the Model-View-Controller architecture. There are patterns you're supposed to use with MVC such as the 'V' pattern, where information flows from the views via the controllers to the models, where the models update themselves, and then update the controllers which then update the views. Visually it looks like:
View receives event-------\    /----views render new state
Controllers handle event---\  /--controllers mediate new state
Models react to new data----\/--models update controllers
The Sproutcore javascript MVC framework espouses this, for example, and I'm sure many other MVC frameworks do too.
I'd be really interested to see how Flux would augment statecharts...
I should have added - these are the only actions you can take at each level, so no arbitrary updating of views when you're in the left-hand side of the controller stage, for example. That stops the arbitrary circular information flows cited in the flux introduction.
EDIT: So far, the video is much more helpful to me in terms of bringing the concepts of Flux to life. Jing does a terrific job explaining in my opinion.
Seems like the dispatcher is a more manual version of the Ember/Sproutcore run-loop. The advantage I see to the run-loop is that it batches all UI updates so they happen only once, where the dispatcher simply batches up model changes to reduce the number of redraws. Maybe I didn't fully understand what the dispatcher does though...
In this architecture the batching occurs at the seam between flux and react (setState) which can be flushed with an arbitrary strategy (ie the default react run loop, or famous, or your own etc)
I intuitively designed a very similar architecture to Flux & React, last year. It's a lightweight PHP+JS framework. (used in this pet project: http://www.youtube.com/watch?v=-Lta5xSj4mY )
Really good video. Made me clearly understand that React is Javascript as if it were to natively support the spreadsheet / reactive programming model. Update a cell, dependent cells / signals get updated automagically. The huge upside is that cells / signals are plain JS values that can be composed using plain JS functions, and all the JS tooling just works.
Technically, this is done by simply using a single dirty bit, which triggers the recomputation of the rendered scene. That is fast enough in practice, and it even supports "unchanged" hints to reduce the cost of expensive DOM updates.
This is eerily similar to 3D scene rendering. The unchanged hints are the equivalent of viewport clipping, though they require more work from the coder to set up right.
I've been doing a fair amount of Angular development at work and recently been getting into React on the side (and now introducing it for a new project at work!). I'd say they have the same general goal - build your app with declarative and expressive markup. However, Angular tries to shoehorn custom components into the regular DOM, which gets problematic in many cases.
For instance: the directives `ng-show` and `ng-hide` simply apply CSS styles. As far as I know it's not possible to completely remove an Angular component from the DOM and then replace it later (without getting too deep into imperative JS). However, that's trivial to do in React.
Angular chokes on large data sets, probably because it's doing everything directly on the DOM. React handles huge numbers of records effortlessly. Getting comparable performance with Angular requires much more imperative voodoo than I'd like out of an ostensibly declarative system.
I also find it harder to reason about Angular's data flow, since there are many ways to pass things around: nested scope, isolate scope, sibling scope, same scope, dependency injection to name a few. Finding where a particular handler or value comes from is sometimes quite the hunt. With React, it's all pretty much self-contained.
Ah yes, that's exactly what `ng-if` is for. Thanks for pointing that out. The only downside is that it creates another nested scope, which can be surprising.
Watching the video I heard the speaker say unit testing is rather easy, since state is always sitting next to code in your components. This is great and sounds logical: test different input states and check against expected / consistent output.
What I'm wondering, given the unidirectional flow and the design of flux as a whole: would Integration Testing be needed at all anymore?
I mean, there's no state outside of the component that could possibly influence the component's consistency. Therefore, all needed testing could be done by simply unit-testing all components in isolation.
It seems you could just pass the stores into the components as data - that's really what they represent. I'm not clear why this approach is eschewing that basic principle in favor of singletons.
The stores contain the data, but they also include the logic for updating that data. For example, a store of all the comments would also subscribe to events for new comments and add them into the comment thread at the correct location. We've found it more maintainable to keep the application logic for a set of data and the data itself in one place rather than having a model that external pieces can modify.
Can this architecture implement client-side offline updates while the network is down? (User clicks a "Like", and it's marked as Liked even if the network is down...)
Yup, we've used it that way, and it's easy to implement due to the fact that all changes are expressed as a payload object, which allows you to serialize actions. Optimistic actions (showing the Like when the server hasn't confirmed yet) are built the same way - the stores just need to resolve between the optimistic/offline update and the eventual server response.
It's just a store. You could write your implementation to store the data anywhere - localStorage, a REST service; the one in the example just keeps it in memory as an array.