Personally I'm a huge fan of Observables (or Singles, which are single-item Observables).
One of the most confusing things with Promises vs Observables or any sort of deferred value is whether it is "hot" or "cold".
Most deferred objects, like Futures in Java and Promises in other languages, are hot, which basically means something is already running in the background.
There are many reasons why I prefer the cold model, but it mainly boils down to being more explicit, composing better, handling backpressure, and being more predictable overall.
I wish articles talked about hot vs. cold a little more.
Indeed, this has been solved in Swift with ReactiveCocoa 3. SignalProducer<Value, Error> is a cold signal that must be started/observed to take effect. Signal<Value, Error> is hot. We have the lift operator that can transform any signal operator into a signal producer operator so you don't have to duplicate everything.
If anyone is wondering:
Cold signals don't do work until you ask for it; think starting a new network request. They don't send any values until someone starts them.
Hot signals send values at their own leisure and often represent things out of your control, like button taps or receiving a push notification. They send values whether anyone is listening or not.
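A rough sketch of the difference in Rx-flavored JS (RxJS 4-style API; the `/api/user` endpoint is made up):

const Rx = require('rx');

// Cold: nothing happens until subscribe; each subscriber triggers its own request
const user$ = Rx.Observable.create(observer => {
  fetch('/api/user')                        // work starts here, on subscribe
    .then(res => res.json())
    .then(user => { observer.onNext(user); observer.onCompleted(); })
    .catch(err => observer.onError(err));
});

// Hot: clicks fire whether or not anyone has subscribed
const click$ = Rx.Observable.fromEvent(document, 'click');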
Observables are actually terrible at backpressure as they're one of the most eager models around - once you have subscribed to an observable _it_ decides when to push and how much and you can't handle backpressure very well.
Quoting Erik Meijer who invented Rx Observables:
> My opinion about back pressure is that you either have a truly reactive system, where the producer is fully in charge and the onus is on the consumer to keep up; or a truly interactive [system, where] the consumer is fully in charge and the onus is on the producer to keep up. This is one of the cases where having different types of collections makes sense (just like we have different collections like lists, maps, sets, ... with different performance characteristics and different operations)
You've got certain misunderstandings there. Observables aren't eager; what you want to say is that they are push-based.
But that doesn't mean they are terrible at back-pressure; in fact they can be better at back-pressure than any pull-based model you can think of, because the communication is more efficient. Back-pressure is the focus of the http://reactive-streams.org protocol, and I've also built a library inspired by Rx that has back-pressured observables, started before RxJava went this route as well: https://monix.io
Funnily enough, back-pressure doesn't just help with "slow consumers". In Rx.NET flatMap is an alias for mergeMap because it is safer than concatMap, as concatMap would require buffering without back-pressure. Introduce back-pressure in the protocol, and concatMap becomes safer and much more efficient if done wisely. Which is cool, because it is concatMap that's the monadic bind. And many other operations become more efficient and easier to implement because you no longer need to do buffering all over the place.
In terms of those AsyncIterators, or whatever they are called, I tried that route but the model is terrible as you're forced to introduce asynchronous boundaries on each pull. This means the model is very inefficient.
And I'm going to make another claim. Erik Meijer doesn't know what he's talking about in this case, in spite of how awesome he is, and this opinion is probably the reason why the Rx.NET library is trailing behind most Rx implementations. If I had to use .NET for a project, I'd probably end up reimplementing Monix, as Rx.NET isn't suitable for my use-cases at all.
No, they're eager - I had this mixed up too until Erik Meijer set me straight:
> Now back to your question. I do not understand what you mean by “observables being fundamentally lazy”. I’d need to understand what you mean by that before I can give a coherent answer. Iterables are essentially lazy, because nothing happens before you call “moveNext”, but I’d categorize observables as (hyper) strict since “onNext” gets called whether or not the consumer wants it (so to speak).
I think I understand the eager confusion. You mean eager because it is a push model and not a pull model like an iterator, which is lazy.
They are eager once subscribed. The lazy part that I and the previous poster were alluding to is that most observables do nothing until they are subscribed to (unless they are hot), whereas a Promise or Future is executed/scheduled immediately and the value is pushed/pulled (respectively).
I totally agree that Observables are not that great for backpressure, but really there isn't a silver bullet on that one regardless (many smart people have tried to come up with solutions that work across domains but have failed). As you mentioned, you just have to pick the right tool for the job/requirements. I for one prefer a broker ACK model like RabbitMQ, but many do not like queues, as it just shoves the problem somewhere else.
The add-ons in the newer RxJava versions, where the consumer can request a number of items and the producer must not overflow that window, make that better. The same model is also used in Reactive Streams.
But I would agree Rx is not the best possible model for streams with backpressure. Besides signaling for backpressure I also have the concern that many operators don't have support for backpressure, automatic backpressure to multiple consumers is hairy and that the push based system is also a lot more prone to resource (or information) leaks - because the producer might push something into a buffer which is never read by anyone if the consumers cancel the stream.
I still think Rx is a great tool for some use cases (e.g. inside UIs for reacting to button presses, state changes, etc.) and I'm currently using it a lot in angular2 to connect between information providers and widgets.
But for lower level data streams with backpressure between single sources and sinks, distributing resources and data exchange between multiple threads I'm now favoring other approaches.
I've experimented with a lot of designs over the last few years, and in the meantime I favor pull-based approaches (like a Future<T> getNextItem() method, or in Go simply read(buffer)). I get backpressure naturally if I only invoke a new read after the last one has finished, I can even reuse a single buffer for all reads, and composition also works through normal functions. The drawback might be throughput, but at bottlenecks one can still introduce a buffer.
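A sketch of that pull-based style in JS terms (`source.getNextItem()` and `sink.write()` are hypothetical promise-returning methods, `null` a hypothetical end-of-stream marker):

async function pump(source, sink) {
  for (;;) {
    const item = await source.getNextItem(); // only one outstanding read at a time
    if (item === null) return;               // end of stream
    await sink.write(item);                  // natural backpressure: no new read
  }                                          // until the write has finished
}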
Node seems to favor its own stream implementation (ReadableStream and WritableStream) instead of Rx, which is especially designed for these use cases, but its dual nature (switching between push- and pull-based) can make it hard to understand in some situations. The WHATWG streams approach could be promising, but doesn't seem to be fully specified yet.
The notion of hot versus cold for Promises/Futures doesn't make much sense, because you're not sharing a connection, but a result. In other words you're thinking of memoization of the result. Also backpressure doesn't make sense when you're pushing a single result, as there's nothing left to apply backpressure for.
You mentioned Singles. I actually think Singles are terrible because they are observables. You see, while it's a poetic notion, the use cases are different. In particular, Futures tend to be used in tail recursive loops whereas observables are not used like that. So Futures can and must be stack safe (in flatMap), but Observables cannot be stack safe by definition (since you always have a last onComplete event).
And in browsers, Domenic worked on something similar, which was added later (because browsers) but will hopefully be standard in all browsers soon (Chrome, since 49):
You mention that - but it's worth pointing out upfront it's a solved problem.
Also:
> The problem is that once again, Promises will swallow subsequent resolutions and more concerningly, rejections. There might be errors that are never logged!
This is arguably a much better design than the alternative - which is to violate the "callback" contract.
I'm glad `unhandledRejection` exists (and I thank you for your work).
Unlike Chrome, without setting up the handler manually, you still don't find out about unhandled rejections. Maybe it's something we could enable during development (`NODE_ENV` = 'development' perhaps).
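A minimal version of what's being suggested, gated on `NODE_ENV` (the env check is the assumption here; `process.on('unhandledRejection', ...)` is standard Node):

if (process.env.NODE_ENV === 'development') {
  process.on('unhandledRejection', (reason, promise) => {
    console.error('Unhandled rejection:', reason);
  });
}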
I've never quite understood why unhandled rejection is a problem by itself. It can only arise if there is a promise that has rejected but has nothing waiting on it. But that means there was nothing waiting for it to resolve successfully either, which is the real bug in the application.
It builds a function dynamically and then compiles it and lets the JIT optimize it - it incurs very little overhead.
This is impossible if your only tool is the promise constructor. That said - there are packages that do this without bluebird - like thenify.
Also note - you can use bluebird for promisification and native promises for other stuff if you'd like and it'd work seamlessly as bluebird passes the Promises/A+ test suite.
I'm not sure I'd necessarily call `promisifyAll()` "blazing fast", but in all fairness it's a cost incurred up front and that's it, so it's not that bad.
The call itself is relatively expensive, but as you say it's a one-off call at startup time, and the methods it produces are faster than explicit promisification using `new Promise`; those are blazing fast :)
Bluebird is mainly designed for node (V8) and promise performance can make a massive difference on the server. On the client there tend to be fewer async calls and so promise performance is usually less of a factor, but obviously, it depends what you're doing.
If it takes 100ms of CPU time to process 10 promises and 1 request involves creating 10 promises, the server can then only handle 10 requests per second at theoretical max.
IO time doesn't matter as much (for throughput) because IO is asynchronous, so the server is free to process other requests while waiting for it. But CPU time is only being used for 1 thing at a time.
So to get e.g. 10k rps out of a node server, handling 100000 promises must take well under a second of CPU time. Looking at the benchmark results you can see how many implementations will bottleneck this.
Check out the Bluebird benchmarks [0], or you can easily try it for yourself: create a simple web server which does some make-work async task, like stat-ing a bunch of files, then benchmark with ab or something better.
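One way that experiment might look (a sketch; swap the bluebird require for the global `Promise` to compare implementations):

const http = require('http');
const fs = require('fs');
const Promise = require('bluebird'); // comment out to test native promises
const stat = Promise.promisify(fs.stat);

http.createServer((req, res) => {
  // make-work: stat the same file 10 times through promises
  Promise.all(Array.from({ length: 10 }, () => stat(__filename)))
    .then(() => res.end('ok'))
    .catch(() => { res.statusCode = 500; res.end(); });
}).listen(8080);

// then e.g.: ab -n 10000 -c 100 http://127.0.0.1:8080/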
You haven't provided any evidence so far. You owe evidence because you made a claim. If you are unsure whether Bluebird's Promise performance makes a difference in a real-world scenario, why make such a claim?
You are asking a question, I do not owe you an answer, it's up to you to find out whether this stuff matters for your use case. I've done these tests myself, it is trivial for you to do the same. Do your own homework.
> You are asking a question, I do not owe you an answer
I agree! But you did answer it:
> Bluebird is mainly designed for node (V8) and promise performance can make a massive difference on the server.
If I'm sacrificing simplicity (by using a non-native Promise implementation with a very large API) because of performance improvements, I want data (proof) of these performance improvements. If you don't have the data that's fine! No harm done, I'll keep waiting until someone does.
There is never a need to write this type of terrible code if you're using promises. The solution to this is not async/await. I agree that async/await and generators are nice, but saying they solve a problem that is not actually a problem seems a little silly.
This is typical of many technical blog posts: Overcomplicate a solution that could be done much more cleanly and clearly in order to show off a new feature, when in reality that feature is actually not that useful.
You're still nesting. Considering the whole point the author made was that something went wrong with `filterUsersWithFriends`, you would simply handle that in the next `.then`... So something like:
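(a reconstructed sketch, judging from the replies below:)

getUsers()
  .then(users => filterUsersWithFriends(users))
  .catch(err => trySomethingElse(users));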
I think the problem with your solution is that trySomethingElse won't have access to the users variable unless the filterUsersWithFriends rejection explicitly passes it through.
I don't think this quite works. For one thing, the `users` reference won't be available when you try to use it in trySomethingElse(users), because it's scoped to that little arrow function above. For another, it has the effect of catching any error thrown by getUsers() and calling trySomethingElse, whereas the original code simply did nothing in that case.
Been using an async/await-like sweet.js macro, "task", in production for around 1.5 years (https://github.com/srikumarks/cspjs). This code would read:
task getUsersWithFriends() {
    users <- getUsers();
    catch (e) {
        return trySomethingElse(users);
    }
    // Everything below is protected by the catch above.
    withFriends <- filterUsersWithFriends(users);
    return withFriends;
}
.. with errors propagating to nodejs-style callbacks automatically. No need for overheads like "promisifyAll" either. cspjs turns the task into a state machine. Better semantics for catch and finally, where you can retry an operation from within the catch clause (e.g. exponential backoff).
disclosure: author of cspjs. Couldn't help the shameless plug, since this was being touted as the "future" in the original post!
This has a name, and apparently a lot of people do it: stackoverflow.com/questions/23803743/what-is-the-explicit-promise-construction-antipattern-and-how-do-i-avoid-it/23803744#23803744
Hm, looks interesting. I wonder if it would be possible to monkey-patch the defer itself in something like this, so that defer itself is handling it, but only if you do the monkey-patching per file and not globally.
defer = require 'defer-esc'
await req.get 'https://x.io', defer err, http, res # if err will call cb err
await req.get "https://x.io?hash=#{res.hash}", defer err, http, res # if err will call cb err
return cb res
> Simultaneously, Observable continues to make progress within TC39 to become a first-class construct of the language.
Worth pointing out that TC39 discussed observables with the cancellation you describe here and rejected them less than a week ago. Observables did not progress to stage 2. Currently we are bikeshedding the design at the es-observable and es-cancel-token repos (under zenparsing) - contributions are welcome.
Hopefully, a compromise everyone is happy about will be reached soon.
You also might want to mention async-iterators which are a stage 2 proposal.
import Mutex from 'lib/mutex';

class MyObject {
  constructor() {
    this.m = new Mutex();
  }

  async someNetworkCall() {
    let unlock = await this.m.lock();
    doSomethingAsynchronous().then((result) => {
      // handle result
      unlock();
    }).catch((err) => {
      // handle error
      unlock();
    });
  }
}

let mo = new MyObject();
mo.someNetworkCall();
// The next call will wait until the first call is finished
mo.someNetworkCall();
I love it. We use it in our production codebase at Codecademy.
It's quite an interesting approach for limiting the amount of concurrency in single-threaded environments. I first saw it in the documentation of the C++ Seastar library, which is also mostly a single-threaded event loop. See the chapter "Limiting parallelism with semaphores" here: http://docs.seastar-project.org/master/md_doc_tutorial.html
Their semaphore approach then allows them to start an operation at most X times concurrently, and the next one will wait until another one finishes. Seems like an interesting approach, e.g. to limit the number of parallel connections.
Why do you need mutexes in a single-threaded context? If you need sequential async operations, that's what async/await is for. What if you need to return something from someNetworkCall? Yes, you're f..cked in that case.
Presumably instances of `MyObject` can be accessed by functions which are running while this function is suspended, waiting for the network call. It's single threaded, as in only one thing happens at a time from JS's point of view, but other functions can be invoked (by a request coming in, or a UI event etc) while an asynchronous operation is awaiting completion.
That's not true. You could return (either the promise being used or a directly awaited value) and that value would be made available through await / promise as well.
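A sketch of what this reply describes, reusing the hypothetical Mutex from the example above:

class MyObject {
  constructor() {
    this.m = new Mutex();
  }

  async someNetworkCall() {
    const unlock = await this.m.lock();
    try {
      return await doSomethingAsynchronous(); // resolved value flows to the caller
    } finally {
      unlock(); // released whether we return or throw
    }
  }
}

// elsewhere: const result = await mo.someNetworkCall();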
This is a good overview of promises and async / await. The initial thesis of this seems to be that you need promises or async / await to avoid callback hell which I disagree with. You can still write good code, not use promises or async / await and not run into callback hell.
The example given in the article of callback hell isn't even a very good one. Why are we using an asynchronous method to fetch users, another asynchronous method to filter them, and yet a third one to get their likes and return those? If that's the ultimate goal then, depending on your backend, you should be able to make that a single call.
But let's ignore the poor example in the article. Let's say we need to make 3 calls to 3 asynchronous functions and the result ultimately needs to be returned after all 3 functions are finished executing. Do you just nest them like the article? Absolutely not.
Essentially my point is: you can always architect around callback hell and make it better. I've already gotten into projects where I would now consider them "promise hell" where every method returns a promise of a promise of a promise of a promise of a promise and it's just maddening to debug.
Alternatives to callback hell are always context specific. There is no one-shot-cure-all, in my opinion (not even promises or async / await) but there are plenty of steps you can take.
- Can I separate or combine at least two of these asynchronous functions?
- Can I use a pseudo parallel or serial processing pattern ala async.js?
- Can I use a messaging pattern where each message is handled separately with a single callback ala msngr.js or postal.js?
- Can I simply create a response building pattern where many asynchronous methods write to the same response object and, when it's complete, return it?
- Does it make sense to use a promise here? What about the async / await pattern?
I'm pretty new at promises, but doesn't any appearance of a "promise of a promise" indicate a trivial mis-use of promises? Isn't the whole point of promises that they can be chained together without nesting?
Indeed. And if you generalize this idea only slightly, you get ...... monads!
Maybe promises will be the vehicle that makes understanding the usefulness of monads available for the masses.
Alternatively, promises may fail the acceptance test for the same reason as monads - because the average developer simply doesn't care / doesn't get it.
To expound on this connection: the basic idea is that a Promise has a `.then(fn)` method call which takes this `fn`, which returns a value. If that value is not a Promise, then it is lifted to a new promise; either way that promise is what's returned by the .then().
In Haskell, for type safety, you have to explicitly write this conversion and the function which does it has the somewhat-confusing name `return`. The `.then()` method then has the name `>>=`, pronounced "bind".
If you have these, you have a template for type-safe functional metaprogramming. Given any x, you can produce "a program which produces an x" (return), and given any function from an x to a program which produces a y, you can bind that onto a program which produces an x, to produce a program which produces a y:
return :: x -> ProgramWhichProduces x
(>>=) :: ProgramWhichProduces x -> (x -> ProgramWhichProduces y) -> ProgramWhichProduces y
This was huge historically because you can still metaprogram in a purely functional language, even though you cannot do things which have side-effects. So I can give you a pure primitive like `println :: String -> ProgramWhichProduces Nothing` and you can build `println("Hello, world!")` with it, and there are no side-effects in running that function on that input, so this is purely functional. In this way the act of actually running a `ProgramWhichProduces x` as an executable is deferred until the language is no longer in play: when the compiler is run, it looks for a `ProgramWhichProduces Nothing` called `main` and writes that out as the actual executable generated by the compiler.
Monad refers then to precisely this design pattern of saying "I have some trivial embedding `x -> m x` into my problem domain `m` and I have some way to combine an `m x` with an `x -> m y` to produce an `m y`, hey, that's a monad!" ... One example is list comprehensions; there is a trivial way to embed any element as a singleton list [x] and then you can always use any function x -> [y] (called a "transducer" in Clojure) to process a list [x], concatenating together all of the [y]'s you get along the way. With ES6 generators this looks like:
function* forEach(iterator_x, ys_from_x) {
  for (let x of iterator_x) {
    yield* ys_from_x(x);
  }
}
Hey, that's a monad. You can write a list comprehension like:
// equivalent to Python's [f(x, y) for x in list_x for y in list_y if predicate(x, y)]
forEach(list_x, x => forEach(list_y, y => predicate(x,y) ? [f(x, y)] : []))
In Haskell you can write all monads with a special syntax called `do`-notation; you can write the above in any of three ways:
[f x y | x <- list_x, y <- list_y, predicate x y]
-- which desugars to...
do
    x <- list_x
    y <- list_y
    if predicate x y then [undefined] else []
    return (f x y)
-- which desugars to almost exactly the JS above...
list_x >>= \x ->
    list_y >>= \y ->
        (if predicate x y then [undefined] else []) >>= \_ ->
            return (f x y)
So that's the general pattern of monads and do-notation in a short, sweet lesson.
If you are not trying to use Haskell (which has monads as a workaround for certain problems), then what is the point of having monads? The average developer doesn't care because they are working in an environment where monads aren't needed.
Trying to drag monads into arbitrary languages like Javascript isn't some kind of best practice, it's cargo culting.
Monad is a mathematical term for certain forms of (categorical) abstraction. Yes, Haskell actively uses the term "monad" in the language itself to discuss these things, but that doesn't make them a "Haskell thing" any more than Fortran has mathematical formulas and thus all math is a "Fortran thing".
Monads are also not a "workaround for certain problems", contrary to what you hear sometimes, but an attempt to generalize a number of seemingly unrelated concepts into a uniform interface.
Uniform interfaces are good for programmers. This is the magic that monads bring to other languages as we get a handle on what each monad is for and what it does. You don't have to know what a monad is to make use of it. You don't need to know that Promises are a "continuation monad" to take advantage of the fact that async/await is a simple tool for transforming the "continuation monad" into something resembling synchronous code, which is a big win.
(Meanwhile Haskell points out that you can go simpler than async/await for your monad bindings if you like and use the same constructs for other monads too, but language developers are taking this one step at a time and trying to hit a happy medium where they have the power that monad binding gives but the discoverability and debuggability of more bespoke solutions to individual monads...)
Needless to say, it's not cargo culting so much as learning more about the very mathematical fundamentals of programming and using those fundamentals to build better languages with better features for better programs.
Haskell doesn't have monads as a work-around. They are a useful abstraction because they capture a common semantic pattern found in many programming concepts. Nullable types, error handling, sequential execution, and asynchronous event handling are all concepts that can be described monadically.
By describing them as monads and making the monadic pattern a core idea of the language, you can write algorithms that operate on any monad, so that you can write one function and have it be useful for both managing the execution of IO and error handling in a generic way.
People have an aversion to monads because the concept of a monad has no grounding in reality, kind of like complex numbers. It is an abstraction that is useful not because it describes anything, but because it is simply a useful way to think.
> If you are not trying to use Haskell (...) then what is the point having monads?
I'm not sure how to answer this question, as my previous comment actually is the answer to your question.
Promises are a great example. They are a mechanism that provides better encapsulation and is still easier to compose than using pure callbacks.
(IIRC, Haskell made a similar transition: monads haven't been in Haskell forever; many other ways were tried before, but in the end they turned out to be quite cumbersome and ugly in larger code bases. The introduction of monads was generally perceived as a relief in Haskell, and they are becoming more and more popular in other non-lazy functional languages as well, such as OCaml.)
Check out the Elm language. It's built on top of monad ideas (but the language's author avoids using the term, for good reasons). It makes programming web pages very composable, scalable and maintainable. It shows that monads are a really good fit for event-driven programming.
You might as well ask what's the point of having Iterable<> when you're programming in C :)
It's not that you strictly _need_ monads or Iterable interfaces, it's that they represent an abstract concept which allows you to solve problems in a generic and type-safe way
I wouldn't probably "drag monads" into Javascript, but I'd surely appreciate Functors when programming in Javascript :)
A Monad is a general-purpose mathematical structure, not just a workaround. GP gave you an example of something they're useful for, which you completely ignored.
> Isn't the whole point of promises that they can be chained together without nesting?
Yes, it's an exceedingly common misunderstanding, and probably one of the main causes of people being "turned off" by promises. It's kind of frustrating to see so many blog posts making this mistake when critiquing them.
Correct, but once you get some functions returning promises and other functions returning actual values, your own code base can become confusing. I constantly find myself in these types of code bases trying to find function definitions so I know what they return.
You can try and make 100% of your asynchronous methods return promises but what happens when you need to work with a library that can't use promises? Do you write a callback for that? Now you have a mixture and sure a few of these no big deal but when the code base balloons and you start having one offs all over the place now you need to figure out what returns what and code specifically to that.
Maybe if promises had been in ECMAScript from the beginning then everyone would have used them and it wouldn't be a huge deal, but right now it can get a bit frustrating, in my opinion. The stack traces of these nested promises are never fun to dive into either, though that likely depends on whether you're using a polyfill or not.
I'm not quite sure I understand you, but it sounds like a case for `Promise.resolve(value)`[1] – that's what I do whenever I don't know if something will be a Promise or an already-resolved value. `Promise.resolve(value)` returns a Promise that immediately resolves with `value` or whatever `value` resolves to if it's a Promise – no need to do any checks yourself, no need to nest Promises inside Promises inside Promises.
For example, imagine you have a charting library and you want to draw a chart, but you don't know if the data is available already or if it's being loaded asynchronously. You can just write:
// `drawChart(data)` is a synchronous function
// `data` may be the data itself or a promise that resolves to the data
Promise.resolve(data).then(drawChart)
For me, this is actually the killer feature of Promises, avoiding callback hell is just a nice side-effect. Once you start thinking of Promises as placeholders for values that may or may not be known already (rather than just an abstraction of the `callback(err, value)` pattern), you can write some really elegant code.
Fair enough that would take care of my issue for the most part. Native implementations of promises should work well with that. Seems most of the polyfills end up using setTimeout() which is kinda slow[1] and one of the reasons I don't typically like working with them (waiting for more ubiquitous support).
>This is a good overview of promises and async / await. The initial thesis of this seems to be that you need promises or async / await to avoid callback hell which I disagree with. You can still write good code, not use promises or async / await and not run into callback hell.
Even if you could, you shouldn't. It's busy work -- manually doing the computer's work.
What I think people sometimes forget about async/Promises vs. callbacks is that callback style defines a code structure that is hard to maintain by multiple developers, since it leaves a lot up to the way you write your code. Not every team has 5 experienced JS developers, so at some point implementing a small feature by a team member becomes: "Hey, let's just nest another callback in there. It takes 1 line of code" instead of implementing it in a readable way.
Promises solve this, but indeed there are a couple of problems there with debugging. Honestly, I think they give more readable code than whatever callback implementation you invent to avoid the callback hell. Libraries like bluebird can make your promise operations really short, even one-liners, without a performance hit.
Finally, now we have async, which I hope will eventually become at least as popular as Promises are now.
Even when implemented in a readable way, callbacks can still become hard to maintain. When you use properly separate functions (instead of anonymous ones), you typically have to embed those methods inside one another.
I'll take an example from the callbackhell website that was linked previously:
document.querySelector('form').onsubmit = formSubmit

function formSubmit (submitEvent) {
  var name = document.querySelector('input').value
  request({
    uri: "http://example.com/upload",
    body: name,
    method: "POST"
  }, postResponse)
}

function postResponse (err, response, body) {
  var statusMessage = document.querySelector('.status')
  if (err) return statusMessage.value = err
  statusMessage.value = body
}
What happens when I want to do something other than `postResponse` when `formSubmit` is finished? I would have to start thinking about using partial application and providing specific callbacks, which is okay, but what if the callbacks are multiple levels deep? The code still starts to get muddy really quickly.
With promises / async, the asynchronous code doesn't need to know anything about what functions want to run afterwards, so the concerns stay completely separated.
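For comparison, a sketch of the same flow assuming a promise-returning `request` (hypothetical; the callback version is from the callbackhell example above):

document.querySelector('form').onsubmit = function formSubmit (submitEvent) {
  var name = document.querySelector('input').value
  var statusMessage = document.querySelector('.status')
  request({ uri: "http://example.com/upload", body: name, method: "POST" })
    .then(function (response) { statusMessage.value = response.body })
    .catch(function (err) { statusMessage.value = err })
}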
Exactly. You need to keep really good control over your code structure to avoid callback hell. And again at one point you will end up in a situation where you will need to declare functions inside a main function, etc.
That's why I noted that I've never seen a callback implementation that is more readable than promises.
And there's just about never a reason to use the `Promise` constructor in real-world code. You already have a promise, as in your example, so you can rewrite your refactoring as:
> And there's just about never a reason to use the `Promise` constructor in real world code
Why not? If I want to wrap a callback function in a promised sequence, I don't see a way around it. Please don't suggest importing bluebird/promisifyAll because that only teaches me how to avoid thinking about the problem, not how it's actually solved, let alone why it's a problem to begin with.
function promiseMeFoo() {
  return new Promise((resolve, reject) => {
    asynchronousFoo((err, out) => {
      if (err) return reject(err);
      return resolve(out);
    });
  });
}
^ aside from just rearranging the code for legibility, how can I do the same thing without constructing a promise explicitly?
Bluebird's `promisifyAll()` is just a way to generalize that pattern, so that you don't need to write it yourself. And if you care about performance it's substantially faster too. If you want to learn about the pattern then fine, but personally I don't want to write that for every single `fs` function I use in a project, it's much more convenient to just `Bluebird.promisifyAll(require('fs'))`.
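For the curious, the pattern generalizes to something like this (a minimal sketch; bluebird's real implementation is heavily optimized, which is where the speed difference comes from):

function promisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      // assumes the usual node-style (err, result) callback convention
      fn(...args, (err, out) => err ? reject(err) : resolve(out));
    });
  };
}

// usage:
const fs = require('fs');
const statAsync = promisify(fs.stat);
statAsync('/tmp').then(stats => console.log(stats.isDirectory()));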
I noticed this as well. On the other hand, even though the example the author provided could be substantially simplified as you showed, if you wanted to later on refactor the `getUsers` function again to add more logic, your promises code would end up more and more contorted. In practice I've found `await` and `async` to hugely simplify these kinds of complex promises.
I have the opposite experience of that. Generally function-based promise code is just about the same as async/await. It does take some time to get used to all the equivalencies between imperative code and promises though.
AKA the explicit construction anti-pattern: stackoverflow.com/questions/23803743/what-is-the-explicit-promise-construction-antipattern-and-how-do-i-avoid-it/23803744
The "problems with callbacks" section of this is loaded with FUD. I mean, the actual problems they list are almost all minor or easily solvable.
1. Repetitive error-handling, because you usually just want to pass it along. Maybe. Or maybe not. Forcing you to actually think about how you want to handle each error might be too cautious, but it's defensible -- especially in the context that async stuff is generally I/O related, so it's more likely than regular method invocation to need a close or a cleanup or a rollback in an error case.
2. The possibility for introducing error by forgetting to return from an error-handling callback pass-along. This is trivially catchable with a linter. (Eslint has a callback-return rule for this purpose.)
3. The idea that callbacks might be called multiple times. When that happens, it's not some weird case where maybe your callback needs to be prepared for multiple invocations, it's just a bug in the function you're calling. It should never happen with solid library code.
4. That the error parameter of a callback might be any kind of falsey value. This is true, but not a problem. Don't test for null or false, test for falseyness.
As much as people talk about "callback hell," it's never clear to me what problems they actually think they're having. It seems that as often as not, it's just a visceral dislike of indentation.
Most examples of callback hell on blogs and forum comments are just linear pyramids where each callback neatly waterfalls into the next. They are manageable but ugly which I think is why it's common to dismiss promises/await as cosmetic.
In practice, callbacks become a problem when you need conditional async logic. Like if you have your simple callback pyramid but want to fork the chain at some level and merge it back at the end.
What would've been a basic if-condition with promises/await is often an invasive change with callbacks where your only tool is to extract and rewrite logic into helper functions. A simple conceptual change easily blossoms into heavy indirection where taming the complexity becomes an engineering feat.
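For instance, a fork-and-merge that is trivial with await (a sketch; `getUser`, `getFriends` and `render` are hypothetical helpers):

async function loadProfile(userId) {
  const user = await getUser(userId);
  const friends = user.hasFriends ? await getFriends(user) : []; // fork the chain...
  return render(user, friends);                                  // ...and merge it back
}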
At the end of the day, we all have different appetites for trade-offs.
That's definitely a better example, yeah. I think that it's a very manageable problem -- if your functions are small and compositional in the first place, you're not going to be refactoring your 100-line magnum opus in a major way when you introduce a new conditional bit -- but that's definitely at least a problem that rings true to me in the way that so many "callback hell" complaints don't.
>As much as people talk about "callback hell," it's never clear to me what problems they actually think they're having. It seems that as often as not, it's just a visceral dislike of indentation.
It's surprisingly hard to articulate the problems. It's one of those things that you can experience but that is hard to explain. I tried to explain it[1] by going through the example of implementing a vending machine[2] in various concurrency patterns in JS, starting with callbacks and ending with async/await. I still don't think I did a great job.
I do sort of wish we had something better than promises. Errors can be swallowed, and there's no way to cancel promises, which makes sense but is also problematic when promise-based APIs like Fetch exist. You also can't indicate progress in a promise, which makes them a poor choice for many asynchronous processes (unless you attach a callback as well).
I like the idea of Observables though. If they could work implicitly with async/await I'd be sold.
Fetch will take a cancellation token - it's just taking a while but work is being done. A lot of discussions are happening since people are scared (rightfully so) about making a new API bad from the start.
The "errors are swallowed" is also FUD, see my comment below about `unhandledRejection` and `unhandledrejection`.
Don't get me wrong, I use async/await almost exclusively these days - but I've run into these problems in large real world applications, and they can be hard to fix.
This isn't FUD if the main implementation is "broken".
With regards to errors being swallowed, I'm not talking about unhandled rejections, I'm talking about code that mishandles rejections and may swallow one intentionally, or code that doesn't pass along errors properly.
Wow, can I just ignore the article for a second and say that I love the Markdown... meta thing going on? That is to say, the Markdown is totally visible, unchanged, but still rendered as bold/header/etc.
I'm going to have to use that, I think. Looks great!
I agree, it looks great. Funnily enough I didn't even notice until you mentioned it.
However, I have to pedantically point out that the Markdown is not 100% visible. The author made the correct choice and just showed links as links instead of the raw Markdown syntax.
I love async/await, it's the future, but dear god debugging transpiled async/await code is a nightmare (and I'm no stranger to JS transpilers, been doing it since 2007)
I'm not sure if Babel/Webpack doesn't support source maps with async/await, or if I don't have it configured right, but if someone has gotten it working please let me know how.
Activate the async-to-generator plugin during development; it is a simpler transformation, so it has massively better source map support, though it still has some difficulties placing certain breakpoints. Sadly, uglify still doesn't support generators, so we can't use it in production mode.
Are multi-catch or typed catch statements anywhere on the ES roadmap? These are really convenient; essentially you do:
DB.insert(obj).then(result => {
  ...
}).catch(UnauthorizedError, e => {
  // Will end up here if auth failed
}).catch(UniqueConstraintError, e => {
  // Will end up here if there's a conflict in the DB
}).catch(e => {
  // Generic catch-all
});
The above is at least available in the Bluebird promise API and greatly improves code organization and readability in my opinion.
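With standard promises you can approximate typed catches with `instanceof` checks (a sketch; Bluebird essentially does this matching for you):

DB.insert(obj).then(result => {
  // ...
}).catch(e => {
  if (e instanceof UnauthorizedError) {
    // auth failed
  } else if (e instanceof UniqueConstraintError) {
    // conflict in the DB
  } else {
    // generic catch-all
  }
});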
I wrote a module that provides something similar to this using ES6 generators (which are already widely supported), but also supports callbacks in addition to promises. This way, you can make calls to existing APIs without having to wrap them to return Promises.
co is similar to watt, but the main difference is that it only works with Promises. It requires that you convert callback functions to return Promises before you can call them, and it does not let you wrap generators with a callback API.
I'll plug "scopes & closures" and "this & object prototypes" as well. Fairly quick reads and you'll understand JS better than your average developer afterwards.
Kyle Simpson is crazy. Read the first third of any of his books and then put them down before you get to the part where he goes off the rails. The "Scopes and Closures" one is particularly ridiculous... He goes off on a tangent about how let bindings should be written with a specific syntax and don't worry he's written a transpiler for it so you don't have to bother using the standard.
He can get preachy about his preferred (subjective) method of solving some problems, but you can't deny he does a good job of explaining the fundamentals in an easy to understand manner. That is, it doesn't feel like reading a textbook.
Bottom line, the books have a lot of value in them for the time investment it takes to read them.
How do people handle shutdown of applications while async/await stuff is still executing? I do a lot of desktop apps and need to stop executing when the app closes. With the async stuff it's really hard to tell what's still in the queue to be executed and should be waited for.
Is the whole pattern more for server stuff where you can assume there will be no shutdowns?
If you're on OSX/iOS you would use an NSOperation and NSOperationQueue.
Your NSOperation objects would conform to the NSCoding protocol. Then you would iterate through NSOperationQueue.operations and use NSKeyedArchiver to serialize.
The first example is terrible and the second two are good. Is that last one really worth adding yet another language feature and two more keywords?
This is the sad thing about big committees with lots of corporations on them: everyone "representative" wants to report back to HQ that they had "impact" on the future standard, they want their fingerprints in there in some way.
That's how standards grow warts and features and keywords and competing module systems until they're a hairball.
Here's what John Resig, creator of jQuery, had to say about it. It's short and funny and sad.
I don't really like await. It seems to be a syntactical fix which acts in a quite magical way. Not sure if introducing magic to a language syntax is what we want. Rather, we should think about data flow of our async apps differently. I believe Observables are the future of async code. Once you grasp the concept that everything in your app can be transformed into streams of events, and you work with these events using chains of .map(), .filter(), .reduce(), etc. you'll grasp the true power of this paradigm and how well it fits async and/or concurrent apps.
I suggest anybody that doesn't quite swallow the await syntax, to have a look at this presentation about Observables at Netflix as it's quite mind changing: https://www.youtube.com/watch?v=XRYN2xt11Ek
>I don't really like await. It seems to be a syntactical fix which acts in a quite magical way. Not sure if introducing magic to a language syntax is what we want.
You mean like having "for", "if/else", "functions", etc, instead of just "load" to some address and "jump" to an instruction?
There's nothing magical about async/await.
The main people I know who like programming with callbacks directly are people who first learned async programming in JS (and thus don't know the relevant PL history).
await feels more magical than other language constructs because, at least to me, async constructs are much more complicated than sync ones (if/else, functions, for, etc.). A quite important dimension that the await/async syntax adds is time. Now the programmer is forced to think about time, even though the code is meant to read sequentially.
For instance what is `return await asynchronousOperation(val);` meant to return? You'd think that a return always returns something immediately - however when reading this code your mind has to process also the await keyword and think about the extra time dimension.
With a callback you have clearer separation of concerns - you know you're passing the behavior (the function) as an argument that will be executed at some other moment in time.
This async/await stuff seems like a fix for "ok JS code is becoming callback hell, how do we fix it?", rather than thinking about what async programming actually means and how we can reason about it more naturally - for instance I don't want to code async stuff and have it look like it's sync, because well, they're different concepts!
>for instance I don't want to code async stuff and have it look like it's sync, because well, they're different concepts!
They shouldn't be though.
In the ideal case, code should be able to function 100% whether it's a sync or async operation -- kind of how promises (and async with promises) work. The async-ness could just be a deployment/configuration detail.
Isn't it ironic that for a language that has new frameworks popping up all the time, the guts of it evolve so slowly?
I primarily work in JS and C#, so I can't speak for other languages, but async/await in C# is incredibly useful and has been around for years. JS feels like it lags a long way behind.
I think it's more or less obvious that languages and platforms run by a benevolent dictatorship can move much faster than ones run by a committee of several different groups and companies.
This is especially true of JavaScript, where maintaining backwards compatibility is really important [1] -- you have to avoid doing something you can't change later.
They also tend to wait until there's a few implementations of any proposal already. It has to be tested first, before becoming standard.
I've been using async / await since babel started out, and before that I was using co with generators.
While I do agree that it's a big improvement, it sucks to see people using try/catch for flow control. This is really a problem with Promises in general: you can't easily differentiate between exceptional behavior and failures.
After playing around with Elixir for some smaller projects, I've really been enjoying their approach: you return a tuple with :ok and the result or :error and the result. And they ALSO have exceptions, but they're not used for flow control, you only get exceptions for truly exceptional behavior.
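You can mimic that style in JS if you want expected failures as data rather than exceptions (an illustrative sketch, not an established convention; `db.findUser` is a hypothetical promise-returning call):

async function fetchUser(id) {
  try {
    return ['ok', await db.findUser(id)];
  } catch (err) {
    return ['error', err]; // expected failure returned as data, not thrown
  }
}

// at a call site (inside an async function):
// const [status, value] = await fetchUser(42);
// if (status === 'error') console.error(value);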
Of course, it's nice to get this sort of native support for trampolining, I just wanted to be the killjoy who pointed out that as of ES6 you can already write the same thing in a slightly different way, with functions that only go up to maybe 30 SLOC or so each and therefore don't kill your codebase.
Although async and await do look great, I am not very keen on using try/catch for everything. Is this not going to end up with more verbose code? I appreciate it will be impossible to ignore errors though.
In my experience async/await style code is much less verbose than callback style code. Like 50% less. Async/await style is also much easier to follow, especially for non-node developers, since it makes asynchronous code appear as if it is synchronous.
That said I like to use the caolan/async library when dealing with async callbacks and that adds a ton of verbiage overhead in exchange for a more understandable flow of the data (in my opinion).
The async.js library has awful semantics - promises are an improvement in every way compared to it, whether for reasons of flow, readability, or even perf (bluebird is quite a bit faster).
The catch-all try/catch is my major dislike about async/await though. Throwing being the only way to exit the async/await flow is overkill, and it makes it more complicated to localize different desired flow branches.
I'd love for syntactic sugar to mark a function, arrow or otherwise, as returning a promise. Reject and resolve could be standard, hidden, context-specific variables, just like arguments and this.
> Not only is it [linearly written code with async/await] easier to read (as the chaining example was), but now the error handling behavior is the exact same as regular synchronous JavaScript code.
Then why not write code synchronously in the first place?
I understand that async/await looks attractive in languages with an event loop model, such as Javascript, or with expensive threads, such as C#.
I see two main justifications for async/await in these languages:
1. Linear code.
2. More performant code.
It's easier to follow a linear flow of code than it is to follow entangled callbacks and state machines.
Async/await is also sometimes perceived to help with performance. This is an observation that is typically made in languages with few, heavy threads, such as C#. As async/await code "doesn't block", expensive thread resources are mostly free to execute concurrent async/await code.
It seems to me that async/await provides very little apart from making code flow over callbacks look like regular, synchronous, structured, procedural code.
It introduces new concepts: Now there are two types of function calls: vanilla "blocking" function calls and the new async function calls. The difference is that one "blocks" or "takes a long time" whereas the other does not.
Some APIs are only available in one flavor. How do I adapt between the two conventions in code that is required to deal with both API types?
C# and the .NET framework mostly choose to completely double API surface - for example you now have a System.IO.Stream with both vanilla and async/await Read()/Write() etc. methods. You can only hope that they're implemented in a consistent fashion. (they're not)
Given all these conceptual and practical disadvantages I don't understand how async/await becomes such a praised thing.
I'm in favor of coroutines/green threads/goroutines (depending on whether you learned programming in the 1970s, 1990s or 2010s ;)) with a scheduler that multiplexes I/O onto the appropriate OS concepts for concurrency. With this approach you get the benefits of structured, procedural programming without the mental and practical disadvantages of async/await.
"callback hell" is just a sign that you did not split your code into small functional chunks, nothing more. promises/async/await might have its place, but "callback hell" is developer's fault, who can't shake off linear coding past, not language's.
I am not talking about hiding anything. If anything, promises/async/await IS hiding.
When I see "callback hell mumbo-jumbo", it just takes some time to untangle it, assign a proper context to each function, and flatten it out inside a proper object to keep that context in. And instead of anonymous functions in callbacks you just use object methods.
Bonus: you get something that is very easy to unit test.
I see a lot of callback-hellish code, but it comes from people whose experience was in python/java/php/etc., where you typically write linear code (threads are also linear code in essence). In JS you have to think about frames of code - this is a bit different, but it is not a bug, it is a feature.
After 2 years of FP in Scala, I'm amazed at the lengths to which OOP engineers will go to avoid learning what monadic composition is (including reinventing it).
Go ahead guys, in another 6 months you'll reinvent the for comprehension! ;)
You forgot about time dimension, it's a big deal.
Take a look at RxScala -- they wouldn't go implementing it if it was already included in Scala :)
But yeah, Rx makes handling streams of events kinda similar to handling collections of objects, which is cool. Except for time dimension (`.debounce()`, `.throttle()` etc) which, as I said, is a big deal.
For people who know both systems well, how does this compare with Erlang? The await thing superficially looks a lot like Erlang waiting for a message from another process.
Await typically uses a CPS transform to rewrite methods into continuations. Effectively the remainder of the method after the await is passed as a callback to the async operation. The async operation can resume when it completes.
Erlang's waiting will interact with the VM's scheduler, and pause the green process until a message is available. Logically though, there's little difference between a stack treated as data by a scheduler, and the closure captured by the continuation after the CPS induced by await. The chief practical difference is a continuation could be called more than once. Resuming a green process with the same stack location more than once won't work very well because the stack gets reused. But you could get around that by storing the stack not as a reused vector, but as a linked list of heap-allocated activation records (an exact parallel to nested closures, which happens in anything beyond the most trivial CPS transform). Erlang wouldn't do that because (a) it's slower and (b) it's not necessary since it can't continue more than once.
(I implemented anonymous methods in the Delphi compiler and got intimately familiar with lots of these details.)
Generators are implemented with continuations, just like C#'s iterators (yield return statement). In C#'s case, it rewrites the function into a continuation via a state machine, but it's just using an integer to point at the remainder of the function, just like an index into an array isn't very different to a pointer - it's a distinction without much of a difference.
(JS VMs are free to implement generators however they like, of course.)
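To make that concrete in JS terms, an `async` function like `async function f() { const a = await g(); return h(a); }` can be compiled into roughly this kind of state machine (illustrative only; real compilers emit something more general, and `g`/`h` are hypothetical):

function f() {
  let state = 0;
  let a;
  return new Promise((resolve, reject) => {
    function step(value) {
      switch (state) {
        case 0: // code before the first `await`
          state = 1;
          g().then(step, reject); // the remainder of f becomes the continuation
          return;
        case 1: // code after `await g()`
          a = value;
          resolve(h(a));
          return;
      }
    }
    step(undefined);
  });
}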
While async and await are a welcome change, callback hell is mostly a symptom of the programmer's own doing. It is possible to have callback hell in any async programming stack/framework.
In C++, you usually don't write anonymous functions and lambdas at will. You make proper named functions out of them and write unit tests even. Same applies to js. It's really just a matter of discipline and having a culture of writing manageable code.
If someone is curious how this can be done, please post a link here and I can help clean it up.
I've got a classic callback hell type situation that just cropped up in one of my applications. We are using an image library to change an uploaded file into a background image for the header.
I've toyed with separating each stage into sections, but I still need a callback to return at the end of it. Which seems to necessitate an anonymous function every step of the way.
function resizeImage(req, res, next) {
  lwip.open(req.file.buffer, 'jpg', function (err, img) {
    if (err) return next(err);
    var image = new Image(img);
    let ratio = (image.width() > 960 ? (960 / image.width()) : 1);
    async.series([
      image.scale.bind(image, ratio),
      image.crop.bind(image, image.width(), Math.min((image.width() / 2), image.height())),
      image.write.bind(image, req.params['id'], "TITLE_IMAGE", buffer, "image/jpeg")
    ], next);
  });
}

router.post('/:id/title-image', function (req, res, next) {
  resizeImage(req, res, function (err) {
    if (err) return res.sendStatus(400);
    res.sendStatus(200);
  });
});
This is just my "way" of writing things:
1. I write with 4 spaces because it lets me know when I should refactor my code into something more readable.
2. I usually move route handlers to their own file. So it will read like
```
router.post('/:id/title-image', routes.resizeImage);
```
The idea is that the routes code simply goes through the model code and sends the right HTTP status codes.
3. async is your friend :-) In general, I try to model my model APIs with async in mind.
I can give it a better shot later today if you had any comments on my first try.
I'm not sure async is my new favourite library. It seems to take a lot of behaviour out of my hands and obscure it, and then it returns the result of each function as an array. During processing it doesn't remain clear to me what the value of `image` is. It appears the crop operation is being applied to the original image rather than the scaled one.
That said - it does seem to work?
Another alternative, since I am calculating the dimensions of the image up front (something that is easier to see once the code is a bit more organised like this), is to use the library's built-in `batch` functionality.
It wasn't possible because I didn't have the image dimensions up front. But now I do. So I used that approach instead of async in `try2.js`. This refactor doesn't help in situations where the result of the previous callback is important.
Would a Promise be the best fix then? Promises seem to do the same thing as async but in a more javascripty way... hey, you said no need.
For a good starting point for anyone who doesn't know how to handle callback hell without promises or transpiled features, this is a must read: http://callbackhell.com/.
Are you saying "stop using NodeJS with Javascript" and use a third-party language that compiles to Javascript? What's the point of using NodeJS in the first place? Better to use Haskell directly...
Oh man, I can't say enough good things about PureScript!
One really neat thing is since PureScript basically copies Haskell's type system, if you learn PureScript you're learning to use one of the best type systems in industry.
Weird edge cases with Chrome are surprisingly common in my experience. We just relaunched our site after a major rebrand and one user contacted us throwing a huge fit that our site was terrible and every blog page made Chrome crash. This was only one user out of 20,000 uniques according to GA, and I'm sure if we had a widespread problem we would have heard about it from more than one person.
I've also seen really weird cases of Chrome incorrectly caching stuff, one time it broke web fonts for a few days (within the last couple of years, no less), etc.
Chrome updates a lot, and to me it seems they stick pretty closely to the "go fast and break things" motto.
In my case I think I must be stuck on an old version (42) by corporate IT. But to crash Chrome 42, you have to use some nasty javascript or exploit a vulnerability. I haven't seen the website (for obvious reasons), but it would have to be a pretty sophisticated website to justify that kind of use of javascript. And from the title it looks like just a blog article...