It's not just line 18; I barely have any idea what any of the lines are doing. Single-letter variable names are almost always a bad idea, with one exception: using `i` as an index. And even then I don't do it any more, because `index` is more obvious.
After the web worker executes its function, it posts a message back to the original context with three arguments: the call id (`c`), the error string (`e`), and the successful result object (`d`).
The greenlet function, when invoked, returns a promise waiting to be resolved or rejected. The resolver and rejector functions are stored in the object `p`, keyed by a unique call id.
So line 18, `p[c][e?1:0](e||d);`, basically means: resolve or reject the promise based on the parameters from the web worker message, by invoking either the resolve or the reject function stored under its call id.
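Spelled out, and assuming `p[c]` holds the pair `[resolve, reject]` for call id `c`, that one-liner is roughly equivalent to:

```js
// Hypothetical expansion of `p[c][e?1:0](e||d);` — not greenlet's
// actual source, just the same logic made explicit:
const [resolve, reject] = p[c];
if (e) {
  reject(e);   // e is truthy → index 1 → reject with the error
} else {
  resolve(d);  // e is falsy → index 0 → resolve with the data
}
```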
Here's my quick attempt to understand and annotate this lib.
I haven't touched JavaScript for a long time, especially the shiny features in ES5, ES6... those arrows always throw me off. Is this a typical JS & Node.js style nowadays? Does anyone else feel that their code and their co-workers' code are barely readable?
Not the parent, but I find function vs fat arrow to be similar to:
* def vs lambda in Python
* fn vs closures in Rust
* def vs anonymous functions in Elixir
Honestly, given how often I find myself using small, immediately-consumed functions in common filter/map/reduce operations, I find fat arrows to be an extremely welcome addition. They spare a lot of excessive `function` keywords everywhere and give you just a little more room for more verbose variable names when you have to deal with things like enforced maximum line lengths.
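A small illustration (assuming a `users` array whose items have `active` and `name` fields):

```js
// The same chain with `function` expressions...
const names = users
  .filter(function (user) { return user.active; })
  .map(function (user) { return user.name; });

// ...and with fat arrows:
const names2 = users.filter(u => u.active).map(u => u.name);
```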
I write JavaScript/Node everyday. In my experience one does not write code like this when working with a team or a project that will be handed off; the author(s) are trying to minimize file size with expressions that are as small as possible. This seems okay to me for a library where your API is simply a function wrapper.
The syntactic complexity of line 18 isn’t really the issue (in fact, it shouldn’t really require a deep understanding of JS syntax), it’s the meaningless single-letter variable names.
> ⚠️ Caveat: the function you pass cannot rely on its surrounding scope, since it is executed in an isolated context.
That's a pretty major caveat, and it makes the project's tagline basically false. It's just a hacky way to spawn a Web Worker on the fly: it coerces a local function declaration into a string and evaluates that string inside the worker.
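A sketch of why the caveat exists (not greenlet's actual source; just the serialization problem in isolation):

```js
// Stringifying a function drops its closure, so closed-over variables
// no longer exist when the source is re-evaluated inside a worker.
const multiplier = 2;
const double = (x) => x * multiplier;

const src = String(double); // "(x) => x * multiplier"
const blob = new Blob([`onmessage = e => postMessage((${src})(e.data))`]);
const worker = new Worker(URL.createObjectURL(blob));

worker.onerror = (err) => console.log(err.message);
worker.postMessage(5); // ReferenceError: multiplier is not defined
```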
Not in this case, because copying data to and from a web worker uses the structured clone algorithm, which is about as fast as JSON unmarshaling. But in this case it's actually doing significantly more work:
* read the request
* parse the JSON into an object
* serialize the data out of that object in the worker, to send it to the main thread
* create a new object in the main thread from the worker's data
Those last two steps are about as slow as a JSON.stringify plus a JSON.parse, and they're completely unnecessary here. As others have said, adding some filtering in the worker would make this example worlds better.
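For reference, the pattern being criticized looks roughly like this (a sketch; the URL is hypothetical):

```js
// The worker parses the JSON, then the *entire* parsed object is
// structured-cloned back to the main thread — paying both costs.
const fetchAll = greenlet(async (url) => {
  const res = await fetch(url);
  return res.json(); // the full object crosses the worker boundary
});

const data = await fetchAll('https://example.com/big-payload.json');
```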
That means that any I/O doesn't block the main thread, and other parts of the JS can execute.
This example provides literally no benefit, and is actually a performance hit: it converts the data from JSON in the worker, then sends it back to the main thread, which copies it in a way that's roughly as expensive as stringifying it and parsing it all over again on the main thread.
It's true, the example could be better. However, adding any form of data pruning to the example would immediately show the benefit: JSON parsing happens in the worker, and only a small subset of the data is actually serialized and sent back to the main thread.
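A sketch of what that pruned version might look like (the URL and field names are made up):

```js
// The heavy fetch + parse stays in the worker; only a small,
// pre-filtered payload crosses the structured-clone boundary.
const getTopUsers = greenlet(async (url) => {
  const res = await fetch(url);
  const users = await res.json();
  return users.slice(0, 10).map(({ id, name }) => ({ id, name }));
});

const top = await getTopUsers('https://example.com/users.json');
```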
Yeah, that would be a great way of handling it. The other would be to parse the data in the worker, insert it into IndexedDB, and then in the main thread only pull out what is needed.
(I had to do exactly that for a B2B app that was receiving hundreds of MB of data a while back.)
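A rough sketch of that approach, running inside the worker (the database, store, and URL names are hypothetical):

```js
// Workers can use IndexedDB directly, so the worker ingests the full
// payload and the main thread later reads only the keys it needs.
const open = indexedDB.open('app-cache', 1);
open.onupgradeneeded = () => {
  open.result.createObjectStore('records', { keyPath: 'id' });
};
open.onsuccess = async () => {
  const res = await fetch('/big-payload.json');
  const records = await res.json();
  const store = open.result
    .transaction('records', 'readwrite')
    .objectStore('records');
  records.forEach((record) => store.put(record)); // assumes each record has an `id`
};
```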
This is off-topic, but at the top of the readme is this:
> The name is somewhat of a poor choice, but it was available on npm.
npm has supported scoped packages [0] for years now, and it's a fantastic solution to this problem, as well as solving many others (like typosquatting in many cases).
I know this package is already named, but I really urge people to use scoped packages more.
I don't know of any official docs or anything, but I'd imagine it's because it would be a pretty big breaking change. (Before Babel moved to scoped packages, it would have been jarring to see `babel-core` unscoped but `@babel/babel-plugin-thing` scoped because it was forced.) I also think there is still some roughness around "transferring" scoped packages: what happens when another user takes over a package I authored? Does it change to `@OtherUser/coolThing`, or does it stay under `@Klathmon/coolThing` forever? I wouldn't like that second one.
What I would like to see is all new unscoped packages also being installable/requirable by their scoped name by default. So if I create a package `coolThing` on npm entirely unscoped, it would be installable both by doing `npm i @Klathmon/coolThing` and by doing `npm i coolThing`.
That would let many of us use the benefits of scoped packages without the package authors having to do anything, and it would be a big step toward going completely scoped at some point in the future.
All new packages must be scoped. Existing packages can - and should - be aliased by the owner to a scoped package. Everyone's existing code continues to work, while all new code is safe.
Yes, but many of them started that way; they didn't have to go through the headache of transferring to that format. And even the ecosystems that did go through that trouble weren't anywhere near the scale that npm is.
We can talk all we want about how awful an idea it was for npm to start the way it did, but the reality is that unscoped packages are the "default" right now, and I'd love for npm to move away from them in a safe manner.
It does at least partially solve the problem of "typosquatting", where installing `anglar` could potentially get you a malicious version. It makes it more obvious when it's happening (oh, you set up the `anglar` scope and created a ton of typo'd packages with a few lines changed each? That's probably safe to ban...).
It also allows those orgs to group "sibling" or sub-packages under their main name. So I know that `@angular/cool-angular-plugin` is actually from angular.
I get the benefits of namespacing. I'm just not sure making it mandatory solves any issues. For example, what's stopping someone from publishing `@developit/greenlet` or `@greenlet/greenlet` without owning the respective org/username on GitHub?
Each of those organizations has a number of other projects alongside the main ones. This seems like a good outcome since it allows them to easily refactor things into separate repos or manage support tools rather than having inertial pressure making it easier to lump things into a single main project.
On GitHub, if I expect an open source thing to get some use, I usually put it in an 'organization' instead of my personal account, to allow it to later be transitioned to a team or another maintainer without disruption.
If one does the same on npm, it just transfers the namespace scarcity to 'scopes' themselves, with no real benefit.
Btw it's better to email us (hn@ycombinator.com) than to try to get our attention this way, since we don't come close to seeing all the comments, or even all the threads.
greenlet is a bit like "workerize-lite" in the sense that it allows you to move individual functions to their own thread.
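A rough side-by-side, based on each library's documented API (treat the exact signatures as approximate):

```js
import workerize from 'workerize';
import greenlet from 'greenlet';

// workerize: a whole module (passed as a string) moves into the worker
const worker = workerize(`export function add(a, b) { return a + b }`);
console.log(await worker.add(1, 2)); // 3

// greenlet: a single async function moves into the worker
const add = greenlet(async (a, b) => a + b);
console.log(await add(1, 2)); // 3
```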
Also relevant: a discussion with Jason yesterday about when it's worth moving functions to a web worker: https://twitter.com/mxstbr/status/957312201987063808
https://github.com/developit/workerize