Temporary fork enables Node.js to optionally use the Chakra JavaScript engine (github.com/microsoft)
170 points by chcokr on May 12, 2015 | 97 comments



"This temporary fork enables Node.js to optionally use the Chakra JavaScript engine on Windows 10, allowing Node.js to run on Windows on ARM." (the submission title has been updated after I posted this, was initially "MS releases a fork of Node that uses the Chakra JavaScript engine instead of V8")

Looks like they intend to merge back with node mainline... ?

EDIT: Found this: http://blogs.windows.com/buildingapps/2015/05/12/bringing-no...

They're doing this to be able to run Node.js apps on Windows 10 ARM (on which V8 supposedly doesn't run?)

"We will be submitting a pull request to Node.js after stabilizing this code, fixing key gaps and responding to early community feedback."

"Going forward, we plan to work closely with the Node Foundation, the Node.js Technical Committee(s), IO.js contributors and the community to discuss and participate in conversations around creating JavaScript engine agnostic hosting APIs for Node.js, which provide developers a choice of JavaScript engine that they would want to use in their Node.js workflow"

Looks like the pull request will consist mostly of exposing new hooks to integrate with Chakra / other JS engines and won't involve pulling any Chakra code into Node.js (which would be unlikely to be merged). Might lead to a SpiderMonkey version of Node.js at some point, too. Nice to see IO.js mentioned. Looks like a very positive initiative (assuming it doesn't complicate Node core too much)


> Might lead to a SpiderMonkey version of Node.js at some point, too.

I worked on that 4 years ago :) <http://zpao.com/posts/about-that-hybrid-v8monkey-engine/>. The Node community at the time wasn't a huge fan, though it's effectively the same thing that MS just did (build a minimal V8 API shim on top of another JS engine). I guess everybody is ok with a little fragmentation now. Our intention was also to try to get this upstreamed, however with low interest and other things to do, we didn't follow through.

I'm excited to see this, and especially to have the MS folks involved with the TC. I'd love to see an engine-agnostic API but realistically I don't think it'll happen, at least not anytime soon. Right now Node itself definitely relies pretty heavily on the V8 APIs. Those APIs can be abstracted for the most part (even if each engine is just shimming those parts of the V8 API) but the other problem is the longer tail of binary npm modules. Right now they have the full V8 API to work with. If they do a shim layer then it will come at a cost for every vendor except V8, plus the ongoing burden of maintaining that layer as V8's APIs change. If you go the engine-agnostic API route, then you will need coordination between engine vendors, which opens the door to a multitude of problems.


Also, that's potentially going to add a ton of work for people developing libraries. With a single engine behind it, coding effort can be focused on writing and profiling something to be fast on V8. If you get replaceable engines, developers now have to either: 1) ignore all but V8 (or SpiderMonkey, or Chakra, etc.), 2) create different versions of their libraries for different engines, or (3-ish) write multiple paths in their code depending on the target engine.

I'm not sure exactly where I sit on this one. In many regards I think I like it, allowing developers to use the right engine for the right job, for example. Each engine has its own strengths and it may be that V8 isn't the engine for you / your workload. It's just that this could hurt as much as it helps.


Using @zpao's example, most Python developers use CPython and either don't know about PyPy or have never used it. As such, I'd expect most of the long-tail of Python libraries to have been written and tested against CPython.

That hasn't prevented PyPy, IronPython, Jython, etc. from existing/thriving.


Having a decent shim to code against instead of having to deal with the internal gooeyness of different JS engines is actually very much preferable. By throwing a shim over the JS execution layer, they actually make the life of extension developers much easier.


> They're doing this to be able to run Node.js apps on Windows 10 ARM (on which V8 supposedly doesn't run?)

V8 definitely has an ARM runtime, so maybe this is a result of the restrictions on what's allowed to run on the platform? (e.g. iOS and Windows Phone don't allow JIT compilers except the ones provided by the platform itself)


I'd guess, at the very least, that it doesn't support the Win/ARM ABI.


The issue is really what has been eliminated from the API. This is one of the ways they prevent you from generating your own executable code. See, e.g. this (closed) V8 bug for adding Windows Phone support: https://code.google.com/p/v8/issues/detail?id=2427


Right, this is true for WP and WinRT; it's not clear that the same is true for the IoT Win10/ARM builds?


I wonder how difficult it will be to manage pull requests for merging against node.js & io.js simultaneously - also, creates an interesting political situation if one project accepts the PR but the other doesn't (I would guess io.js will be more eager to do a release against the new code)


The question is why should node/io.js take such a pull request. It adds support for an irrelevant platform and would add the burden of maintaining the 'wrapper' which will probably break on every v8 update.


If it adds an engine abstraction layer which would allow users to choose an engine (and somebody implements SpiderMonkey support there), sure why not.

Though this is a job that is already done by JXCore.


It would be cool if the JavaScript engine were interchangeable in NodeJS and IO.JS so you could pick Chakra, V8 or SpiderMonkey very easily. SpiderMonkey is faster than V8 these days on a lot of benchmarks.


For anyone who wants to run Spidermonkey with Node, JXcore is a Node fork that enables this.


I can attest to that.

I was experimenting a couple of days ago with heavy-duty, extreme DOM node crunching, and FF's SpiderMonkey blew Chrome's V8 out of the water with a 7-9x gain in performance, measured in time elapsed to complete the operations.

Chrome's V8 engine at this point is so overrated


How do you live in such blissful ignorance that a supposed order-of-magnitude difference in performance between two state-of-the-art browsers doesn't make you reconsider, even for a second, that you might not have written a working benchmark? :)

Sorry but unless you have deep knowledge of how both engines and browsers work (knowing how `appendChild` is actually implemented for starters), you simply cannot write a working benchmark. Even then, it's very hard and tedious.

If you don't have time to obtain such expertise, you could take a shortcut and compare realistic end-to-end benchmarks. E.g. if your game runs at 210-270 fps in firefox but only at 30 fps in chrome, then you could claim that "firefox blows chrome out of the water".

It's very easy (just look at 80%+ of jsperfs) to construct benchmarks that don't look completely broken to the untrained eye but actually are. The common theme is the benchmark missing many aspects of realistic code and being reduced to measuring irrelevant optimizing-compiler features. For example, the benchmark could end up only measuring how thorough the engine's dead code elimination pass is, even though what you wanted to benchmark was string concatenation performance.
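
To make the dead-code-elimination point concrete, here's a rough sketch (plain JS, not a measurement of any particular engine; treat the numbers as meaningless): the first loop never uses its result, so an optimizing compiler is free to delete the string work entirely, and you end up timing an empty loop.

    // Misleading: the concatenation result is never observed, so the engine may
    // eliminate it and the loop measures nothing useful.
    function misleadingConcat() {
      var start = Date.now();
      for (var i = 0; i < 1e7; i++) {
        var s = 'foo' + i; // dead store
      }
      return Date.now() - start;
    }

    // Slightly less misleading: the result feeds into an observable output,
    // so the work can't simply be thrown away.
    function lessMisleadingConcat() {
      var start = Date.now();
      var sink = 0;
      for (var i = 0; i < 1e7; i++) {
        sink += ('foo' + i).length;
      }
      console.log(sink); // keeps the loop body live
      return Date.now() - start;
    }

    console.log(misleadingConcat(), lessMisleadingConcat());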


> I was experimenting a couple of days ago with heavy-duty, extreme DOM node crunching, and FF's SpiderMonkey blew Chrome's V8 out of the water with a 7-9x gain in performance, measured in time elapsed to complete the operations.

This likely has nothing to do with the JS engines themselves and everything to do with the browser they were running in. To actually benchmark something like that you'd need to simulate the DOM with something like https://github.com/tmpvar/jsdom
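
For what it's worth, a minimal sketch of that approach (jsdom's API has changed a lot between versions, so adjust for whatever you have installed), which takes the browser's native DOM and layout engine out of the picture entirely:

    // Time DOM construction against jsdom's pure-JS DOM implementation,
    // isolating the JS engine from Blink/Gecko's C++ DOM and layout code.
    var JSDOM = require('jsdom').JSDOM;
    var document = new JSDOM('<!DOCTYPE html><body></body>').window.document;

    var start = Date.now();
    var container = document.createElement('div');
    for (var i = 0; i < 10000; i++) {
      var node = document.createElement('span');
      node.textContent = 'item ' + i;
      container.appendChild(node);
    }
    document.body.appendChild(container);
    console.log('appended 10000 nodes in', (Date.now() - start), 'ms');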


Are you suggesting that the remarkable disparity in performance was DOM specific?

Strange, because I used a very common method, appendChild(), and I was under the impression that both browsers had optimized their respective inner workings long ago, to the point that we should not notice such a divergence in performance.


Yes. DOM manipulation in all major browsers is implemented in C/C++. The JS engine is just a wrapper; any noticeable performance difference in DOM manipulation is almost certainly due to differences in the underlying layout engine and not in the JS engine.


Well, there's one way in which the JS engine affects it: how efficiently one can call into C++ from JS. Mozilla have done a lot of work to reduce the cost of that in SpiderMonkey.


Sure, though in the grand scheme of things that penalty is pretty small when compared to the DOM operation itself. E.g. doing an appendChild() on an attached element and causing a reflow.
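
A rough browser-side illustration of that difference (run it in a page; the numbers are only indicative): appending into the live tree and forcing layout each time is a very different workload from building the same nodes out-of-tree in a DocumentFragment and attaching them once.

    // Attached: every iteration touches the live tree and forces a layout read.
    function appendAttached(n) {
      var t0 = performance.now();
      for (var i = 0; i < n; i++) {
        var el = document.createElement('div');
        el.textContent = i;
        document.body.appendChild(el);
        el.offsetHeight; // force layout so the cost is actually paid here
      }
      return performance.now() - t0;
    }

    // Detached: build everything in a fragment, pay for one insertion at the end.
    function appendViaFragment(n) {
      var t0 = performance.now();
      var frag = document.createDocumentFragment();
      for (var i = 0; i < n; i++) {
        var el = document.createElement('div');
        el.textContent = i;
        frag.appendChild(el);
      }
      document.body.appendChild(frag);
      return performance.now() - t0;
    }

    console.log(appendAttached(2000), appendViaFragment(2000));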


It depends a lot on what you're doing: if you're hitting fast-paths (esp. if you're dealing with out-of-tree nodes) it's entirely possible to end up with the JS/C++ trampoline being a significant part of the bottleneck, for much the same reasons that Array.prototype.reduce can be in several implementations.


This reinforces my feeling that most of the issues with Firefox are in the GUI framework.


Given node.js supports native code modules I can't see why it'd matter, but SpiderMonkey has a special asm.js compilation unit, OdinMonkey, giving best-in-class asm.js speed.
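
For anyone wondering what OdinMonkey actually keys off: an asm.js module is ordinary JavaScript that opts in with a "use asm" pragma and uses type coercions (x|0 for int, +x for double) as static type annotations. A toy sketch (it runs as plain JS anywhere; whether this trivial example satisfies the full validator isn't the point):

    // Toy asm.js-style module: the pragma plus the |0 coercions are what let
    // an asm.js-aware engine compile it ahead of time instead of speculatively.
    function AsmAdder(stdlib, foreign, heap) {
      "use asm";
      function add(a, b) {
        a = a | 0;
        b = b | 0;
        return (a + b) | 0;
      }
      return { add: add };
    }

    var adder = AsmAdder();
    console.log(adder.add(2, 3)); // 5, with or without asm.js compilation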


And Chakra is currently faster than both on the Octane and JetStream benchmarks.


Interesting that Microsoft forked Node instead of io.js. The Microsoft repo says, "This branch is 16 commits ahead, 29 commits behind joyent:master".


I work for MS, right now we're working on win10. Some of the UI is written in html/js now, so I'm not surprised by this at all. I'm guessing we'll see some native node.js apps on windows in the future.


I think what the OP meant was that it is interesting that MS forked Node, and not the io.js project that's more advanced than Node.

IMO it's not too surprising - it's relatively simple to fast-forward to io.js from where it is now, whereas reverse-engineering it backwards to Node compatibility would be mayhem.


Can I browse my node_modules folder in Explorer yet? [1]

[1] https://github.com/joyent/node/issues/6960


Wow, what's up Microsoft?


Actually, I was thinking, what's up node? Each dependency keeps a private copy of its dependencies? How messed up is that? Or am I just reading that wrong?


That's a feature, not a bug. It lets A rely on version 0.9 of X while B relies on version 0.8.
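
A small runnable sketch of why the nesting works (hypothetical package names; assumes a Node recent enough that fs.mkdirSync accepts { recursive: true }): each copy of a package resolves require() against its own nested node_modules first.

    var fs = require('fs');
    var path = require('path');
    var base = path.join(__dirname, 'node_modules');

    function writePkg(dir, contents) {
      fs.mkdirSync(dir, { recursive: true });
      fs.writeFileSync(path.join(dir, 'index.js'), contents);
    }

    // Simulate what npm lays down: A and B each carry their own copy of X.
    writePkg(path.join(base, 'A', 'node_modules', 'X'), "module.exports = '0.9.0';");
    writePkg(path.join(base, 'B', 'node_modules', 'X'), "module.exports = '0.8.0';");
    fs.writeFileSync(path.join(base, 'A', 'index.js'), "module.exports = require('X');");
    fs.writeFileSync(path.join(base, 'B', 'index.js'), "module.exports = require('X');");

    console.log(require('A')); // '0.9.0' -- A sees its own nested X
    console.log(require('B')); // '0.8.0' -- B sees a different one, no conflict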


I don't call that a feature. There are other ways of doing version pinning without junking up my filesystem.


Regardless of who is right or wrong, at the end of the day it's still broken for many people. If Microsoft and Joyent/the node & npm community care about people using node & npm on Windows then the issue needs to be resolved, period.



I don't see how this is npm's problem - the language itself doesn't let you catch this problem - and the pattern itself is dangerous. I'd have to agree with IsaacSchlueter here.

If I have a library (libA) that uses fooV1 and another (libB) that uses fooV2, there is no reason I should expect those two dependencies to interop with each other (except where explicitly stated by the developer).

The proposed solution (that there should be one set of dependencies) is an even worse problem, one that currently afflicts the Golang community, which has decided to solve it the same way npm did - vendoring.

Now with Golang, if you only have one set of dependencies and one of your libraries depends on an older version, your program just won't compile (unless you update the library to use fooV2), and I'm not sure how that's useful to anyone. Most people will just tell you that you should have versioned that dependency.


In the vast majority of cases, it's not. But in the cases where people want to invent new globally useful abstractions, like bignum, promises, cleaner streams etc. - i.e. to reshape the platform - they can't do that. For example, the overhead of adding bluebird as a dependency to every module quickly adds up, and bluebird had to fight with many issues stemming from multiple versions of it communicating with each other.

IMO while this has contributed to the growth of node the ecosystem, it also stifled the growth of node the platform.


No, it's a "bug". It is not required for loading specific versions -- those can be kept in a flat structure just as well. No need for the idiotic nesting.


IMO a flat structure would just be a confused mess. There would be hundreds of packages in that directory.


I'd rather have hundreds of packages in a well-sorted flat directory than the same number of packages in an "Alice in Wonderland"-style rabbit hole.


Having node_modules/X@0.9/ and node_modules/X@0.8/ would allow it as well. Yet each dependency would not keep a private copy of its own dependencies.


Unless you create file paths that are so long that they break the default tools of the OS. That’s the point where you went too far.


Or the OS went too short, so to say. There are two problems here.

1) Windows' limit of 260 characters per path name is incredibly small for a modern OS and that was true many years ago as well.

2) Npm's approach relied on well-behaved OSes and file systems allowing long path names. It's not future-proof because you don't know which OS and file system you'll need to run on, maybe some limited embedded device.

Ruby's rvm solved that problem with a single directory level and a gem@version naming scheme.


Exactly. Ruby’s solution will run even on FAT12 (which has only single-level directories), while npm isn’t really compatible.

In general, node does this quite often – their memory model also assumes they have infinite memory.


npm 3 will adjust dependencies to flatten these things out as much as possible, only creating nested dependencies when there's a version conflict between modules.


With every new version of Windows, Explorer gets worse and worse. I personally switched to Directory Opus long time ago.


This isn't a limitation of Explorer, but actually is a very well documented part of the Windows API. MAX_PATH has always been 260 characters.

https://msdn.microsoft.com/en-us/library/aa365247(VS.85).asp...


How long will Microsoft wait until they fix MAX_PATH and the various other Win32 limitations that are usually legacy porting helpers from Win16? There would have been a good opportunity with the introduction of the Win64 API, but Microsoft forgot about it and was apparently busy with something else.


I doubt it's too high up on the list of priorities. It would require really careful work to do in a backwards-compatible way (there are almost certainly a tonne of apps that expect <=260 character filenames).

I guess the main thing is that it's one of those 'who cares' problems. The only time I've ever seen this limitation being complained about, it's by people who've had problems with npm.

That directory structure is undoubtedly horrible and is not mirrored by any other piece of software that I've seen.


Meanwhile it's also a "who cares?" problem for the npm people. They don't use this OS, and don't expect ever to do so.

I think the node_modules directory structure is a novel, though straightforward, solution to the problem of interdependent module versioning. It's very Unix; it reminds me a bit of GNU stow. It makes perfect sense that it will be tweaked in the new npm to be less redundant, but it only really makes sense to tweak systems that already work perfectly. (Otherwise they should be fixed first, then tweaked.) Certainly it's better than having a separate LD_LIBRARY_PATH setting for every command invocation! (even that doesn't fix everything...)


It doesn't even seem like an obvious solution to the problem. Take Maven for instance: a global dependency repository under which the dependencies are stored in the form <groupId>/<artifactId>/<version>. If X depends on Y.1 and Z depends on Y.2, so what? You have all the dependencies stored on your filesystem in a relatively flat structure.


If I had coded in Java on Windows for years, I wouldn't trust my sense of what's "obvious". I'm not too impressed by a "global" repository either. Python struggled against that stupid architecture for years before they got virtualenv in good working order. Node just took a shortcut to the future.

"All direct dependencies are in the node_modules directory, full-stop" is a pretty simple rule. Really, that's the only rule, and that's all of it. You don't need to worry about second-order dependencies, because those are direct dependencies of some other module, which means... they are in that module's node_modules directory.


I had these problems with a Java codebase in SVN (back when your user profile was still under C:\Dokumente und Einstellungen\Username\...).

Apart from that there is a backwards-compatible way of using longer paths, which is prefixing with \\?\. Since MAX_PATH is a hard-coded constant there can only be an opt-in way of dealing with the problem. Sadly many application or framework developers these days still don't opt in.

It also creates the problem that if Application A can create such paths and Application B cannot read them, you'll be annoyed too. And Application A might just disable long path support to mitigate the problem, leaving the whole state as it is.


> The only time I've ever seen this limitation being complained about, it's by people who've had problems with npm.

Recently that is where you'll have heard about it a lot, but people have been hitting the problem for years. I've hit it in a couple of distinct contexts in my time.

You hardly hear about it generally because people moan a bit then work around it so the problem goes away until next time, and sometimes people don't even moan about it because it is such an old problem they just think "oh, that again, better make my directory/file names and/or paths a bit shorter" and get on with their day.

The difference between this and the npm situation of recent times (causing the greater noise around the issue) is twofold:

* linux/similar people hitting the problem in the wild for the first time (they've probably heard about it, but never had to deal with it) because their code is getting used on Windows, which was much more of a rare occurrence for them in the past, and being incredulous that such a problem exists in this decade

* Windows people wanting to use node and similar cool new tech on their preferred and/or mandated platform but hitting issues due to this problem in their environment (and in some cases being incredulous that someone wouldn't consider the implications of the limitation in their design)


> I guess the main thing is that it's one of those 'who cares' problems. The only time I've ever seen this limitation being complained about, it's by people who've had problems with npm.

If you work on the sysadmin side of things, this comes up almost every time with directories on network shares. Explorer, cmd and PowerShell can't handle them and you need to resort to using robocopy or 3rd party tools like FastCopy. To delete stuff you often have to resort to hacks like robocopy mirroring an empty directory into the path before deleting it.


It is a limitation of Explorer and CMD.EXE - Windows has had Unicode APIs to access paths up to 32767 characters, but for some reason they have Explorer and CMD.EXE use the older ANSI API which does not support it.


> With every new version of Windows, Explorer gets worse and worse.

This criticism is a bit misplaced here, since the whole reason for this limitation is due to backward compatibility with older versions of Windows (and software written for older versions).

It's not like this is a new change in Windows; it's been there for ages.


JXcore is another fork of Node that makes the VM pluggable, supporting both V8 and SpiderMonkey. I wonder how similar Microsoft's and JXcore's VM abstraction layers are and whether Node upstream would accept them. Drawing a hard line between the Node native code and the VM would make binary addon compatibility more stable (and lessen the need for NaN, the "Native Abstractions for Node").


The problem with the way JXcore did the SpiderMonkey port is their extensive use of C++ macros - not unlike NAN for Node.js. This makes the code hard to debug and maintain. The Microsoft Chakra Node port is more elegant because they've mimicked the V8 C++ API making it much more likely that it will be merged into Node.js and IO.js. In time I suspect Mozilla and other javascript engines will make V8-compatible API shims similar to what Microsoft did:

https://github.com/Microsoft/node/tree/ch0.12.2/deps/chakras...


This was a long time coming. I remember some MS folks talking to Ryan Dahl about doing this back at nodeconf 2011.


I wonder if MS is going to end up open sourcing Edge? Between this and the fact that Visual Studio Code uses Chromium, it really seems like where MS should head, but who knows.


I'd expect them to open source various components of it before they open source the whole shebang. I.e., I'd expect to see them put Chakra out, maybe the browser chrome, the parsers, etc. before they put out all of edgehtml.dll. I could be wrong.

Edge (and particularly IE) are fairly heavily tied to the OS in a bunch of places. IE, for example, can do weird FTP and Windows Explorer stuff. The infamous "Internet Settings" dialog and the way IE deals with stuff like proxy servers is only sort-of part of IE. IE's network stack is largely dependent on the bits and pieces available in the OS below (consider that IE11 can only use SPDY on Windows 8). I wouldn't be surprised if open sourcing the browser wholesale would start unraveling a lot of things that MS doesn't intend to be public.


I wonder if making it standalone will be a part of the whole 'ditching IE legacy' process?


Is there a benchmark comparison available anywhere?


What about the license of Node.js/Chromium? Isn't linking a closed source library (Chakra) problematic?

You know it includes multiple code parts under various licenses, Wikipedia says: BSD license, MIT License, LGPL, MS-PL and MPL/GPL/LGPL tri-licensed ( http://en.wikipedia.org/wiki/Chromium_(web_browser) )

There is a reason why major open source projects like Linux, etc. choose licenses like GNU GPL v2+. http://en.wikipedia.org/wiki/Embrace,_extend_and_extinguish and http://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt


It's already linked against tons of closed source libraries when built on windows, why would one more make a difference?


Based on that license list, I don't see how any of those would present a problem. Do you?


>Isn't linking a closed source library (Chakra) problematic?

Yeah, because the tech world didn't have enough problems with projects being immature, unreliable, stale 30+ year designs, abandoned, incompatible, not provided by a specific distribution, conflicting, patented and 100 other issues to consider.

It just had to also add 200 legal distinctions behind what you can and you cannot do, and how you can link stuff and under what circumstances.


If you're asking if software licenses are important, the answer is yes, they're very important.

To the GP, though, I don't immediately see how linking to Chakra in this way would be a license issue. The more important thing is the license information for Node, though, not Chromium: https://github.com/joyent/node/blob/master/LICENSE (some overlap but quite a bit that doesn't)


>If you're asking if software licenses are important, the answer is yes, they're very important.

I'm not asking about their importance, I'm complaining about their existence (and the need for them).


Will this work on Raspberry Pi?


Yes, this works on Raspberry Pi 2 and MinnowBoard MAX, running Windows 10 IOT Core preview - http://ms-iot.github.io/content/Downloads.htm



This tweet from May 1 might be related: https://twitter.com/conoro/status/594196246727774208


I'm betting raspberry pi 2 only (i.e. armv7 only) since that's all Windows IoT runs on. Doubt they'd put the effort into armv6 for all of the existing raspberry pi 1 devices.


I wonder if they have made it so that you actually get millisecond (1/1000 s) resolution instead of roughly 1/100 s in setTimeout and setInterval. It was one of the things that annoyed me the most running Node.JS on Windows ...
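
A quick hedged sketch for checking that yourself: ask setTimeout for 1 ms repeatedly and look at the gaps actually delivered. On a system with a coarse timer (the classic default Windows tick is about 15.6 ms) the median gap comes out far above 1 ms.

    var last = Date.now();
    var gaps = [];

    function tick() {
      var now = Date.now();
      gaps.push(now - last);
      last = now;
      if (gaps.length < 100) {
        setTimeout(tick, 1); // ask for 1 ms again
      } else {
        gaps.sort(function (a, b) { return a - b; });
        console.log('median observed gap:', gaps[50], 'ms');
      }
    }

    setTimeout(tick, 1);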


I wasn't aware that Chakra could be used standalone in this manner. I had assumed that it was tightly bundled to IE. Is this new?



[flagged]


Ugh. You realize there are like 3 people who are still working there who were around when this was really a thing? Yes, Satya was hired in 1992, but he didn't grow up in that culture nor did any of the other current leadership. Most people working there now were in grade school when that was actually a strategy.


Given all that Microsoft has been doing lately I'd really be surprised if this is all an elaborate ruse to squash node. Having been around when the actual "embrace, extend, extinguish" was running at full throttle, though, I can tell you what it does look like:

"Full access to UWP APIs from Node.js applications"

When you use platform-exclusive features then you not only lose a major strength of node, but you put the owner of that platform in a position to start pushing developers around.


It's a choice for the developers. They can use Node standalone or inside a Universal Windows application with access to the underlying platform using JavaScript.


That did wonders for Java.


When was the last time they went all the way?



Looks like it's been close to a decade. The CEO responsible for that whole ordeal is gone. When do people decide Microsoft has behaved long enough to be given a second chance?


I think it's pretty clear that this will take a long time. Reputation isn't something a company can regain just by changing CEOs or releasing a few things as open source.

After all, they still continue their aggressive patent usage, and there is still uncertainty around audio/video codecs for web standards, mainly because of Microsoft. Pretty sure others can remember other areas where their policies haven't changed a bit.


> The CEO responsible for that whole ordeal is gone. When do people decide Microsoft has behaved long enough to be given a second chance?

If the company clearly changes its course, right? Though that doesn't seem to be the case, despite two CEO changes since Bill Gates. Recent news shows more or less what they are still up to.

Yesterday's plans to stop producing Windows and "Win10 as a service": https://news.ycombinator.com/item?id=9521151

TypeScript influences the ES6/JavaScript Next feature & syntax development - led by Mr. Hejlsberg (Turbo Pascal), who was responsible for J++, Windows Foundation Classes and C#. In 2012 Hejlsberg announced his new project TypeScript, a superset of JavaScript. (https://en.wikipedia.org/wiki/Anders_Hejlsberg )

Read Bill Gates (former internal) 1994 memo: "Windows: The Next Killer Application on the Internet": http://www.microsoft.com/about/companyinformation/timeline/t...

"The memo starts with a background on the Internet in general, and then proposes a strategy on how to turn Windows into the next "killer app" for the Internet."

The Internet plan "Microsoft Network": https://youtu.be/kGYcNcFhctc?t=17m16s

And the Office format thing in 2007, the Silverlight thing, the Xbox One launch thing. It was just not as public, as they were not that successful.


All you did is make a list. Can you expand this into something coherent?

Office uses a documented XML format now, Silverlight is dead, and I don't see how Xbox One is in any way relevant.

Not sure how making Windows free is "embrace, extend, extinguish." Most operating systems are free.

People make new languages based on old ones all the time, so you're going to have to expand on the problem with TypeScript.

I could go on, but I'm not a teacher running through an essay with a red pen. You won't convince anyone with a comment-free list of events.


> Office uses a documented XML format now

It's still formats that can't be fully implemented without breaking compatibility with MS products.

PS: Personally I like what MS is doing now, and I accept that this could be a serious change, but it would be dumb to trust them until at least 3-5 years have passed.


I will comment that, had you actually posted a link to the HN comment thread on your claim about TS influencing ES6 development, you'd have noticed there are several comments as to how, no, they are not. In fact, they've diverged quite a bit from ES6, and now have work to do to realign TS with ES6's newer drafts.

I also really don't understand how a 1994 memo or the Microsoft Network plan are relevant. Those are both pre-lawsuit, how are they relevant today?


Might as well post some Friends quotes and some Pearl Jam videos.

I mean, as long as we are discussing 90s stuff.


I know Microsoft apparently has changed, but just to play devil's advocate:

[x] embrace

[x] extend

[ ] extinguish


They are very open that this is not their plan here. They are _temporarily_ forking Node.js to add support for their JS engine. They want to extend Node to abstract away the JS engine so that it doesn't rely on V8 or Chakra or SpiderMonkey but can sit on any one of them.

It's actually exactly the opposite. Their API abstraction work will only increase competition, especially since they aren't trying to run a competing fork. Despite their history, those in favor of a more open Node.js platform should commend this.


To play the devil's advocate of the devil's advocate, how exactly would an [x] extinguish work in an open source world, especially for something that is under a liberal licence (MIT vs GPL)? Isn't that the whole point of open source? That even if something gets abandoned or ignored, as long as there is still an active interest in it, it can still be used or improved upon?


You're probably aware I'm not considering it to be likely that this is Old Microsoft in action but to humour the thought experiment: I don't think they could succeed either. IE is still only barely recovering from Microsoft's history and Windows has largely been defeated by OSX both in the consumer and developer space. We're unlikely to see Microsoft Space Nazis descend upon us from a hidden moon base any time soon.

That said, there are plenty of examples of the extinguish phase not working out or resulting in less of a bang and more of a whimper. It's always been more of an infected blanket than nuclear warheads.


Extinguish Javascript? Good luck with that.


I don't see how anyone can rationally trust Microsoft here given their track record.



