An all-optical general-purpose CPU and optical computer architecture (arxiv.org)
197 points by PaulHoule 10 months ago | 103 comments



"we will be implementing a 2-bit variant of SUBLEQ for demonstration purposes"

What they actually built was a 2-bit-wide machine with one instruction. No, it can't run Doom, despite how often the paper mentions Doom.
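For readers unfamiliar with SUBLEQ ("subtract and branch if less than or equal to zero"): it is a one-instruction machine. A minimal Python sketch of an interpreter, using an illustrative halt convention and unbounded integers rather than the paper's 2-bit hardware:

    def subleq(mem, pc=0):
        # Each instruction is three addresses (a, b, c):
        # mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
        # Here a negative c halts (a common convention, not necessarily the paper's).
        while 0 <= pc < len(mem):
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            if c < 0:
                break
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
        return mem

    # Toy program: clear mem[9] by subtracting it from itself, then halt.
    print(subleq([9, 9, 3, 0, 0, -1, 0, 0, 0, 42]))   # last cell becomes 0

Everything else (ADD, MOV, branches) is built out of chains of this one instruction, which is why the gate count can be tiny while programs get long.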

There's a lot of hand-waving about memory, around page 10. They seem to have used a delay line, which is very slow; you have to wait for the bits you want to come around. That's been a classic problem with photonics. You can build gates, which is nice for switching packets, but how do you store data?
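A toy model of why that hurts (illustrative numbers, not from the paper): bits circulate past a single tap in a fixed order, so a random read waits on average half a loop revolution.

    # Recirculating delay-line memory, modeled as a rotating sequence of bit slots.
    loop_bits = 1024        # bits stored in the loop (assumed)
    bit_time_ps = 10        # time per bit slot, picoseconds (assumed)

    def read_latency_ps(current_slot, wanted_slot):
        # Wait until the wanted bit comes around to the read tap.
        return ((wanted_slot - current_slot) % loop_bits) * bit_time_ps

    avg = sum(read_latency_ps(0, i) for i in range(loop_bits)) / loop_bits
    print(f"worst case {loop_bits * bit_time_ps / 1000:.1f} ns, average {avg / 1000:.2f} ns")

So access time scales with loop length, which is exactly the problem drum and mercury delay-line machines had in the 1950s.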

Much of the architectural discussion is about what you can do if memory is mostly ROM. They talk about fast-read, really slow write memory. Here's an article about building something like that.[2] It's a clunky technology. Writing involves on-chip heaters and switching memory cells back and forth from amorphous to crystalline. There's a long history of forgotten devices like that - photochromic memory, UV-erasable EPROMs, rewritable DVDs, Ovonics, etc. All were superseded by something with better read-write properties.

The underlying device technology is not theirs. It's from the Cornerstone project.[1]

[1] https://www.mdpi.com/2076-3417/10/22/8201

[2] https://www.nature.com/articles/s41377-023-01213-3


> You can build gates, which is nice for switching packets, but how do you store data?

Logic gates can be the fundamental building block. So if you can build logic gates, then you can use those to build flip flops (the basis of static RAM) which store data. Might not be the most efficient way (depending on your requirements) but it can be done.

https://en.wikipedia.org/wiki/Flip-flop_(electronics)#D_flip...


Nit pick: SRAM is not built out of flip flops. It's carefully sized inverters and a whole lot of analog magic (writing a bit involves overpowering the inverters of the bit with larger drivers). You may be thinking of latch arrays.


The overpowering of the inverters is just an area-efficient trick to make gates (the so-called wired-OR or wired-AND).

The SRAM memory cells are simpler than flip-flops; they are just S-R latches (i.e. equivalent to half of a master-slave flip-flop) made from two gates (either NAND or NOR).

In any technology where static gates are possible, SRAM memories are also possible.

However, there are technologies where only dynamic gates are possible, i.e. gates that provide an output that is valid only during a clock pulse and which cannot remain valid indefinitely. Only in such technologies can you not make SRAMs, but you can still make dynamic memories, which must consume energy all the time to refresh their content.

All the "analog magic" that you mention serves only to make an SRAM array much denser than an implementation with the standard gates of a technology. An implementation with standard gates is always possible, and it may be chosen for certain register files where high speed matters more than the occupied area.


The main point is that you CAN build storage out of gates.


A 68k CPU has 68,000 transistors. If they have managed to produce all the building blocks of a CPU on a common optical process, why not build that?

Obviously this is only an easy option if the production is automated via lithography or something similar. If the process involves tweezering parts into place by hand, then a 2 bit SUBLEQ CPU is what you're gonna get...


Imagine this in a hypothetical science-fiction setting, where instead of an optical delay loop it's a fold in space or some other mechanism. Photonics and some sort of 'subspace' or toggled shift in reality? Maybe.

I agree though, I've never seen anything explain how to make this work well as a general computer with contemporary tech.


Ok I’m sure this is stupid but could memory be a glow in the dark chemical?


There are photoswitchable molecules and fluorophores.

https://en.m.wikipedia.org/wiki/Photoswitch


> They seem to have used a delay line, which is very slow; you have to wait for the bits you want to come around.

Macroscopic delay lines sure, but microscopic ones presumably can be on the order of a wavelength, if that's all that's needed. Not much time needed to come around in that case.


What about things like AI? Surely if gates are possible, that means you could encode an entire model in photonic gates. Surely that's worth it.


Photonic in-memory AI inference would be a holy grail imo.


Photonic and quantum … and humanity is toast! It was a great run!


I don't know anything about optical computing, but a 2-bit-wide one-instruction machine can 100% run Doom. If it is Turing complete it can run anything, and SUBLEQ is Turing complete.

or what are you referring to?


All-optical computing surfaces again!

Warning.

There is NO opportunity for large scale integration, the MOST IMPORTANT ASPECT in computing. This is because the de Broglie wavelength of the information carriers, typically 1.5um, is so HUGE.


There are some factors to consider here. For visible light, yes, the smallest feature probably won't be smaller than several hundred nanometers; however, optical computing comes with several major advantages over traditional electrical circuits.

The first is that light beams can cross paths without interfering with each other, allowing for a level of parallelism and density of signal paths without the concern of crosstalk/interference or shorting. Additionally, the information density of an optical signal is vastly higher than an electrical signal, and multiple optical signals can share the same pathways simultaneously. Also, energy usage is greatly reduced, so the constraints due to heat waste are much less.

Having said all that, the idea of optical circuits at VLSI scale is still a very foreign and exotic concept for us, so it's hard to say how far we can take it if we invest at the level we have for electrical ICs. It's naive, though, to dismiss it as infeasible based on an oversimplification of feature-size limitations.


Unless computation can be accomplished on differing overlapping frequencies, WDM devices, which resemble train switchyards, are huge, on the order of mm. So optical computing would have to take place at multi-THz switching rates using very short pulses.


As I understand it, microring resonators can be very small, µm rather than mm.


µm is huge.


Since you can carry 100+ frequencies around 1500 nm, the feature-size per data stream is more like 15 nm/stream, which is closer to electronics, and the heat dissipation of optical waveguides may be much lower than with equivalent electronics. Photonics also have the advantage of better signal propagation over longer distances, for efficient interconnection of multiple devices.

So there is a likely opportunity for large-scale integrated photonics, as long as you have enough parallelism.
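Rough arithmetic behind that figure (all numbers are illustrative assumptions):

    # Effective "feature size per data stream" for a WDM waveguide.
    waveguide_pitch_nm = 1500   # roughly wavelength-scale waveguide pitch (assumed)
    channels = 100              # WDM channels sharing the one waveguide (assumed)
    print(f"{waveguide_pitch_nm / channels:.0f} nm of cross-section per stream")  # ~15 nm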


Referring to computing, not data transmission. Perhaps some day, using quantum computing, all those differing 'portly' photons, superimposed, could yield advances in density. But mixing different wavelength streams on-chip involves WDM devices, which are huge.


Optics/photonics can potentially perform analog as well as digital computation. One trendy thing at the moment is accelerators for neural networks.

Some potential benefits of optics include high data rate, parallel processing of multiple streams, transmission over longer distances, and lower heat dissipation.


Good point.


> There is NO opportunity for large scale integration

What about vertically?

CMOS logic is still "mostly planar". With sufficiently low heat dissipation, you could make a cube and easily overcome the planar density problem.

The main challenge seems like it would be lithography cost for each of the many layers, but if the minimum feature size is 1.5um, there might be a clever way to make this work cheaply (DLP projection + gradual extrusion?)


Hundreds of millions of dollars have been raised from naive investors by ignoring this fact. Often, board-member and founder physics PhDs aid in the deception by omission.


Why can’t we reduce the wavelength?


Shorter-wavelength light really doesn't like existing. It takes more energy to produce, there are way fewer possible materials to make mirrors out of, etc. Just look at how much trouble the industry had with EUV lithography.


Even UV wavelengths aren't terribly small, and the shorter the wavelength, the more energy it has and the more likely it is to destroy whatever material your optical CPU is made of.


Sounds like an engineering problem, not a fundamental one.


Not sure what gives you that idea. It seems unlikely that there are materials that can withstand billions of x-ray pulses per second and continue to function without being altered. They might exist, but the higher the energy needed to get to low wavelengths for fast and information-dense computing, the more implausible it gets that a suitable material is physically possible.


I am shocked, shocked that quantum computing researchers might have figured out this trick too.


Not sure what this means. De Broglie waves are defined for matter (mass is required). While photons have relativistic mass, this isn't the same thing.


The de Broglie wavelength of the photon IS its wavelength. That's why it can't squeeze into nm features and optical waveguides are still not used on-chip, after 35 years of effort.


"Information carrier" means the actual medium the light is travelling through, doesn't it? Which has to be matter of some sort.


Last time I checked the sun transfers its light through the vacuum of space to us.


Sorry, I must've missed that these optical CPUs contain vacuums of space for the light to travel through.


Well, there is no matter in a strict vacuum obviously, so they are hard to see.


Indeed, as orlp mentioned, light is self-propagating and does not require a medium. This is broadly true for all EM waves.


But it needs SPACE, on the order of a few um minimum, that cannot be occupied by other devices.


We're talking about a CPU, not light traveling in a straight line forever.


Why isn't the EM field the medium? Or even spacetime?


You are confusing geometry, and the excitation traveling through that geometry.


If I was designing one of these things my goal would not be to replace present day computers - which at this point is nearly impossible given the millions of man hours spent optimizing them - but to carve a niche where you outperform them in specific tasks. I have the vague impression that should be possible.


Plasmonics may solve this problem. The interaction of light at an interface can lead to what essentially amounts to photon confinement. This allows for what's called near field optics which overcomes the limitations of wavelength and unlocks nanometer scale optoelectronics. For examples, see the solar sail for the "starshot" project.


That's why most optical computers lean into quantum computing.


Exponentially more powerful.

But unfortunately, for small N, like the N = 2 bits here, the additional complexity of pure optical + quantum computing just doesn’t pay off!


Right. Modern computers don't know what their purpose is; that makes it such a mind-boggling challenge. Nothing is ever good enough. If you define the purpose, it can be better and much cheaper.


Please elaborate?


Current electron-based computers are tens of nanometers per transistor. The optical equivalent of a transistor cannot be smaller than about 1 µm. An optical CPU equivalent to your smartphone's would be the size of several football fields.
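The "football fields" figure depends entirely on the assumed pitch per optical device; a back-of-envelope comparison (all numbers are rough assumptions):

    # Die area for a smartphone-class chip (~1e10 transistors) at different device pitches.
    transistors = 10_000_000_000
    for pitch_um, label in [(0.05, "modern CMOS"),
                            (1.0, "1 um optical gate, no routing"),
                            (1000.0, "mm-scale gate incl. waveguide routing")]:
        area_m2 = transistors * (pitch_um * 1e-6) ** 2
        print(f"{label}: {area_m2:.1e} m^2")
    # ~2.5e-5 m^2, ~1e-2 m^2 (a ~10 cm square), ~1e4 m^2 (a couple of football fields)

So the football-field outcome assumes each optical "transistor" plus its routing needs roughly mm-scale area, in line with the mm-sized WDM devices mentioned elsewhere in this thread.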


Asking from a position of total ignorance. The energy savings mean you can increase clock speeds, right? Assuming a big enough jump, won't that relieve a CPU of the need for most specialised instructions and potentially also that many cores? In that case, wouldn't it be acceptable that transistors grow (back) in size?


Energy savings on what basis?

If your gate gets 50x50x50 times bigger, you need some pretty extreme savings per area/volume of circuit if you want to reduce the per-gate usage. Can they save that much?


I would think yes. Which would be something. Huge rooms of enormous optical computers running lightning fast on low power would have a kind of retro-future feel.

Light would reduce the time cost of distance, and increase the density of connections (optical signals can pass through each other) so this could actually work.


> Optical equivalent of transistor cannot be smaller than 1um

For classical optics. Superlens optics exist, which use metamaterials and a monochromatic light source, and can "see" features of size much smaller than the wavelength.


While this is true, doesn't it ignore the difference in clock rate capacity? What if the photonic CPU can run at 10,000x the clock rate without the extreme heat build-up that would melt a smartphone?


Isn't part of the point that you don't need as many transistor equivalents because you can run them thousands of times faster?


One potential benefit of optics is terahertz frequencies.


Does optical computing need to use visible light?


Claims like this require peer review, not to mention this is a private company not a university lab.

I’m worried over this trend of private companies putting press releases into LaTeX templates.


I think there are two ways to look at it, both of which are true: 1) scientific literature is being polluted with non-peer-reviewed PRs which makes it harder to figure out what is actually well validated, and 2) press releases are being nudged into being a hell of a lot more technically substantive and rich in relevant citations. The first one isn’t great, but as consolation prizes go, (2) isn’t bad.


The peer review community has torched its reputation over the last decade, so it should surprise precisely nobody paying attention that profit-motivated publishers are crawling over what remains of that barrier.


Outside of academia I don't think anyone realizes just how broken the system is.

Citation extortion rings are part of every journal. I had a reviewer from Nature give feedback that I should cite her co-authors' work on a topic that had nothing to do with my paper. It got rejected because I wouldn't. It went onto arXiv and has been cited nearly a hundred times now. To add insult to injury, Nature News asked to interview me about my work.

Some more info on the subject, and a vast underestimation of how prevalent it is: https://www.science.org/content/blog-post/cite-my-papers-els...

At this point, if you can figure out how to make a PDF paper using LaTeX, I consider your work to be on par with anything in a journal.


"Citation Extortion Rings" - I can relate to that:

The head of the jury for my thesis defense had no shame in openly complaining in the public defense session:

"I am THE authority in the field [never had heard of the guy before], why the hell you didn't come to me!?"

The project had already won an international award, but then the thesis about it (by the same single author) received a bad grade.


I'm a layman surrounded by laypeople and we regularly make fun of peer review. It's far from inside baseball, and verging on reality TV for a small but growing segment of the shitposting nihilistic demo.


I disagree, people who are not expert consumers of information, but do think “arXiv is science stuff” are easily misled, and I treat citations from private companies without strong academic pedigree as worthless at best, harmful at worst.

It’s pretty easy to con investors if you have the same “look” as a real lab.


> I disagree, people who are not expert consumers of information, but do think “arXiv is science stuff” are easily misled

I think you are talking about an extremely small segment of the population, so I don't think we're talking about a very large social impact. I'm also unconvinced that that segment doesn't generally take ordinary tech press releases at face value anyway.

> I treat citations from private companies without strong academic pedigree

OK, but the first two authors on this have doctorates in ML and applied photonics respectively. They don't have peer review on this paper, but I don't think you can say they're lacking in academic pedigree.

> It’s pretty easy to con investors if you have the same “look” as a real lab.

I don't know. My feeling is that the "conning investors who are terrible at due diligence" game is largely unavoidable and mostly a zero-sum competition between con artists. So while it's obviously bad, I'm not convinced that the specifics matter all that much. Fools and their money, and all that.


Fair points all, but the fact that this made it to the front page of HN is itself problematic vis-a-vis the points raised.


Are you sure? Are you expert enough to read those papers? How do you achieve large scale integration in waveguides transparent to available laser sources? How does 250nm compare to the integration in your cell phone’s cpu?

And we are decades away from modulatable miniaturized 250nm laser sources. It is typically 1.5um with today’s devices.


> Are you expert enough to read those papers?

Not in this case, no. But in cases where I do have more knowledge, the additional detail makes it much easier to tell if there's anything of substance there, compared to traditional press releases which just make superficial marketing claims with minimal technical detail.

And if this were something more relevant to me, but where I didn't have the expertise necessary to look at it, I could reach out to someone with the expertise needed to take a look. The point here is that it's very difficult to talk at great length and in great detail about BS without making it apparent to experts that you're talking BS, whereas with a more traditional PR, the best you can often say is "well, if this is anything, these are very big claims."


> I’m worried over this trend of private companies putting press releases into LaTeX templates.

People need to realize once and for all that templates no longer represent quality or truthfulness, if they ever did. Maybe that lesson has to hurt a bit.


People have forgotten the whole point of “preprint” is you’re still supposed to, ya know, print!


Most people don't have printers anymore, they'd rather read it online /s


I wonder if there’s room for a section on Arxiv that is exclusively for papers that are on a peer-review track.

Or maybe “peer or open review,” or something like that.


Peer review for commercial press releases is an interesting idea!


Now that is a job for GPT if I ever heard one. Garbage in garbage out.


The arguments from the abstract of this paper have been refuted by Attojoule Optoelectronics for Low-Energy Information Processing and Communications: a Tutorial Review [1] and several other papers.

[1] https://arxiv.org/abs/1609.05510


Of course the arguments of [15] in the paper are the main refutation. Paraphrasing: Optical transistors will need to match all these criteria before they can compete with bulk CMOS. They don't and physics predicts they won't anytime soon. We will replace wires with optics though [1].

This discussion started in 1959 when Feynman pointed out that we will eventually create things at atomic sizes with elementary particles [2].

[2] There is Plenty of Room at the Bottom -Richard P. Feynman (Dated: Dec. 1959).

https://cdn1.richplanet.net/pdf/0099.pdf

[15] D. A. B. Miller, “Are optical transistors the logical next step?” Nature Photonics, vol. 4, pp. 3–5, 2010.

https://www.researchgate.net/profile/David-Miller-65/publica...


Hello Morphle, I tried to email you regarding the post you replied to a good while back about an ISP-related post. It got bounced from the ziggo.nl address. Have an updated contact point?

I have updated my contact info if you still want to chat.


You can try morphle73 at gmail dot com.

Email servers do reject messages quite often, for example when they look like spam.


I'm still digesting this interesting paper but the Figure 3 chart is particularly informative as it puts the whole aspect of electronic and optical computing into perspective.

I cannot recall ever having seen Power Dissipation versus Compute Performance together with Total global power generation, Total global data center electricity consumption, Electronic thermal noise limit and the Landauer Limit all graphed together before. Presenting the data in this fashion provides a stark and very clear overview of what's actually possible together with the theoretical limits. Graphing the Landauer Limit is a masterstroke because we can instantly see computation vs power efficiency for any given tech.

I think the visual impact of this chart is important enough that it should be expanded further, and the authors and/or others should think about doing so. It would make an excellent poster-sized lab wall chart if the graticule lines were subdivided from 10³ to 10 (leaving the 10³ lines bold) to provide finer granularity and allow more detail of the tech together with the dates of its introduction and phase-out, etc.
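For scale, the Landauer line on such a chart corresponds to roughly the following (quick illustrative calculation; the CMOS comparison figure is an assumed order of magnitude, not from the paper):

    import math

    k_B = 1.380649e-23                      # Boltzmann constant, J/K
    T = 300                                 # room temperature, K
    e_landauer = k_B * T * math.log(2)      # minimum energy to erase one bit
    print(f"{e_landauer:.2e} J per bit")    # ~2.9e-21 J

    # A modern CMOS logic operation is on the order of a femtojoule,
    # i.e. some five to six orders of magnitude above the Landauer limit.
    print(f"1 fJ is {1e-15 / e_landauer:.1e} x the limit")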


Figure 3 is intriguing to me: I thought GPUs were orders of magnitude faster than CPUs (because of parallelism), and although the chart is log-scale, I don't see that. I wonder why...


Well, I'd have said so too, but perhaps it's how they've defined the speed. GPUs are generally slower than CPUs when it comes to clock speeds; it's just that they're much faster when hundreds of their cores run in parallel, which is the usual situation.

You've raised a good point though, tables/charts of this type should be footnoted with definitions/conditions etc., and of necessity should only be considered high-level overviews, a bit like the Periodic Table, which contains only key information about the elements.

That said, I'd like to see a more precise version of this table. There are good examples to follow; one often comes across really good lab posters like this where the main chart is footnoted with smaller charts detailing the specifics of particular items, to avoid clutter and/or to represent info that would otherwise have had to be projected in 3D views.

Edit: incidentally, I find it annoying that so many authors of scientific papers fail to define and label graph axes properly, and the same goes for equation terms. It's all very well for professionals to resort to jargon and shorthand when talking amongst themselves (I even do it myself), as they are dealing with their subjects on a daily basis, but it's a different matter in publications, where papers are read by a wider audience.


A lot of people seem to be mentioning the 'size' of electrical transistors, but this seems to ignore complexity scaling (which is non-linear)? The lower complexity of optical chips (due to large switch sizes) could be made up for by the faster switching speeds, which offer a linear increase in performance, if I'm not mistaken?

Really interesting paper!

Edit: Advantages of multiplexing are very real too!


Indeed, faster speeds, greater efficiency over distance, lower power, higher density interconnect, … it creates a very different balance of constraints from what electronics are optimized for.


I was kind of expecting the first paragraph of this paper to explain first and foremost how they solved the switching problem (i.e. a transistor) using optical only components.

After wading through the paper for 10 minutes, I still haven't found the answer. If someone spotted it, please point to where they talk about it, I would be grateful.

Or I could go ask an AI to find the answer for me I guess.


I believe this:

The almost canonical way of performing all-optical switching and logic is to use semiconductor optical amplifiers (SOA) and exploit their cross-gain modulation (XGM) or cross-phase modulation (XPM) capabilities[21]. With very reliable devices having been shown over the past 20 years[22], SOAs have proven useful for various types of all-optical operation, including decoder logic[23, 24] and signal regeneration[25, 26, 27]. The recovery time of the SOA limits its performance, but it has been shown that more than 320 Gbit/s[28] all-optical switching is possible, with some implementations enabling even the Tbit/s domain[29].


And at 1.5 um. Compare that to a 3nm electron architecture.


Are you talking 3 actual nanometers or a "3nm" process? It's difficult to compare.


Also, I remember when, 20 years ago, people said building transistors smaller than 45 nm would be impossible.


I think the wavelength of light is a physical brick wall.


The optical equivalent of transistors have useful applications, but there are many shipwrecks on the shore of general-purpose optical computing, going back decades. I need more than a paper to get excited.


At least they built a PoC. Many others just do simulations.


A “Tiny Zork™” might be a good first step on the way to optical computer Doom™.

I once created a Zork clone on a 4096 byte Tandy TRS-80 Pocket Computer, with a one line text display (and graphics using 64 3x2 pixel “bitmap” characters).

My Zork had a small map, a simple parser and some of the starting objects and puzzles from the original. It is incredible how many efficiencies you can find when you have no alternative and think you are building something really great at the time.

And if you think that’s something, now imagine playing pong against it across a 3 x 48 pixel screen (1x24 characters, of 3x2 pixels each). Every volley a straight shot, or lots of rebounds, as the ball made its way across the “table”.

So yes I was an early game developer and I am so prepared for 4-bit optical so my special skills can be appreciated once more!


It's ironic that they cite Miller's work, but don't address the main conclusions from that work, i.e. that optical computing is horribly inefficient. Photons are bosons and therefore are very reluctant to interact, essentially requiring nonlinearities. The issue with nonlinearities is that they fundamentally require comparatively high optical intensities.

The authors mainly address the issue of integration density (which is also an issue), but not in sufficient detail the problem of efficiency. They hand-wave this away by referring to 2D materials, but 2D materials are not a panacea. It's true that they exhibit very strong nonlinear coefficients (although I'm unsure whether even that would be sufficient to overcome the efficiency challenges), however the overlap between the optical field and the 2D material is fundamentally very small (a single sheet of 2D material in the plane of propagation), so the observed enhancements have been very modest.


First of all, you are correct that nonlinear optics usually requires high field strengths. But...

>Photons are bosons and therefore are very reluctant to interact, essentially requiring nonlinearities.

Please don't throw out random sciency terms. First of all, interaction is pretty much by definition nonlinear. Second, photons are not reluctant to interact. Photon-photon scattering is negligible (which has nothing to do with them being bosons, as gluons and mesons readily demonstrate), but nonlinear optics doesn't rely on photon-photon scattering.


Electrons interact with each other, photons don't.

What magical device do they use to take TWO separate 1-hot encoded optical signals and produce a single 1-hot output? Figure 6 just shows a black box labeled "decoder" which is never explained, anywhere.

I think this word salad^H^H^H^H^H paper might have been produced by an LLM.


Photons do interact with each other:

• Gravitationally.

• Conversion to matter-antimatter pairs.

• In non-linear optical media, which is technically all substances, but effect strength varies.

Of course electrons are involved in the last of those, but not in the way you mean. In non-linear optical media, the electrons remain bound to their atoms and act through orbital resonance effects.

The last of those is a realistic path to a general-purpose photonic computer. I worked on a design for one, and I was surprised to find our design would run not much faster than a good electronic computer, while being larger, due to optical wavelengths. But it may have been more energy efficient if we'd built it. Or rather, the calculation part may have been more efficient, with memory being less efficient than transistors. Also we didn't spend much time optimising it.

Small photonic machines are good for particular calculations such as energy-efficient FFTs, but that is smaller scale than a general purpose processor.

(If anyone is interested in photonic computer design, feel free to get in touch!)


Yes, I know about gravity and antimatter. Let me know when those become relevant to microelectronics.

And, nonlinear optical media do involve electrons -- that was precisely my point. And that's why those "optical" computers perform no better than electrical computers: because they are in fact electrical computers that use photons for communication, not for computation.

But using photons for communication is nothing new, we've been doing that since the late 1970s.


> And that's why those "optical" computers perform no better than electrical computers: because they are in fact electrical computers that use photons for communication, not for computation.

No, in this case photons would be doing computation across a substrate with electrons in it.

Similarly, in electronics the electrons do the computation on a doped substrate. We don’t say the doping is doing the computation, although it defines what the computation will be (I.e. the logical behavior).


This is the best paper I've read in a long time -- this has to go in my "Top 10" favorite posts of all time on HN...

(To PaulHoule: Another truly excellent post of yours to HN, thank you very much, the HN community and myself appreciate it greatly!)

Anyway, let's delve into it -- here's the key quote, IMHO:

>"As the previous discussion showed, SUBLEQ is, of course, not the target realization for optical computing. Its purpose is to showcase the simplest form a general-purpose optical computer could take and an intermediary step we take.

It can be implemented with less than 100 logic gates and, given enough memory, able to emulate a full x86

with a graphics card running Windows and Doom™ loaded, while crunching AI models as a background task (admittedly all extremely slowly)."

Now that is truly awesome!

Also, it should be pointed out that if SUBLEQ could be implemented optically, it could also be implemented digitally, say, on the smallest of small gate-count FPGAs...

While such an FPGA soft CPU would not be fast, it would definitely be interesting, and probably very simple (comparatively!) to implement!

(Also, it might be implementable on a tiny IC, for example, Sam Zeloof's "Z2" 1,000 gate IC: https://www.youtube.com/watch?v=IS5ycm7VfXg)

Anyway, 5+ Stars for this excellent paper!

Upvoted and favorited!


I like the idea of an optical delay line for registers. For the other parts, a great amount of work needs to be done to achieve something working, so unfortunately it's an extremely risky project for now; it would need some time (years?) to implement all the parts alone.


This article teaches optics people about compute arch.

Which is maybe fine, but it's still not clear how to implement these components.



Don't have time to read it all but the abstract states,

"With our research, however, we are focused on the phase thereafter. Once optical interconnects and interposers have been fully established and are the main mode of inter-chip and intra-chip communication, solving the 6× inefficiencies... "

Therefore this paper is more about a computer architecture that will be in place AFTER general-purpose hardware swaps from electrical to photonic communication, but we aren't there yet. It also seems the paper is more about tackling the next phase of `energy efficiency` problems that will arise after the swap.

Still useful info to consider, but I agree with most here that these click-bait titles in research are abused. Then again, I can't really argue; it got me to click ;)



