John Carmack discusses the art and science of software engineering (uw.edu)
271 points by gb on Aug 23, 2012 | hide | past | favorite | 71 comments



With the NASA style development process, they can deliver very very low bug rates, but it's at a very very low productivity rate

I wonder how many non-developers understand this. I, along with the rest of my team, am trained in PSP (http://www.sei.cmu.edu/library/abstracts/reports/00tr022.cfm) and TSP (http://www.sei.cmu.edu/tsp/) and we use it in our day-to-day development.

It definitely helps us keep our defect rate below one bug/kLOC but it's an expensive process that results in very low LOC/day productivity. If very low shipped bug counts are very important to your organization, great. But most businesses these days seem to care more about having a usable product than they do a perfect (or close to it) product. Especially if it's on the Web where you can do multiple releases per day.

As an industry, we really need to bear in mind that different business domains need radically different approaches to software engineering.


Can you follow up with some information about the (P|T)SP experience?

I looked into it about a year ago and thought it was a ridiculous amount of overhead, and the blurbs about the initial data Humphrey used to create it were not persuasive. The "take our class" ads were not encouraging either.

So if it's working for you in actual development, I'd LOVE to hear more about what it does/doesn't do for you.


First, it helps to understand that even before moving to PSP/TSP, we were already in a process-heavy regulated Medical Device development environment, so it wasn't a big change. My understanding is that many teams starting with TSP didn't have much of a process to begin with.

The good: PSP encourages a high level of developer responsibility for quality. So you use a checklist to review your code before running the unit tests. You record every defect you find and if applicable, use that information to make a better checklist. Every team has a TSP-trained Coach to guide the process, answer questions, and keep the team on track. The metrics generated from the process are analyzed weekly to see if the team is on target, if quality is where the predictions say it should be and if there are any roadblocks.

The bad: it can be a major change to how you are used to working. The data collection, while as automated as possible, is annoying. The constant emphasis on tracking time on various stages of fixing a bug/adding a feature adds a noticeable amount of friction to your workflow. While it's not Waterfall, TSP is definitely not Agile. Its entire focus is on predictability of output. It's an attempt to take what works well for Manufacturing and apply some of that to software dev.

In short, TSP/PSP is a good idea at heart for the kinds of development where initial product quality is critical or where you may never get a chance to fix a defect. That's not the case for most modern software projects.


I've gotten to know John a little bit, and I have to say it's a strange feeling to have a conversation with a person of his off-the-charts intelligence. I consider myself to be pretty smart-- was known as "the math whiz" in high school, went to top engineering universities and did very well there, have a couple of (minor) entrepreneurial accomplishments, etc. And yet talking to Carmack I feel like I'm talking to someone who is a full two standard deviations beyond me in raw intelligence horsepower. It's a pretty sobering and humbling experience.

Part of it is that he really does spend 8+ hours per day coding, every weekday, and has done so for 20 years. You'd think his experience level there is about as high as you can get, so it's always cool to hear him talk about the new things he's still learning at his work. I have to wonder if there's anyone else in the world that has both his raw ability and all those man-years of programming experience. It seems like most successful technical people end up doing management and business.

There's a couple of things people probably don't know about Carmack. For one, he can talk intelligently on a lot of different topics. A lot of nitty-gritty aerospace engineering, as well as the history of the space program and NASA for example. He's also up to speed on the latest across a wide range of technology, including things like cleantech.

Second, he has a pretty good sense of humor and can be quite funny. Which is surprising, I think, just because he spends so little time (effectively zero) out being traditionally social, which you'd think would be necessary for getting good at making people laugh. But in conversation he has a pretty good sense of comedy and timing.

An example from his twitter feed that I clipped a while back:

https://twitter.com/ID_AA_Carmack/status/167739644853747712 "Adding film grain, chromatic aberration, and rendering at 24 hz for film look is like putting horse shit in a car for the buggy experience."


He is definitely right about the social aspect of software engineering. But I think he sells a lot of the tools short. For example, he brings up things like Monads, Lambda Calculus, and whatnot - but then immediately dismisses them as not affecting what one truly does.

But I think this really misses the point. In our industry it is really easy to disguise oneself as a professional (or even just someone who knows what they are doing) without really knowing much of anything. Meaning, our focus as an industry has been on making the simple things as simple as possible (e.g. scripting/dynamic languages, code generation, frameworks).

But what I see happening in the Haskell space for example (and even further in languages such as Agda) are attempts to distill things down to their elements. To find the true semantics behind a problem. This not only helps by producing cleaner and more readable code, but it also helps with communication.

I really do believe software is a scientific (and mathematical) exercise. The problem is most of industry does not treat it as such, and hence we end up in the mess we are in.


While I love what Haskell has done for me as a programmer, I don't think that it's finding the true semantics behind our problems. I rather think that it's finding a different basis. To take an analogy, monads and lambda calculus are like Fourier series. For many classes of problems, they produce a cleaner, simpler understanding of the solution. I'd hate to try and solve a boundary value problem with just Taylor series. However, while expressing linear functions with Fourier series is possible, it's less clear than the Taylor series and you're more likely to mess it up.
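
To make the analogy concrete with a standard result: the linear function f(x) = x is its own one-term Taylor series, while its Fourier series on (-pi, pi) needs infinitely many terms to say the same thing:

  Taylor:   f(x) = x
  Fourier:  f(x) = 2 [ sin(x) - sin(2x)/2 + sin(3x)/3 - ... ]
                 = 2 sum_{n>=1} (-1)^(n+1) sin(nx)/n

Both are "correct", but one basis makes the linear function trivial and the other buries it.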

In the same way, monads, arrows, and recursion are great ways of describing many classes of programs. Additionally, they help with communication when your problem is a monad or an arrow. However, certain classes of problems are better described under other paradigms than being forced into the functional one.

This comes back to Carmack's point. It's important to know Haskell, since it's distilled computation down to a set of elements which are useful for describing a large class of problems. Being able to communicate these solutions is important. However, other paradigms are less error prone and do communicate solutions more clearly on other classes of problems.


I agree, but do not boil down Haskell to Monads. The most interesting abstractions are the ones you write. The only contribution Haskell has here is a flexible type system. It does not give you anything for free per se.

Rather, what I am saying is I see a trend in the Haskell community where the developers strive to find the best semantics for a problem and not just stop at the first arrived at solution because it works.

See all the conversations on pipes vs. conduit if you want an example.


> But what I see happening in the Haskell space for example (and even further in languages such as Agda) are attempts to distill things down to their elements. To find the true semantics behind a problem. ... I really do believe software is a scientific (and mathematical) exercise. The problem is most of industry does not treat it as such, and hence we end up in the mess we are in.

These statements suggest a view of science and math I find particularly insidious -- please correct me if that's not your view.

It's the view that software engineering should really be computational physics. All we need to do is figure out the laws, set up the equations for a specific problem, pick the best algorithms for the job, and hit Enter.

It's not unlike how the ultimate watchmaker created the universe. And best of all, there's no "monkey programming" (how I hate that term) in computational physics.

Truly romantic, and hey, I think I've just persuaded myself it's not so insidious after all!


He does seem to have a mental block on Haskell being too "mathematical or abstract" which is something I see a lot of in imperative programmers and am not sure I fully understand.

> I would like to be able to enable even more restrictive subsets of languages and restrict programmers even more because we make mistakes constantly.

Which I think is perfectly in line with the rigid Haskell type system and philosophy of "making the compiler do 90% of the work". The constructs that Haskell provides are very down to earth ways of composing programs that ensure that side-effects aren't implicit and lead to more correct composition semantics.
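
A minimal sketch of what that buys you (the file name here is made up): the types alone say which function can touch the outside world, and the compiler rejects any attempt to hide the effect inside pure code.

  -- Pure: the type Int -> Int guarantees no side effects.
  double :: Int -> Int
  double x = x * 2

  -- Effectful: IO in the type makes the side effect explicit.
  readConfig :: FilePath -> IO String
  readConfig = readFile

  main :: IO ()
  main = do
    contents <- readConfig "settings.txt"  -- hypothetical file
    print (double (length contents))       -- pure code composed inside IO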


I program mostly in Objective-C nowadays, but I started professionally with PHP. When I first started Objective-C, I found it really constricting in comparison. In PHP you can do things a thousand ways, most of them terrible. In Objective-C, specifically with Cocoa, things are a lot more rigid and prescribed. I found this frustrating, but love it now. It's made my PHP better too when I do occasionally go back. It forced me to think more about architecture. I also understand why my CS dept taught us Scheme first, not Java.

To link that back to the post, this is the type of constriction he's talking about to make better programmers. Cocoa and Objective-C restricted me to writing at least halfway decent code. With PHP, because of its flexibility, you're free to get things done quickly, but in a terrible way. Sure, with PHP you can do things right too, but it takes a lot more self-discipline and also a priori knowledge.

Sorry to post yet another rag on PHP.


It's not ragging when it's constructive with a thoughtful argument.


I would suggest watching his entire talk on youtube: http://www.youtube.com/watch?v=wt-iVFxgFWk


I was glued to the screen for the entirety of that talk. That guy is completely engrossing to listen to.


Three and a half hours?? I'd better grab some popcorn then.


Definitely worth watching. It's more like a tech talk than a keynote. Extremely good insight in there.


Running your code through static analysis can be eye-opening. And just like when you opened your eyes for the first time... you'll probably cry.


If I had a dollar for every time I saw this...

if(obj == null && obj.isValid())

I'd have approximately 960 dollars per project.


> if(obj == null && obj.isValid())

Why would you encounter this (often)? It seems to me that this code would never evaluate to true.


This is Java code. If you test for a null reference AND THEN dereference it, you get a NullPointerException. The check was supposed to be "!= null".

This check is obviously erroneous, and found via SCA. I was saying "if I had a dollar for every time I found this [via SCA] I'd have a lot of money."


I'm curious as well. I understand the bug/typo, but I don't think I've seen it more than a handful of times in 20+ years of C-like languages (mostly game development). In contrast, typing a single '=' instead of '==' seems a more common error IME (although still not frequent enough to drive me towards Yoda conditions to catch it).

Why do you think you are encountering this particular mistake so often? Is it something about your projects, your organization, your colleagues, or Java itself?


It's a braino, I think. Instead of the right thing, which would be what mathgladiator posted below, this is a guaranteed null pointer exception when obj actually is null.


I hope he means

> if (obj != null && obj.isValid()) { }

Which is semantically correct due to the fail-fast (short-circuit) evaluation of AND, though it does require a double take since it's strange from a logic point of view. This is one of the reasons that brace languages are hard to optimize: you can't commute things that ought to commute.


That statement can't be commuted because it expresses a concept that can't be commuted.

If the two checks were independent (pure functions), it's not difficult for a compiler to determine that given enough program visibility.


Static code analysis has a long-term benefit as well as the more obvious short-term benefit. That is, it teaches us to be better developers as we strive to have the static analysis catch fewer issues in our code the next time [1]. I used static analysis to improve my style for C, Python, and most recently Ruby [2].

I think it had a lasting effect on my personal coding habits. But every once in a while, I will use the tools on my new code and they still find things. I would probably benefit from being more persistent in using these tools.

[1]: This assumes that the issues caught by your static analysis tool are valid concerns, which in my experience, they tend to be.

[2]: Some static analysis tools that I've used with Ruby are reek, roodi, flay, and flog. Reek and roodi report code smells. Flay reports structural similarities (opportunities for refactoring). And flog estimates the complexity of your methods.


Static analysis is essential for anything that lives solely in the domain of syntax and style. Detecting bad smells (including duplication and excessive complexity) from when you got tired or interrupted is perfect for it, and is a great heuristic way to find areas that need more attention. In Python and I assume Ruby, anything that bears on function is more reliably detected with unit tests.


What tool did you use for Ruby?


I mentioned that in my second footnote.


Equivalently, in a very introspective language, running old code through new unit tests is eye-opening...


The quest for perfection may be futile.

DNA is also code, and it's full of bugs. That code lives for hundreds of thousands of years, if not millions.

Biological processes offer the suggestion that your system can be functional in the face of constant failures and random variations in behavior.

Biology can even offer a very high reliability rate. While we get sick all the time, and people are born with all sorts of genetically disadvantageous traits, many key processes are mind-bogglingly reliable. (No sight vs. no sense of touch: compare the rates of blindness to the rates of congenital analgesia type 2.)

While the math behind CS offers tantalizing guarantees of reliability, the reality of software development and developers delivers far lower reliability.

I think it is a fascinating thought experiment to imagine a development process where instead of writing any code, all you're writing is tests (or feature descriptions) and let the code adapt to the environment you've defined.


The quest for perfection may be futile.

Agreed, and I think it's easy to observe that fact with nothing more than your DNA example. In biology, perfection will always be outcompeted by "good enough."


One of the things I've been persuaded of is that software writing is fundamentally a non-scientific activity. Outside of the time and space constraints for a given subsystem, the craft of software is almost entirely subjective and limited only by mental capacities and flaws.


Richard Feynman wrote[1]:

"We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations. The whole idea of a four-vector, in fact, is an improvement in notation so that the transformations can be remembered easily."

What he said about mathematics applies, I think, even more to programming.

[1] The Feynman Lectures on Physics, Volume 1, Chapter 17


  At PARC we had a slogan: "Point of view is worth 80 IQ points." It was based on a
  few things from the past like how smart you had to be in Roman times to multiply two
  numbers together; only geniuses did it. We haven't gotten any smarter, we've just
  changed our representation system. We think better generally by inventing better
  representations; that's something that we as computer scientists recognize as one of
  the main things that we try to do.
Alan Kay http://billkerr2.blogspot.com.au/2006/12/point-of-view-is-wo...


According to this, multiplication wasn't so bad: http://turner.faculty.swau.edu/mathematics/materialslibrary/...


Actually, real Romans did multiplication on an abacus. Reading Roman numerals to/from an abacus is an incredibly natural operation.

In Europe, the disappearance of abacuses was directly tied to the rise of Arabic numerals.


Without a table? Just from memory? I think I'd prefer decimal.


Actually, I think one of his strongest points is about making mistakes -- everyone makes them, and everyone makes the most amateur of them. If you haven't tried running SCA on your code, try it sometime. I've got a ridiculous ego, but when you get hundreds of warnings on your code, you realize just how imperfect you can be and how impossible it is to be cognizant of all things at all times. This is why teams are almost always better, and I'd prefer to review and be reviewed by someone else -- I'm my own biggest blind spot.


If he wrote a book about programming, I'd buy it in a heartbeat. Too bad that even he probably is still figuring everything out.


Well, O'Reilly recently put out "Making Software: What Really Works, and Why We Believe It" http://shop.oreilly.com/product/9780596808303.do which is a collection of essays backed not by lore, but by actual scientific studies about software development. A few topics touched on in the book:

• How much time should you spend on a code review in one sitting?
• Is there a limit to the number of LOC you can accurately review?
• How much better/faster is pair programming?
• Does using design patterns make software better?
• Does test-driven development work as well as they say?
• How much do languages matter?
• What matters more: how far apart people are geographically, or how far apart they are in the org chart?
• Can code metrics predict the number of bugs in a piece of software?
• Which is better: offices or cubes?
• Does code coverage predict the number of bugs that will be later found?
• What is right/wrong with our bug tracking systems today?
• Why are graduates so lost in their first job?

If you haven't yet run across this book, I highly recommend you check it out. At least for me it really meshed with my own quest to delve further into the mix of social and technical issues around software development. For more info on the book besides Amazon reviews, I also wrote up a blog entry last year which goes into more depth: http://benjamin-meyer.blogspot.com/2011/02/book-review-makin...


The more you know, the more you realize you don't know.

That would be a great book though.


> The more you know, the more you realize you don't know.

The thing I like about John Carmack is that he really appears to live this through and through. I can't say I know him or have worked with him or anything, but whenever I read something like this of his, I enjoy the fact that it is virtually free of ego and posture. He doesn't proclaim or state - he tries things, explores things, and talks about what he found, his successes and failures, and the next hill he wants to climb. He openly admits when things are more challenging than he thought they would be or if something he worked on didn't turn out how he wanted it to.


I submit that Carmack wasn't shaped by the same kind of industry culture that most of the working people on HN were - different time, different kind of software, different funding model. The way our industry incentivizes ego, posture and dishonesty (and punishes the lack thereof) means that young Carmacks either find a rare lagoon, change fields - or get squashed into a different shape. Smart people can still do good work, but the required output is mostly other than technical, and knowledge takes a back seat.

This is what we are spending our lives on!

Finding like-minded people is not impossible, but like-minded workplaces hardly seem to exist. If you are not a Carmack-scale god who can create your own lagoon, you literally can't afford to be straight up about everything. You will fail every interview. That is why this is so rare and refreshing: it is rare because the environment heavily discourages it. Only some people need an ego-oriented environment, but everyone has to eat.

There is actually a parallel in science. People who start out deeply interested in truth and their subject have to exist in an environment which really isn't about those things. They either leave or learn to self-promote, inflate the sizes of their grants and chase fashion rather than bearing down on a particular subject in a disciplined way.

If this culture won't change then we need more lagoons.

In Feynman's words: "So I have just one wish for you--the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom."


I came to that same conclusion a few years ago. Sometime in college I was pretty happy, felt I was making good progress on real knowledge in a field. Now, a few years later, I'm not sure which things I know are real and true, which are high-order approximations, and which are flat-out wrong but still right enough not to cause too many problems.

An example: in grade school you learn how to mix colors, primary colors with paints, etc. Then a little later, in middle school, you learn that for light-based color it's not the same type of addition. Then a little later you learn there are effectively infinite possible sets of primary colors. Then a little later you learn color is actually a frequency or combination of frequencies, and you start asking yourself whether your friend detects these colors the same way you do. Does red to you really look blue to your friend?

It just gets weird.


Look what showed up on HN today: negative frequency for light. Things really do get weird.

https://hackerne.ws/item?id=4429234


Love this example!


Yep.

Also, the smarter you are the more you tend to doubt yourself. Whereas less intelligent people tend to have more confidence in what they're doing.

Bites me in the ass all the time.


"Ignorance more frequently begets confidence than does knowledge [...]" - Charles Darwin

One of my favorite quotes.


The more you know, the more you know that you don't know. As you throw logs on the campfire, the perimeter of darkness grows.


As you throw logs on the campfire, the perimeter of darkness grows.

I like that one. I've always thought of knowledge as a sphere, where the surface is the boundary between what you know and what you don't. As your knowledge grows, so does the sphere, and so does your awareness of what you do not know.


Not saying that business people are less intelligent (far from it, good business people are as rare as good hackers). But pointing out that in the business world, confidence is a signaling mechanism for success when speaking to others outside the field. This easily gives rise to an impedance mismatch between the business and technical worlds. Appropriately enough, the very best business people I know, when talking about an issue in their own field, are just as doubtful about themselves as good technical people.

Fractal difficulty of a field is the impetus for a lot of this doubt, and it seems pretty opaque to outsiders in every field I've seen. This seems to be prevalent here when many hackers do not acknowledge the difficulty of marketing, sales, accounting, etc.


One possible conclusion from that is that most "must-read" programming books are by authors who don't know that they don't know what they're doing.


The confidence or the doubt?


Just read his code. It is impressive.


Same here. I believe such a book would be a very valuable addition to my programming library - as in books, not software ;)


I look forward to the QuakeCon keynote every year. Anyone who engineers software should really listen to Carmack.

Also, this article's kind of silly... there's almost no discussion? Just watch the video.

EDIT: I guess it's not so much an article as a sharing of info. Still. Watch the video.


As much as I respect John Carmack, I have to say that I'm a little disappointed that he is rehashing this meme of software development not being a science. OK, wonderful, the state of the art in his shop doesn't rise to the level of a consistently reproducible, measurable process, but that doesn't mean this is a permanent condition or that it's an insurmountable one.


I didn't get the same impression that you got. To me, Carmack is saying that the biggest thing to focus on is how to enforce the known best practices. That we need to understand the social aspects of software development well enough to reap the benefits of improvements on the engineering side. There is science involved in that, but it is mainly stuff from the social sciences.


Interesting comments on Mac and Linux platforms [1]:

"Other interesting sort of PC-ish platforms, we have... the Mac still remains a viable platform for us. The Mac has never required any charity from id, all of those ports have carried their own weight there; they've been viable business platforms.

I actually think that the Mac is going to become a little bit more important for us. Interestingly, we have a ton of people that use, like, MacBooks at the office, but we don't have any really rabid OS X fanboys at the company that drive us to go ahead and get the native ports out early.

But, one of my pushes on the greater use of static analysis and verification technologies, is I pretty strongly suspect that the Clang LLVM sort of ecosystem that's living on OS X is going to be, I hope, fertile ground for a whole lot of analysis tools and we'll wind up benefiting by moving more of our platform continuously onto OS X just for that ability to take advantage of additional tools there.

Linux is an issue that's taken on a lot more currency with Valve announcing Steam for Linux, and that does change, factor, you know, changes things a bit, but we've made two forays into the Linux commercial market, most recently with the Quake Live client, and, you know, that platform just hasn't carried its weight compared to the Mac on there. It's great that people are enthusiastic about it, but there's just not nearly as many people that are interested in paying for a game on the platform, and that just seems to be the reality. Valve will probably pull a bunch more people there. I know absolutely nothing about any Valve plans for console, Steam-box stuff on there; I can speculate without violating anything.

One thing that also speaks to the favor of Linux and potential open source things is that the integrated graphics cards are getting better and better, and they really are good enough now. Intel's latest integrated graphics cards are good. The drivers still have issues. They're still certainly not going to blow away somebody's top of the line SLI system, but they are completely competent parts that are delivering pretty good performance.

And one of the wonderful things is that Intel has been completely supportive of open source driver efforts, that they have chipset docs out there, and they work openly with the community to develop that, and that's pretty wonderful. I mean, anybody that's a graphics guy, if you program to a graphics API, use D3D or OpenGL, you owe it to yourself at some point to go download the Intel chipset docs. There's hundreds of pages of them, but you really should read through and see what happens at the hardware level. It's not the same architecture that Nvidia and AMD have on there, but there's a lot of commonalities there. You'll grow as a graphics developer to know what happens down at the bit level.

Another one of those things, if I had more time, if I could go ahead and clone myself a few times, I would love to be involved in working on optimizing the Intel open source drivers there.

So, it's enticing, the thought there that you might have a well-supported, completely open platform that you could deliver content through the Steam ecosystem there. It's a tough sell on there, but Valve gets huge kudos for having the vision for what they did with Steam, sticking through all of it. It's funny talking about Doom 3, where we can remember back in the days when they're like, 'Well, should you ship Doom 3 on Steam, go out there, make a splash?' ... I'm like, 'You're kidding, right?' That made no sense at all at that time, but you know Valve stuck with it and they're in a really enviable position from all of that now.

It still seems, probably crazy to me that they would be doing anything like that, you know, but, it's something that's not technically impossible, but would be really difficult from a market, sort of ecosystems standpoint."

[1] https://www.youtube.com/watch?v=wt-iVFxgFWk#t=44m28s


The statement that people are less willing to pay for things on the platform is kind of refuted by how the Linux demographic consistently pays almost double what people on OS X pay, and triple what people on Windows pay, for the Humble Bundles.


And yet he's stating facts based on actual experience doing it.


These two facts are not necessarily in contradiction. At this moment in the current Humble bundle, the average price paid by Linux and Mac users is significantly higher than the average price paid by Windows users, but the total income from Windows is still way higher than the income from Linux and Mac combined, just because of the raw numbers of users.
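
Made-up numbers, just to show how both facts can hold at once:

  100,000 Windows buyers x  $4 average = $400,000
   15,000 Mac buyers     x  $8 average = $120,000
   10,000 Linux buyers   x $10 average = $100,000

Linux pays the highest average per buyer and still contributes the smallest total.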

It looks to me like there are some Linux users who don't mind buying games (and do not mind paying premium for that), but the majority is not interested in buying games at all, regardless of the price.

Moreover, my own experience with buying games is that (digital distribution aside) it was practically impossible to get Linux versions of games (well, at least in the Czech Republic, where I used to live). And if there was a Linux version at all, using it meant buying a box with the Windows version and then patching it. So, even though I use Linux for work, I used to play games almost exclusively on Windows (and then on Xbox, which made things even easier).


So do we, based on the public data provided by the Humble Bundle. He captured, and may have analyzed, his own segment of the market (the last time 8 years ago!); we can do the same with another segment.

There's really no need to be so snappy.


Okay, fair enough, I concede that he's probably right.


Don't forget the number of people.


Software development is usually about creating some sort of competitive advantage. And a fundamental tenet of competitive advantage is differentiation.

Standard implementations/algorithms/patterns are commodities and are purposely so.


The talk itself is well worth the 3 hours.


I just finished the devbootcamp program, so as a person who was thrown into OO programming with no real CS underpinnings, it is interesting to hear him talk about the social component. It is also pretty interesting to hear him talk about making mistakes and the need to just get things done versus optimization.


[deleted]


...who cares?

Do you have any thoughts or comments about the actual topic of the post?

This is an article written by one of the most innovative, influential, and successful software developers in the world, giving a lot of interesting insights and thoughts on his craft.

...and you're concerned about the HTML/JS of some shitty wordpress template? We get it, you probably work in web development. Yes, poorly-coded websites exist. That's not what the content of the post is actually about.

This is part of the decline in quality of Hacker News. When an article is posted, let's talk about the article, instead of pedantic off-topic garbage.


For me, it stated that Javascript for Safari Mobile was turned off. Same result, though; remove two overlays -- one for the JS message, and another that simply masks the underlying content, and that underlying content is perfectly visible.

I guess it's kind of pointless to comment, but I continue to resent the applicationization of basic content (data). In good part because it remains a security concern.


I don't want to be an ass, but this seems like a lot of vague contradictory rambling. Maybe I missed some big ideas in the parts I skimmed?


I was also a bit confused. He said that stuff like monads is nice, but not that useful in the end. On the other hand, he seemed concerned about how to enforce best practices, among which I'd count monads.



