Overcoming Intuition in Programming (amasad.me)
113 points by bpierre on Jan 4, 2016 | 78 comments



I talk with a lot of programmers that think that abstracting everything is super important. Everything must be a black box, and the programmer should have no idea what's happening inside of it. Their code is an over-engineered mess of interfaces, dependency injection, and code so deep and isolated that you have no idea how anything ever works (I mean like even if you had the source code it's so convoluted you could never make any use of it.)


> I talk with a lot of programmers that think that abstracting everything is super important.

OOP languages basically back you into this corner once a problem becomes complex enough. As an example, DI is supposed to decrease the coupling so that tests can actually test units and not wind up performing integration testing. Low coupling results in a more agile codebase, but does not intrinsically lead to more understandable code.

What goes wrong is that people think you need to "DI all the things." Like any tool, it can be used incorrectly and, from what I have seen, violating the single responsibility principle [when using DI] is what most often leads to unwieldy code. "Got a model class/object? That needs an interface too."
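To make that concrete, here's a minimal, hypothetical Java sketch of DI used as intended (the Clock and ReportService names are made up purely for illustration): the test controls the dependency and so exercises only the unit.

    import java.time.Instant;

    // The dependency is expressed as the interface the service needs, nothing more.
    interface Clock {
        Instant now();
    }

    class ReportService {
        private final Clock clock;

        ReportService(Clock clock) {  // injected, not constructed inside the class
            this.clock = clock;
        }

        String header() {
            return "Report generated at " + clock.now();
        }
    }

    class ReportServiceTest {
        public static void main(String[] args) {
            // A unit test can control time with a fake; no real clock, no framework.
            Clock fixed = () -> Instant.parse("2016-01-04T00:00:00Z");
            ReportService service = new ReportService(fixed);
            System.out.println(service.header());
        }
    }

A plain model/data class, by contrast, gains nothing from getting its own interface - that's the "DI all the things" failure mode.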

Abstraction is not to blame; the incorrect usage of abstraction is, and that is sadly what you are taught in school and college. Spending even one day learning a functional language is enough to drastically improve your understanding of how abstraction is supposed to work in OOP languages.

> I mean like even if you had the source code it's so convoluted you could never make any use of it.

Some IDEs can help with this (VS2015 just added "go to implementation" for C# interfaces), as does the most reliable way to understand code: stepping through it line-by-line (or at least method-by-method) in a debugger - once you have a run-time itable the callee is no longer ambiguous.

TL;DR: it's a necessary evil for OOP languages, but it is often taken too far due to bad theory that is taught to us all.


I don't know if functional languages are much better at preventing abstraction abuse. I mean, a triply indirect function, an incredibly pointless (aka point-free) chain of applications, or a heavy reliance on data flow can make a program difficult to understand, and in some languages (the lazy ones) stepwise debugging is difficult.


OO suffers from the design pattern kind of abstraction as well as the explosion of nouns kind of abstraction because of trying to model everything as analogies to real world concepts rather than focusing on what the program is meant to do. Functional programming suffers from a different kind of gratuitous abstraction, namely the category theory abstract nonsense kind.

I think the main source of overabstraction is the idea that abstraction is a virtue in itself rather than only a tool to accomplish a task. This often takes the guise of different artificial value systems like "testability", "decoupling", and "reusability". These can be good things, but only as a means to an end. People lose track of that sometimes.


I don't think that is an accurate portrayal of OO. There are plenty of nouns out there that do not model real world concepts. I can create an "expression" class/object in an OO program (and do all the time!) to represent parts of an abstract syntax tree. As far as I know, expressions don't exist in the real world. Human language is quite powerful like that.

Overabstraction is often simply a matter of poor problem understanding. Sometimes you don't really know how to solve a problem right, so you bite off parts of it and work your way through many layers until it is solved. The abstractions are not ideal, because not much planning was involved and you didn't actually have the experience to generalize appropriately, but...you needed some way to decompose the problem.

Abstraction is a tool. It has a cost, but sometimes you need to pay it and sometimes you are better off not. We really need better ways of abstracting and, when necessary, unabstracting.


I think that's right, especially the second paragraph about the cause of the explosion of nouns. A common approach to OO programming is to take the problem and identify the nouns in the problem, and then make a class for each of those nouns before having a clear idea of how the program is going to work. This is especially prevalent in combination with TDD. This is a recipe for disaster if you ask me. A good example of this is when Ron Jeffries tried to write a sudoku solver.


Agreed.

> identify the nouns in the problem

This is what I was getting at by mentioning education as a possible culprit. You get out of university with this bad habit right off the bat, instead of the correct one:

Identify the responsibilities in the solution.

Identifying all the nouns in the problem sets you up for failure because you are implementing a solution before you know what the solution actually is. Said another way: you are in a very literal way implementing the problem and not the solution.


What is so wrong about that? You have to understand the problem in some way. Perhaps we could spend a lot of time at the whiteboard understanding the problem waterfall style before we design the actual program? Realistically, this isn't very effective either.


There is nothing wrong with noun identification in itself; you have to solve the problem in some way. The functional approach avoids the nouns, and the solutions can be more elegant, but man, coming up with those solutions is much harder. Decomposing a problem without nouns is like doing...math: it is much more difficult than just talking about it! Once you have done the math, you really understand the problem, but that is a very steep mountain to climb.

I keep coming back to RPG's "worse is better" essay. The problem is that while the artifacts that we create might be horribly messy, they are better than the artifacts that are never created at all. Evaluating a program purely by its end state does not capture the entire essence of what it means to write a program, or even most of it.


> Once you have done the math, you really understand the problem, but that is a very steep mountain to climb.

s/really/actually/

And yes, it is hard, but it actually works.


Right, and in the meantime your competitors have shipped and taken over the market with their sub-optimal understanding of the problem, and you can't sell your solution even though it is perfect.


I'm not saying that functional languages don't have their own problems; however, for some reason that I can't explain they increase your wisdom regarding OOP languages - learning one did that for me at the very least.


Probably because the more examples of programming paradigms that you learn, the better you can generalize, erm, abstract that experience into good form.


Dependency injection does not decrease coupling, it hides it.


Writing code against IList instead of ArrayList, or IDatabase instead of PostgresDatabase unambiguously decreases coupling.


Not if you explicitly configure the IList to be an instance of ArrayList and expect it to behave like an ArrayList. In that case, just taking an ArrayList is the more honest and more readable option.
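In rough Java terms (List/ArrayList standing in for IList/ArrayList; the code is a made-up sketch, not anyone's real API):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.LinkedList;
    import java.util.List;

    class CouplingExample {
        // Coupled to one implementation: callers must hand us an ArrayList.
        static int lastOfArrayList(ArrayList<Integer> xs) {
            return xs.get(xs.size() - 1);
        }

        // Coupled only to the List contract: any implementation will do.
        static int lastOfList(List<Integer> xs) {
            return xs.get(xs.size() - 1);
        }

        public static void main(String[] args) {
            List<Integer> linked = new LinkedList<>(Arrays.asList(1, 2, 3));
            System.out.println(lastOfList(linked)); // fine: only the interface is assumed

            // The caveat: if the code quietly relies on ArrayList's O(1) get(),
            // the List type hides a real coupling - swapping in LinkedList still
            // compiles, but every get() becomes an O(n) traversal.
        }
    }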


Well-said.


There are several things contributing to over-abstraction. Two that I can think of:

1) Professional programmers are constantly hammered with the idea that abstraction and decoupling are always desirable - you've probably heard about TDD. Somehow OOD has been taken over by this school of thought and there are very few good resources on what good OOD is and looks like.

2) Dynamic OO languages force one to have high coverage and many tests to make up for the lack of static verification.

And I don't agree with the TLDR, over-abstraction is not a disease that affects only OO languages.


> over-abstraction is not a disease that affects only OO languages

Agreed, it's present in other languages and has been with us a long time. It does seem to have reached a fever pitch in mainstream OO culture. OO seems to encourage taking even a small piece and giving it the full-blown treatment of layers, design patterns, DI, and the rest. Because each piece is a whole, in a way, so it deserves it, right? Give each part of a program that treatment, and the whole becomes unwieldy to a degree that was seldom seen before OO.


Are they mostly Java, C#, or "advanced C++" programmers by any chance? I've noticed such designs seem to be the norm in those cultures. On the other hand, C and Asm programmers tend to be more KISS+YAGNI in their code. Perhaps it correlates with the ability of the language/tools to make it easy to generate huge amounts of code without much thought: if you can create a dozen classes with a dozen methods by a few tens of clicks in the IDE, you're far more likely to do so compared to writing each one of those classes, functions, or even the instructions in them manually.

It's a form of cargo-cult thinking combined with the fact that a lot of beginning programmers are exposed to the "abstraction is good" dogma without ever realising that it can turn into too much of a good thing. Vague statements like the "single responsibility principle" (what is a "single responsibility"?) can encourage a ridiculous amount of over-refactoring, driven by the idea that shorter functions are better --- which is true up to a point, but I've seen it taken to extremes where more lines in the file are function declarations and their opening/closing braces than actual, purposeful code. They almost always argue that a shorter function is better ("after all, isn't it easier to read 1 small line than 10?") but neglect to realise that they've actually increased the complexity of the whole by forcing readers to go through deeply nested function call stacks. Meanwhile, names tend to get longer due to the specificity of these tiny pieces, which doesn't help much with readability. In some very extreme cases I've seen, the name ends up being longer than the code it describes.[1]
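An exaggerated, entirely made-up Java sketch of the pattern:

    class OverRefactored {
        static boolean isUserNameNonNullAndNonEmptyAfterTrimming(String userName) {
            return userName != null && !userName.trim().isEmpty();
        }

        static String normalizeUserNameByTrimmingAndLowerCasing(String userName) {
            return userName.trim().toLowerCase();
        }

        static String greetValidatedAndNormalizedUserName(String userName) {
            // Two lines of actual work, reachable only by hopping through two more
            // methods whose names are longer than their bodies.
            if (!isUserNameNonNullAndNonEmptyAfterTrimming(userName)) {
                throw new IllegalArgumentException("empty user name");
            }
            return "Hello, " + normalizeUserNameByTrimmingAndLowerCasing(userName);
        }
    }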

Instead, I think we should be teaching abstraction as a tool like any other --- use it when it helps reduce complexity by reducing code duplication, and warn against its overuse. Under-abstracted code is tedious and repetitive, but reasonably straightforward to understand, since most people tend to find reading things linearly to be quite easy. Over-abstracted code is highly nonlinear and takes you through an elaborate maze-like path.

[1] http://git.eclipse.org/c/aspectj/org.aspectj.git/tree/org.as...


Objects were never intended to run in shared space. The only real OOP done today by the original definition is your interaction with your DB via SQL.

So now we jump through hoops to isolate things that are extremely resistant to isolation.


Objects were never meant to have public attributes. The only way of interacting with objects should be sending messages. I'm talking about what Alan Kay says about the matter.

Incidentally, besides Smalltalk, there is another language which implements this original idea of OO: Erlang (just view processes as objects and it all fits).


Abstraction means thinking in layers, not having no idea what happens in lower layers.

The pattern you describe is caused by OOP and inevitable in many of today's OO systems, because OO does not offer sufficiently powerful abstraction capabilities [1]. Imperative code often suffers from "We won't attempt to abstract this at all" (which is sometimes better [2]), and functional code sometimes suffers from "powerful abstraction capability which requires a high degree of knowledge to wield". Pick where you want to be on the spectrum.

[1] See AbstractSingletonProxyFactoryBean in Spring. Spring is built by an expert team with as well-reasoned an architecture as is possible in Java; anti-abstractions like this get written because they are the least-bad solution when a shitty problem comes up and the language has you cornered. http://docs.spring.io/spring/docs/2.5.x/api/org/springframew...

[2] "C vs C++ linus torvalds" https://www.google.com/search?q=c%20vs%20c%2B%2B%20linus%20t...


> [2] "C vs C++ linus torvalds"

Linus is a great programmer and in the kernel space I would listen very closely to what he has to say - minus the profanity and insults. Outside of that space he is as subject to cognitive bias as any other well respected authority. There are many respected programmers that will take the opposite view in the C vs C++ debate.


I think it's also important to keep in mind the difference between too much abstraction and choosing poor/wrong abstractions.


I wouldn't call it "intuition" exactly, but more of a "desire to avoid responsibility" --- I've worked with the programmers he describes, the ones who will spend literally hours searching for a library/library function to do some trivial task when they could've written it in tens of minutes at most. The functionality of the one that gets chosen invariably remains 99% unused, adding bloat and making the dependencies more complex. Software ecosystems with many large framework-ish standard libraries tend to encourage this behaviour more.

> we should be careful to not enable the belief that programming should be as easy as gluing things together

Absolutely agree, although a lot of it is.


On the other hand, I have seen many functions that were written in 10 minutes that are utterly broken except for the most common cases, where a library function should have been used because the programmer did not understand the depth of the problem.


Yeah, being able to distinguish these cases is an important skill.

One case I learned the hard way (once!) is parsing/writing CSV files. It seems so simple. But there are 3-5 special cases that are hard and rare enough that you should always use the standard library written for your language.
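The special cases are mostly the quoting rules; a quick, purely illustrative Java sketch of why the ten-minute parser falls over:

    public class CsvGotchas {
        public static void main(String[] args) {
            // A quoted field may contain the delimiter itself...
            String row1 = "1,\"Doe, Jane\"";
            // ...an escaped (doubled) quote...
            String row2 = "2,\"She said \"\"hi\"\"\"";
            // ...or an embedded line break, so you can't even split on newlines safely.
            String row3 = "3,\"first line\nsecond line\"";

            // The naive approach splits the quoted comma into an extra field:
            System.out.println(row1.split(",").length); // prints 3, not the expected 2
        }
    }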

Perhaps admitting your original decision didn't work out and starting over is the real important skill...


Parsing standard CSV isn't that hard (I've written one) when you have a spec to follow:

https://tools.ietf.org/html/rfc4180

But in my experience, a lot of generated CSV is non-standard, and in that case finding a library to do it doesn't help --- you need to write your own parser so you can adapt it to the quirks of the variant(s) you're consuming.


Although a cleaning filter plus a standard parser works well too.


Hmm, I'm not sure I like this. It would ultimately be an assessment of your own ignorance of a subject.

Maybe coding based on researching solutions is a good way to do things - we just need to make it more efficient.

Looking for an existing implementation when a "10-minute implementation will do": how do you know it will do if you've not researched more? When do you stop?

I think risk-assessment should be more of a thing in software development. We can't really know where the bugs might be, but we can know where we'd like them the least?


It's easier to fix those than extract a dependency or remove bloat.


Not when those ten minute functions add up and become a bloated dependency it isn't.


Yep, exactly. 80% of a feature may take 20% of the time to develop, but the rest (edge cases) will take the majority of the time, and that's time you could have spent working on core features for your system. That's why it's almost always better to use an existing library: chances are most of the bugs will already have been encountered and squashed.

I find the whole "do it yourself" mentality to be a form of NIH syndrome.


I have often found that your remaining 20% is not the same 20% as the library author's, unless the task at hand is very constrained and standardized. Everybody has different edge cases. Finding the bugs relevant to your application in someone else's code is much harder than finding them in your own.


My code turns into "someone else's" code after a mere few years, and my coworkers' code is already by definition "someone else's" code, in my experience. Replacing "our" code with a pre-existing library has been a great way to flush out bugs both in our replaced code and in our calling code as well.


If we're talking about newbies I don't think they consciously try to avoid responsibility.


If you can't find a suitable library in 30 minutes, then there probably isn't one.

This is why Perl, with CPAN, is my go-to language.


Edsger Dijkstra's work was almost entirely oriented towards breaking programmer intuition, emphasizing that we should reason /as/ the machine and eventually /ahead/ of the machine. http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF


Wow, he has interesting handwriting.


Yep, you can even get it as a font.[1]

[1] http://www.fontpalace.com/font-details/Dijkstra/


Dijkstra's work is definitely an inspiration. The only one of his views I disagree with is that using analogies and metaphors in programming is always bad.


Thank you very much for sharing this.

Edit: It always feels as if Dijkstra had been a true 50/50 growth/scarcity mindset kind of guy. In any case, it's a very worthwhile read, I think, so I've re-shared your article over here:

https://news.ycombinator.com/edit?id=10837146


> 50/50 growth/scarcity

What exactly do you mean by this, how does Dijkstra represent it, and where can I learn more?


I used to refer to this as "ritual-taboo programming", where people know what rituals work, and which to avoid, but don't understand why. This comes from reading books which are all examples, but don't provide reference documentation. I used to consider this a vice; now, it's normal. It's necessary to deal with the flood of sort-of-working software components one has to deal with today.

Dijkstra was once the head of the "programming is hard and programmers should suffer" school of thought. Few people read his books today.


Dijkstra would agree that programming is hard, but not that those who program should suffer. Quite the contrary, I'd like to add, as he would often point out that programmers currently suffer because of their leaky abstractions and continuous bug fixing.

He wanted to bring programming back to mathematical fundamentals, so it is simple to reason about. It is my personal opinion that this ethos is the drive behind the renaissance of functional programming.

Programming is hard. I've been programming for 25 years now and if anything, it never gets easier. To find the right abstraction, so only the correct things become simple, is one of the hardest tasks.


> mathematical fundamentals

Unfortunately, a lot of real-world programming doesn't have mathematical fundamentals and is instead what I call "information bureaucracy". Some data items go in one side of the system; we sort, collate, bend, fold, spindle, mutilate them and they come out the other. Also called "business logic", although it is often infuriatingly illogical and crammed full of exceptions to obtuse rules.


There's honestly nothing less mathematical about those domains. The structures of rules and exceptions are mathematical. Business Prolog ideas like Drools are one attempt to systematize this, but others could exist.

Typically business rules systems are an exercise in DSL design since you need to construct interesting combinations and programs within a very novel domain. Doing this is rather mathematical!


>He wanted to bring programming back to mathematical fundamentals, so it is simple to reason about.

This is fine for academic CS, but can become a unicorn hunt - of the worst kind - for real world applications.

>To find the right abstraction, so only the correct things become simple, is one of the hardest tasks.

This is very true, and underappreciated in CS. The problem is that every language has its own set of ready-made abstractions, libraries offer further abstractions, and philosophies - OOP, design patterns, functional programming - offer a further set.

For any given problem, all of these may be non-optimal.

CS could maybe spend the next fifty years working on a useful theory of abstraction design, instead of trying to bake assumptions about how abstractions are supposed to work into the tools we use.

I'd love to see a meta-language which could be used to reason about impedance matching between abstractions and problem sets. I'm not sure such a thing is possible - Haskell is some way along, kind of - but it would be an improvement on languages that make it hard to design new abstractions because they come with rigid assumptions about the kinds of operations that are possible with code and data.

You could argue no restrictions exist in Turing complete languages. But for many problems the limit isn't Turing computability, it's how easy it is to find/invent expressive abstractions that help make hard problems more tractable.


FP is not "easier to reason about" (for humans; it is easier for machines). It's getting popular because it promises composability and reusability: the very thing OOP promised and failed to deliver. FP will fail, too - there's no silver bullet! - and then we'll move to something else.


Yet composability is exactly what makes it easier for humans to reason about. It's the small surface of a composable part that fits into more immediate representations in our mind.

Whether or not it's natural to map functions to and from concepts people might already operate with isn't the same problem. I'd argue that one can be offset with practice, though it's certainly not something that is easy for everyone. In that sense, FP might be a bit more immune to bad intuition, if only because you have to give up on mapping structure to insufficient analogs. The late object-oriented point of view makes this step trivial for our minds and is thus a significantly complicated trade-off to quantify.
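A trivial Java illustration of that small composable surface (made-up one-liners, obviously):

    import java.util.function.Function;

    public class ComposeExample {
        public static void main(String[] args) {
            // Each piece can be read, tested, and trusted in isolation...
            Function<String, String> trim = String::trim;
            Function<String, Integer> parse = Integer::parseInt;
            Function<Integer, Integer> doubled = n -> n * 2;

            // ...and the whole is nothing more than the pieces wired end to end.
            Function<String, Integer> pipeline = trim.andThen(parse).andThen(doubled);
            System.out.println(pipeline.apply("  21 ")); // 42
        }
    }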


You know, in my one big project in Haskell, I chose it for performance alone. I didn't even think I needed any of its other features at the beginning.

Anyway, FP people are nailing composability and reusability to a never-before-seen level right in front of your eyes; you just have to keep them open to see it. OOP did it in its time too, it just hit a ceiling; but there's a reason every imperative language is OOP nowadays.


>It's getting popular because it promises composability and reusability: the very thing OOP promised and failed to deliver.

The unfounded anti-OOP statements on this site are so ridiculous. That is entirely untrue.


FP is also getting popular because we hit the ceiling on Moore's law. Functional code is by default parallelizable (it takes effort to make it not so), which means it's a good selling point for the world in which we add more and more processors instead of increasing their speed.
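A rough Java sketch of that point, with parallel streams standing in for "functional by default" (illustrative, not a benchmark):

    import java.util.stream.IntStream;

    public class ParallelMap {
        // A pure function: the output depends only on the input, no shared mutable state.
        static long square(long x) {
            return x * x;
        }

        public static void main(String[] args) {
            // Because the mapping has no side effects, flipping from sequential to
            // parallel cannot change the result, only (hopefully) the running time.
            long sum = IntStream.rangeClosed(1, 1_000_000)
                    .parallel()               // delete this line and the answer is identical
                    .mapToLong(i -> square(i))
                    .sum();
            System.out.println(sum);
        }
    }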


The experiment referenced about disfluency has repeatedly failed to replicate. Meta-analysis here: https://www.academia.edu/11918605/Disfluent_Fonts_Don_t_Help...


I wonder how much "nice" IDE features like auto-completion and auto-refactoring go towards encouraging an "intuitive" mode of thinking. I've definitely noticed that when I work on Java or C# code in vim, I have to think about it much harder than when I work on the same code in IntelliJ or Visual Studio. The lack of automated refactoring forces me to slow down and think about what I'm doing, and reason about all the implications of my change, rather than just clicking some buttons and letting the IDE take care of the rest.

That said, while I'd like to use vim as my sole development tool, I'm not sure I'd be able to. The (proprietary) framework I'm forced to use is basically impossible to navigate without some kind of IDE support. Yeah, it's not ideal, but it's the state of the world I have to deal with.


Although his actual point had almost nothing to do with intuition, I still would like to argue in favor of intuition. Maybe not while writing code, but I find intuition very powerful when reading, reviewing, and refactoring code.

This is what people often talk about as code smells. When you've trained yourself in how good code looks, then bad code stands out to you. Once intuition has led you towards where attention is needed, then analysis can begin to devise the best solution. I find this to be a critical technique and tell people all the time to train their intuition. It's a tool like any other, to be honed and applied with skill.


I have become very framework-averse over the past few years, as I've switched from corporate programmer to startup programmer and from OOP programmer to FP programmer. Frameworks, especially OOP frameworks, tend to throw you into a universe of nouns and verbs that can be quite complex and completely of the author's making.

Everything works great -- until it doesn't. And then you're spending your valuable time and brainpower learning about some framework instead of about how to do the job you're trying to do. That's almost always a fail. As the author mentions, it just gets worse: many times you're suckered into thinking the answer for your problem is yet another plug-in or framework. Now you have even more stuff to keep track of.

OOP forces you to answer the question: what does this system represent? Once you do that, then you're left with several metric tons of how to represent it well, which can lead to a _lot_ of stuff. FP, on the other hand, forces you to answer the question: what, exactly, do you want the system to do?

As it turns out, unless you're strictly happy-path, the shortest way to an answer is just to do the work. (Of course, nobody is saying to throw out core libraries and start writing in machine code. But if you're bleeding to death from paper cuts on WhizBang 4.0 that you just downloaded last week, you screwed up.)


Esp. when WhizBang 4.0 is a complete rewrite of the codebase, so you get to throw out everything you know about WhizBang 3.0.


And your existing code already has workarounds for 3.0 which are invalidated by the upgrade.


Don't worry about those workarounds. They're on features that were deprecated. =)


I find that I use OOP to model what I want my system to do rather than naive OOP of "I have a square and a circle". Unfortunately, this invariably leads me to Greenspun's tenth rule.


It's an interesting idea and one that I find to be true, but also complicated.

We're also trained as programmers not to reinvent wheels poorly, or not to install a library to do something that is already supported by an existing library you're already using. When you need to do something like, say, open a new browser window, it's not that surprising to find a lot of posts about how to do it with jQuery.

I don't find it always bad. As long as you understand that you are using a library, it's ok to go searching for its edges.


Some of the pitfalls of intuition echo the side effects of programming by coincidence, outlined in The Pragmatic Programmer: https://pragprog.com/the-pragmatic-programmer/extracts/coinc...

> Don’t code blindfolded. Attempting to build an application you don’t fully understand, or to use a technology you aren’t familiar with, is an invitation to be misled by coincidences.


What a convoluted way of saying: "use frameworks and libraries, but not all the time." Yeah, we get it, some people use libraries too much. Some people reinvent the wheel too much. It's not too hard to understand that judgment is key when deciding on one over the other.

Did I miss something, other than the attempt to coin the phrases "framework negative space" and "framework intuitive space"?


> not too hard to understand that judgement is key

The state of most commercial codebases, of a lot of colleagues' code I've worked with, and of my own code as well shows that it is indeed that hard, I'd like to argue. It's easy to state, difficult to do well.


The idea behind the ultimate user interface is for programming languages to dissolve over time, leaving us with a natural conversation between man and machine (and at some point the distinction between man and machine will dissolve).

While I understand what he and others have said, which is what I had believed in the past, I think that the emphasis on thinking like a machine is heavily misguided, as we as users need to steer programming languages toward their natural end (pun intended), which is where AI is headed.

But we're miles away from that so the argument he puts forward is temporarily valid.


I'm not sure this is a great idea. "Natural" communication is incredibly inefficient and lossy. Note that academics communicate via meticulously written papers, often involving very formal language. Business processes are structured in worksheets, graphs, etc. What we seem to yearn for is the ability for a computer to transcend our inability to express ourselves. I am not so sure that is where we are (or should be!) headed. There is a certain internal clarity when you manage to formalize your thoughts and arguments, to test them for logical inconsistency and errors. Programming languages are an inherent helper in that regard.

I think the biggest hurdle for programming languages currently is flow control. Specifically, programming languages are mostly still written top-down, leaving the implicit assumption that code is executed in the order you read it in. This is obviously a false impression for pretty much any program written nowadays. Inventing a new way to present code so that program flow is easier to visualize is, in my opinion, the "next big thing" in programming language UI. This is something tools and concepts like UML have tried and failed to revolutionize, so I'm not sure what the way forward really is.


Well, those are some deep thoughts. I appreciate the response :)


I have been thinking about this since last night, and here are my thoughts. I totally agree with Amjad on avoiding framing programming as an intuitive activity. But on the other hand, framing it as a hard intellectual activity for beginners is also troublesome. I am facing this now as I am helping my younger brother in his Computer Science B.Sc. program, and taking different isolated courses for a few months, two or three times a year, makes academic/intellectual studying for this industry without experience a hassle. I keep listening to him, and one of the most common complaints he has is: "I don't know how all these - OS, Automata, Java, Web, Algorithms and Math - pieces fit together."

I think I can relate to this frustration/confusion because I went through it when I was obtaining my B.Sc. degree but I cannot exactly remember nor reproduce how I overcame this phase, or made sense of "it" all (for lack of a better term).

Can you folks help me with this? How would you guide or help a beginner like this person in programming?


> I get a lot of questions from aspiring programmers on what’s the best tool or languages to learn. It’s almost always a premature question to ask.

I often think about how to answer this question to beginners as well. I think he is right, the answer is probably "it doesn't matter".


I think a good way to handle that is to ask them what they want to end up building. Want to make Android apps? Java. Want to make the front end part of web pages? HTML, CSS, JS with some preprocessors and libraries. Want to work on the back end? Ruby or Javascript/Node or one of the others.

Treehouse's learning tracks (https://teamtreehouse.com/tracks) are useful for a beginner to look through to get a general idea of what that would involve.


Explain that to HR drones.

Tooling matters a lot if you want to be doing this for a living.


Every now and then spend a whole weekend (or just a week of after-work hours) learning the new hot X to the point you can honestly say you know something about it (it doesn't take that much for an experienced coder to learn a new thing) and bam, you have another buzzword for your CV. Select as needed to get through HR filters.


Still not the same as working experience. And honestly, I'd rather get paid for tinkering around.

Resume Driven Development is where it's at. Docker? Our "modular monolith" works just fine, but with Microservices we can now rewrite everything in Rust and Elixir. Lock ourselves in with yet another build/automation tool just to remove trailing whitespace. Add RethinkDB and Redis and that new graph database while we're at it. Let's pray for updated NixOS binaries!

For toy projects I'll stick to OpenBSD, Perl and ancient tools like make, awk or even rc.


Which is great, if the HR filter is simply "does X appear on the resume". If it is "does X appear in previous job experience", you are still screwed.


I don't have many resumes out there today, but I've never let HR people know whether any given X belonged to previous job experience, a home project, or what.

If they wanted to know, they'd have to ask me. Very few times, they did.


I think this is the reason why "functional" programming results in fewer bugs ... or that you need to be really smart to do it. Or because most of the time you are refactoring the program (from an imperative language).



