The danger of "language" in programming (2009) (loup-vaillant.fr)
34 points by phowat on Dec 27, 2013 | 56 comments



I think it's worth bringing Perl into this discussion, as a language designed by linguists to make it easier to express yourself in ways similar to how you think. Thinking about this puts the following statement in a new light:

Natural languages are exclusively used to talk to people, while programming languages are also used to talk to machines.

We also use programming languages to communicate to other programmers (which includes our future selves). Making clear delineations like the one above can either help define a problem or arbitrarily reduce your options without cause. I think this one is the latter.

Perl has many, many nuances. This is often derided by people that prefer more rigid languages. I like to think that it helps me see the programmer intent and thought process more clearly through the code.

Obviously there are downsides to this, just as there are downsides to overly rigid languages. Again, differentiating languages early, based on what we assume they will excel at, may not be beneficial for us. People who move freely between different language dichotomies seem to have success in choosing the right tool for the job, whether that be algorithm, language or platform.


The nuances of a given spoken language take years to master even with regular use. They are used for brevity, cultural identification, conveying meanings that often differ from the dictionary definitions, and many other things that programmers don't need to do. Much legal and engineering jargon is motivated by the desire to remove these nuances.

You don't need nuances for a language to be expressive. Haskell is a formally rigid language that can express many things in a billion ways. Python is a simple language that accomplishes the same effect. Clojure has almost no syntax but is more expressive than almost any language I've seen. I don't want nuance in my programming languages. Nuance is how you get write-once source files and migraine-inducing bugs.


I don't want nuance in my programming languages.

Does your programming language distinguish between period, semicolon, and colon? Are = and == separate operators? Is whitespace sometimes insignificant and other times significant?


Distinctness isn't a synonym for nuance. All programming languages that I'm aware of also distinguish between "toast" and "taste", or "4567" and "4587."


Maybe so, but some programming languages have case insensitive keywords or identifiers, or take into account only the first several letters of a keyword or identifier. Distinctness is nuanced too.


Distinguishing lexical tokens is a red herring. Nuance has more to do with giving multiple meanings to tokens, depending on context, like maybe if the colon also worked as a statement terminator, but with slightly different semantics.


So, you're saying:

> Is whitespace sometimes insignificant and other times significant?

(Yes, there are languages where whitespace is ALWAYS insignificant. Well, that is unless you tell the parser to treat it as significant. Or a construct in the parser temporarily enables significance of whitespace.)


> We also use programming languages to communicate to other programmers (which includes our future selves).

Communication using natural language is fundamentally different from communication using programming languages. The ability to be vague, redundant and even wrong - all of these are features in natural languages, but bugs in programming languages.

> Perl has many, many nuances. This is often derided by people that prefer more rigid languages. I like to think that it helps me see the programmer intent and thought process more clearly through the code.

Nuances obscure intent, because they require you to be actively aware of the possibility of their existence in order to understand code.

> Obviously there are downsides to this, just as there are downsides to overly rigid languages.

I see no downsides whatsoever to reducing the extent to which one can be wrong.


> I see no downsides whatsoever to reducing the extent to which one can be wrong.

Then you need more creativity. :)

Nuances make it possible to express extremely complicated (yet often used) concepts in a very concise manner that is still extremely clear. See for example Python Decorators [1].

Very rigid languages take less effort in learning, but at the same time can require the developer to spend a lot more time and effort expressing certain things. See Java.

[1] http://simeonfranklin.com/blog/2012/jul/1/python-decorators-... (Only 12, imagine that!)


> Very rigid languages take less effort in learning, but at the same time can require the developer to spend a lot more time and effort expressing certain things. See Java.

I do not think of Java as a language with few nuances. Null references, broken covariance for arrays, two non-orthogonal notions of modularity (class-based: public, protected and private; package-based: default visibility), value semantics for primitives vs. reference semantics for everything else... it is all very nuanced! Plus, for all the supposed rigidity, you can break type safety via reflection.

On the other hand, Haskell and Standard ML (especially the latter!) strike me as very simple languages, with a far more rigid notion of safety than Java programmers could ever dream of, but which nevertheless afford lots of expressivity. Far more than either Python or Java.


Good answer. :)

I'd like to point out that nuance density is dependent not only on the language as a whole, but also on the area of a language you're looking at. Compared with Perl's Moo/se, Java's object system is ridiculously small and simple, which is corroborated by things that are extremely simple in Perl OO taking pages upon pages of code in Java.

As for Haskell, do consider that while the base of it is quite simple, just like Lisp, it also has the massive ball-of-wax that is monads, which people have been trying for years to explain simply. [1]

[1] (Though this is mostly because most people trying either don't have the required humbleness to admit they're a hotfix to a core failing of Haskell, or don't dare explain it in those terms.)


> As for Haskell, do consider that while the base of it is quite simple, just like Lisp

Lisp is only syntactically simple. (Admittedly, it is syntactically the simplest.) Semantically, it is still a mess.

> it also has the massive ball-of-wax that is monads, which people have been trying for years to explain simply. [1]

That is a weird thing to say. Monads are simple: an endofunctor "T : C -> C" with two natural transformations "pure : 1_C -> T" and "join : T^2 -> T", satisfying three coherence laws that basically say "the Kleisli construction yields a category". Of course, explaining monads in terms of "bind" instead of "join" is bound (pun not intended) to result in a huge amount of fail.
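
For what it's worth, a minimal Haskell sketch of that join-based formulation (the primed names are illustrative only; GHC's real Monad class is defined in terms of >>=):

    -- A join-based monad class, for illustration only.
    class Functor m => Monad' m where
      pure' :: a -> m a          -- the natural transformation 1_C -> T
      join' :: m (m a) -> m a    -- the natural transformation T^2 -> T

    -- bind is then derived: map first, flatten afterwards.
    bind :: Monad' m => m a -> (a -> m b) -> m b
    bind ma f = join' (fmap f ma)

    -- Example instance.
    instance Monad' Maybe where
      pure' = Just
      join' (Just (Just x)) = Just x
      join' _               = Nothing

The coherence laws then read join' . fmap join' = join' . join' and join' . pure' = id = join' . fmap pure'.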

> [1] (Though this is mostly because most people trying either don't have the required humbleness to admit they're a hotfix to a core failing of Haskell, or don't dare explain it in those terms.)

It is not a hotfix. It is a feature. Haskell's segregation of effects makes it possible to reason about effects in a compositional manner, using equational reasoning.


That's a simple description, not a simple explanation, and yes, there's a difference. An explanation has the additional burden of being easy to understand, which your "simple" explanation is not unless you already have a background in category theory or other relevant experience. What's an endofunctor? What's a "natural" transformation? Is it something more specific than "just a transformation"? What in tarnation is a Kleisli construction? I'm sure you can give good answers to all these questions, but at that point your explanation is neither simple nor easy.

I'm not saying they're bad, I'm saying they're hard, and your pitch needs to be that they're worth the effort, not "come on, they're not that hard". Until I saw your reply to your other reply, I truly thought this was a joke. In fact, the "monoid in the category of endofunctors" "explanation" is a classic joke about haskellites.

edit: typo


Do not conflate objective mathematical simplicity, https://news.ycombinator.com/item?id=6972986, with subjective easiness, which depends on your prior experience.


I'm not. That's the distinction I spent my whole post making.


> Monads are simple: ...

Ahahaha, please tell me that was meant to be comedy and that you're actually aware of the simple explanation. :D


My understanding of the notion of "simple" is based on the following principles:

1. Short definitions are preferable to long ones.

2. Reusable generic definitions are preferable to overspecific ones.

3. Case analysis should be kept to the bare minimum necessary.

The notion of "monad" fits these principles perfectly:

1. "A monad is a monoid in the category of endofunctors." Short and to the point.

2. You cannot possibly get anything more reusable and generic than category theory. (Contrast with "instanceof" and reflection breaking type safety, and essentially depending on luck and the stars being aligned in order to work.)

3. There is no case analysis whatsoever in the definition. (Contrast with: "if a pointer is invalid, dereferencing it is undefined behavior, otherwise...", "if a downcast is invalid, performing it will result in a ClassCastException being thrown, otherwise...")

Note that my understanding of "simple" actually encourages abstraction (for the benefit of genericity), rather than discouraging it. Abstraction might make things less "easy" (this is subjective, though), but in no way does it make things less "simple" (this is objective).


I literally cannot tell whether you're still being funny or serious. Poe's law is in full effect. (It's still pretty funny to me either way.)

That said, try:

Haskell tries to be a language where all code only does this: take input, produce output from it; whenever the input is the same, the output needs to be the same, nothing else may happen, no exceptions whatsoever. Since this forbids things like printing to the screen, reading from a network connection and other useful things, there needed to be a single construct that is exempt from these rules, so Haskell can be useful. Monads are that construct.

Monads are the house rules you bring to your Monopoly game to make it fun.

(Yes, that means Haskell is not a fully functional language, it's just more functional than most.)


> I literally cannot tell whether you're still being funny or serious. Poe's law is in full effect. (It's still pretty funny to me either way.)

I am dead serious.

> Haskell tries to be a language where all code only does this: take input, produce output from it; whenever the input is the same, the output needs to be the same, nothing else may happen, no exceptions whatsoever. Since this forbids things like printing to the screen, reading from a network connection and other useful things, there needed to be a single construct that is exempt from these rules, so Haskell can be useful. Monads are that construct.

Stop conflating monads with IO. Monads just happen to be usable for modeling IO, but they can model other things as well.
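
To sketch that point (the function names here are my own, and nothing below touches IO): the same do-notation works for Maybe, which models failure, and for lists, which model nondeterministic choice.

    -- Maybe models computations that can fail.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    divisions :: Maybe Int
    divisions = do
      a <- safeDiv 10 2
      b <- safeDiv a 5
      return (a + b)    -- Just 6; any Nothing short-circuits the rest

    -- The list monad models nondeterministic choice.
    pairs :: [(Int, Char)]
    pairs = do
      n <- [1, 2]
      c <- ['a', 'b']
      return (n, c)     -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]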

> Monads are the house rules you bring to your Monopoly game to make it fun.

Ironically, when I program in Haskell, I try to keep as much stuff outside of IO as possible. The reason is precisely that IO is usually not fun.

> (Yes, that means Haskell is not a fully functional language, it's just more functional than most.)

No, it just means that IO is a DSL for constructing imperative programs functionally.
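
One small sketch of what that means: IO values are ordinary first-class values that can be built, stored and combined purely; running them is a separate matter.

    greetings :: [IO ()]          -- a plain list of actions; nothing has run yet
    greetings = map putStrLn ["hello", "world"]

    main :: IO ()
    main = sequence_ greetings    -- only here does the assembled program run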

===

Anyway, I have no desire for being trolled, so this discussion ends here.


>> things like printing

> Stop conflating monads with IO.

> I have no desire for being trolled

Wow, that was a clever troll, didn't catch on until the end. Would've been better if you hadn't ended it on an obvious declaration of intent though. :)

--

Edit: In retrospect, and for later readers, I guess I should point out that I forgot one house rule Haskell brings along: any function can only ever take one single argument. Some monads make it possible to bunch multiple values into one. So the Monopoly analogy above is still perfectly accurate.


Taking multiple arguments has nothing to do with Monads. You can either take in a tuple of arguments

    f (x,y,z) = x*y + z
or take them in curried form

    f x y z = x*y + z
where f 3 is a single argument function that returns another function. This ends up being the same as functions having multiple arguments, in practice.
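
For instance, a tiny sketch of partial application using the curried definition above (g is just an illustrative name):

    g :: Int -> Int -> Int
    g = f 3            -- partially apply f to its first argument
    -- g 4 5 == 3*4 + 5 == 17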


As someone who has a passing knowledge of Perl, what do you mean by nuances? It always seemed that Perl the language is pretty minimal.


No. Perl has a lot of syntax; there is a lot of language there, containing a lot of nuance. Python is an example of a language with moderate syntax, probably several times smaller than Perl 5's, assuming Perl is about the same size as Ruby, which is a reasonable assumption given that both depend on complex interactions between lexer, parser, and runtime to fully parse. Smalltalk is several times smaller than Python, and actual Lisp implementations have syntaxes that are 1/3 to 1/2 the size of Smalltalk's.

The above is all objective. You can reduce a language to a formal grammar and count the number of terminals and nonterminals in that grammar.


Not all linguistic nuance is syntactic. For example, the semantics of object models can contain a lot of nuances, as can the lifetimes of variables, visibility of identifiers, etc. While there are ways of encoding semantics (denotational, axiomatic, etc.), the size of the encoding will depend a lot on how closely the language's semantics resemble the semantics of the encoding you choose.


Okay, then. Single letters in regular expression statements.


An interesting nuance, which many early Perl teaching books up until Modern Perl [1] failed to address properly, is "context". Perl allows a function to return either a single value or multiple values. In turn, the function can ask whether the calling statement expects a single value or multiple values. Additionally, this request for expectation propagates through the call stack, from the first statement where it can be correctly determined down to the place where the question was asked.

This makes it possible to do things like this:

   my $first = max( 1, 2, 3, 4, 5 );                       # scalar context: one value

   my ( $first, $second, $third ) = max( 1, 2, 3, 4, 5 );  # list context: several values
[1] http://onyxneon.com/books/modern_perl/


I disagree with this argument. Modern programming is done with groups of people; you are not only communicating your intent to the machine, you are communicating your intent with fellow engineers. It is in this case that nuance is important. For example, C++ references and pointers are exactly the same as far as the machine cares. But the difference is important for communicating intent to other engineers.

This isn't to say that C++ lacks significant flaws, but I don't think nuance and variety of expression is one of them.


Out of curiosity: what intent would you say is communicated by choosing references over pointers, or vice versa? I'm relatively new to C++, but I haven't noticed any real difference beyond personal preference.


If a function takes a pointer parameter, then you have to think about whether it is pointing to one object or to an array.

If it is passed by reference I'd be less inclined (personal opinion) to think the reference is stored somewhere, while a pointer argument might be saved somewhere for some time.

A function invocation with reference looks the same as a function invocation with value, so on a quick glance, someone might think that the parameter will remain unchanged.

It is easy to convert a pass by value to a pass by reference (just put const &, vs changing all dots to arrows with pointers) to speed up a slow copy of a large object, so I may get the impression that that was the reason for using a reference when reading the code.


Pointers can be null; references cannot. Therefore, if you use a reference, you are communicating that nulls are not allowed. You can't new or delete with a reference, so you are also saying something about the lifetime of the referenced object (that this code absolutely refuses to manage it).


This is only partially true. A C++ reference can outlive the object it references. A better argument would be Rust's borrowed pointers, which are statically checked to never point to an invalid object throughout their entire lifetimes.

To talk about lifetimes, you need linear logic in the core language. Merely having destructors is not enough.


This is only partially true. A C++ reference can outlive the object it references.

I didn't mean to imply that it couldn't, just that by using a reference instead of a pointer you are disclaiming responsibility for lifetime management of the referenced item since you can not delete a reference.

C++ being what it is, one can write ref = *(new object()); or delete &ref;, but the intention you are expressing by using a reference is more hands-off in nature. It isn't retargetable, there's no built-in ability to new/delete it, etc.


Manually making sure a reference does not outlive the referenced object (or, at least, that a reference is not used after the object has been destroyed) looks pretty much like "lifetime management" to me. What C++ affords you is merely ownership management.


You are also indicating that the reference is not meant to be "reseated", i.e. that you aren't going to swap the object at the end of it for another, like you could do with a pointer.


Another thing that's easy to forget is that human languages often evolve to be difficult to learn, with this difficulty later being used as a social weapon against outsiders and a marker of status. This evolution is at times deliberate, at other times accidental, but all languages go through this phase; where there's no such difficulty, insiders invent jargon and slang to create the barrier.

Though now I think of it, programming languages aren't immune from this phenomenon either!


Another danger is the focus on the syntax, or maybe the standard library, when talking about a "programming language". Programming is done in an environment. Things like editor support, debugging capabilities, compatibility with operating systems, speed of execution... they all count towards the usefulness of a programming environment and are often overlooked when comparing fizzbuzz implementations.


> There are two ways to deal with bugs. Correcting them, and avoiding them.

Does anyone on HN know of other places to read more about this idea? I strongly believe it, and think it's incredibly important to being able to argue for refactoring and code quality. I would love to read more about it if there are good articles out there.


I can mostly answer this from the perspective of a Perl developer, but maybe this'll show you some ways to go on reading. The things I consider most important are:

   Testing
Shortly put: Ever written software? Ever gone and tested a feature manually by running the executable or clicking around in the web browser? Now imagine you never had to do a manual test more than once, because you put every manual test you did into a little program that runs it for you and that knows how to output and summarize the results. Suddenly you can write more code because you spend less time testing your software, and you test it more because you simply need to run "make test" to run ALL the little programs that test your features. See: https://metacpan.org/pod/Test::More

   Static Analysis
You think you write clean code? You think you don't use anti-patterns? You sure? I know I'm not sure. Now how about running a program that will take apart your source code and run it through hundreds of little rule checks, contributed by hundreds of people, which will cruelly and mercilessly point out where YOU FUCKED UP. See: https://metacpan.org/release/Perl-Critic


The comparison made between English and C++ (spoken/written language and programming language) makes me wonder if people should even be comparing the two at all. They do serve similar purposes in a sense, but what I'm wondering is whether we would even compare them if they both weren't called "languages". If the words used for programming languages and written/spoken languages were completely disjoint, would this point even be brought up?


"It's possible to program a computer in English. It's also possible to make an airplane controlled by reins and spurs." - John McCarthy.


Language Theory is a subset of automata theory that covers both. So at least in some sense, they are comparable. We can place them in Chomsky's Hierarchy and compare the constructs that are meaningful in both.


They share the same node, language, so I don't see how they could avoid comparison. In fact, a programming language could be argued to be a specialization of spoken language. After all, we're not punching in bytes by hand on control cards anymore.


Indeed, the blurring of this distinction became the means by which Larry Wall earned his college degree!

https://en.wikipedia.org/wiki/Larry_Wall#Education

He talks about it in more depth in this interview: http://youtu.be/aNAtbYSxzuA


There are two ways to deal with bugs. Correcting them, and avoiding them. ... Of these two, the only one that is significantly influenced by the structure of programming languages is avoidance.

This is an interesting statement to me, and one that I think may be telling towards the author's experience as a programmer. This is not my way of saying he is a poor or inexperienced programmer! I skimmed through a few other articles on the site and what stood out to me is that he is heavily "academic." Some of the best programmers I have worked with had deep academic experience and were quite adept at writing fast, concise, correct code in many different languages.

What stands out to me is the absence of something that (as is my personal observation) most experienced programmers in non-academic settings begin to realize at some point... While avoiding bugs (by, for example, choosing a language with strict type-checking, or immutable data/vars) is quite important, writing code that can be easily corrected is arguably just as important!

In regards to just the use of a programming language, this comes down to choosing a careful balance of constructs - syntax, features, naming, etc. Coding standards for a project or organization are not just there to stoke somebody's ego or cater to the "lowest common denominator". Well-chosen standards and conventions are there to simultaneously avoid bugs and make code easier to debug and maintain - which is what one must do to make corrections!

It is therefore crucial to insist on it when choosing (or designing) one.

This sentence is what really drove me to comment: No programming language can prevent all bugs. Not even most bugs.

In practice, I have even found that constructs and limitations that are intended to prevent bugs of one type can lead to bugs of some other type. In the worst cases, these limitations can lead to programmers making code that is much more verbose and complicated, which, of course... leads to more bugs that are harder to correct.

IMO, one should choose a language based on many, many more criteria than "avoidance of bugs." Personally, one of my top criteria is to choose a language with which those who will write and maintain the software (now and in the future) are going to be most productive.


> writing code that can be easily corrected is arguably just as important!

The easiest errors to correct are the ones that are detected as early as possible.

> No programming language can prevent all bugs. Not even most bugs.

Sure. No programming language can prevent you from misunderstanding a specification. But some languages can prevent you from missing corner cases in your case analysis, or from redundantly writing overspecific code that works in essentially the same way for a wide range of data types.
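
A minimal Haskell sketch of the first point (the Shape type is made up for illustration); with GHC's -Wincomplete-patterns warning (part of -Wall), leaving a constructor out of the case analysis is flagged at compile time:

    data Shape = Circle Double | Rect Double Double | Triangle Double Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h
    -- If the Triangle equation below were omitted, the compiler would warn
    -- about a non-exhaustive pattern match instead of waiting for a crash.
    area (Triangle a b c) =
      let s = (a + b + c) / 2
      in  sqrt (s * (s - a) * (s - b) * (s - c))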

> In practice, I have even found that constructs and limitations that are intended to prevent bugs of one type can lead to bugs of some other type.

I have no idea what you are going on about. What kind of bugs do features like algebraic data types (Haskell, ML), smart pointers (Rust, to a lesser degree C++), effect segregation (Haskell) and module systems (ML, Rust) lead to? I can only see the bugs they prevent.

Normally, the kind of feature that "leads to bugs of some other type" does not try to prevent bugs in the first place, it just mitigates their consequences. For example, bounds-checked array indexing does not try to prevent programmers from using wrong array indices; it just turns what would be a segfault into an IndexOutOfRangeException.
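
To make that distinction concrete in Haskell terms (lookupIndex is a hypothetical helper, not a library function): the built-in (!!) only mitigates a bad index, since it still throws at runtime, whereas a Maybe-returning lookup puts the failure case in the type, so the caller cannot forget to handle it.

    -- Mitigation: [1, 2, 3] !! 7 compiles fine and fails at runtime.

    -- Making the failure explicit instead:
    lookupIndex :: [a] -> Int -> Maybe a
    lookupIndex xs i
      | i < 0     = Nothing
      | otherwise = case drop i xs of
                      (x:_) -> Just x
                      []    -> Nothing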

> Personally, one of my top criteria is to choose a language with which those who will write and maintain the software (now and in the future) are going to be most productive.

I see writing buggy code as negative productivity. So a language that gives you the illusion that you are writing correct code, when you in fact are not, actually makes you less productive.


Probably my favourite open-ended interview topic for programmers is asking them to rank various properties code might have in order of importance, and explain why they chose the order they did.

For example, one possible list of properties might be conciseness, correctness, documentation, efficiency, maintainability, portability, readability, and testability.

Often, I can learn a great deal about what sort of person I’m talking to just by watching them define their terms, decide what assumptions they think are necessary, and then reason through the resulting dependencies.

I get the feeling that the parent posters (hercynium and catnaroek) might argue for quite different orders, but both with good reasons.


Over the years I've come to the conviction that the 2 most important properties of programs are simply 1. Correctness 2. Maintainability

A program that does not do what it's supposed to is of little value. This is a relative metric, however: a program can do many valuable things right, yet have a few bugs.

But once a program does what we want, what else do we want from it? We want the ability to change it easily, so it can do even more things for us.

Maintainability is also a relative metric, and even harder to quantify than "correctness". However when looking at two ways of writing a specific part of a program, it is often easy to say which produces a more maintainable solution.


Many good candidates seem to anchor on those two properties (correctness and maintainability) as a starting point. More generally, they tend to identify that some of the properties are directly desirable right now, while others have no immediate value in themselves but are necessary to ensure that you can still have the directly desirable properties later. Which ones take priority under which conditions can be an enlightening conversation, often leading to related ideas like technical debt, up-front vs. evolutionary design, and so on.


Bingo!


Our backgrounds and experiences may well be very different, leading to starkly different opinions - but, well... that's programming!

Though I don't know Haskell, ML, or Rust, I do know Java, Clojure, C++, and a few other languages quite well.

In a nutshell, I like type-checking and smart pointers, and module systems and such. They often save me lots of trouble! But not all the time. Sometimes, these features get in the way - either of making the code more flexible, or of making the code more readable. (casts dirty look at Java)

But at the end of the day, no matter how precisely one can communicate to the compiler, and no matter how good one's compiler is at detecting (or even correcting) inconsistency, flawed logic, corner-cases, over-specificity and under-specificity (or the reverse of genericity, take your pick)... We puny-brained humans still manage to screw things up. The compiler can't read the spec, never mind actually cognitively understand the problem trying to be solved... so it's up to us to attempt to translate.

And fill in the inevitable gaps.

And attempt to anticipate future needs.

And finish on-time.

With our pitiful, buggy, meat-based processing organs.

Sure, "readability", "maintainability", and "understandability" are all pretty subjective, but that's part of the point. With the languages I've used, I find that sometimes the "safety mechanisms" get in the way. Working around them with "design patterns" tends to add code that might not otherwise be necessary. And might contain more bugs. I'm sure I'm missing something life-changing by not knowing Haskell or ML, but more important to me is that the code I write be, well... "readable", "maintainable", and "understandable" by whomever might be working on it (fixing, extending, whatever) next!

'Cause experience has shown me that bugs happen. And I want finding and fixing them to hurt as little as possible - whether it be me or some other poor fool.

Oh... also, I couldn't help but have a snarky response to your last remark: (no offense intended :)

> I see writing buggy code as negative productivity. So a language that gives you the illusion that you are writing correct code, when you in fact are not, actually makes you less productive.

FTFY: I see writing buggy code as inevitable. So a language that gives you the illusion that you are writing correct code (by successfully compiling) when you in fact are not, actually makes you less productive.


I got a syntax error somewhere on this line: "Not because it's wrong (although it is), but because it's right."

This is interesting however - how about substantiating it with a concrete example? "C++ does have many nuances. It is a very interesting and very subtle language, to the point even machines (namely compilers) disagree about its meaning."


Good you're not a computer then.


Isn't the original statement about C++ a variation on the Sapir-Whorf hypothesis? http://en.wikipedia.org/wiki/Linguistic_relativity


Sapir-Whorf is a claim about the effect a language has on the cognition of those who speak it. i.e., the claim is about the cognition and not the language. The C++ statement was just a silly reason (IMO) for why C++ is a better tool than other languages for expressing to a computer what a person is thinking.


Let me add a short side-note from the 'Natural Language' side of the table.

TLDR: Comparing programming languages and human languages is a dangerous thing, not just because people differ from machines when being told to do something, but even more so because the daily use of NL by humans is so fundamentally based on our biological and cognitive context that, if you really think about it, the parallels in functionality of these two types of 'language' are interesting to consider but greatly limiting.

Human language works something like this: Agent A wants or feels something. If this involves another agent (B) even in a slight way, there's a big chance that A will choose to communicate something to B.

Before deciding what to say, the following (among other things) is considered:

- A knows that B shares a tremendous amount of similar information with her

- About most of this info, A knows that B knows that A knows this, therefore:

- A can expect B to infer anything that A would like B to infer from what she says.

Results:

- The code (language) used does not itself contain even 10% of the information necessary to 'understand' the situation and what A's motives are for speaking. It merely contains lots of very multifaceted and nuanced pointers, about which any two agents would disagree on what gets priority. [1]

- A huge part of day-to-day communication is extra-linguistic.

Take this example:

You and I walk to the campus library together. I notice a certain bike and point your attention to it. Since you know that I know that it's your girlfriend's, you take it as me saying "hey! your GF's there too!". However, you two might've broken up. If I know this, I might point at it to say "maybe we'd better relocate, mate." However, this totally depends on you knowing that I know (so that you know that I mean this and not the opposite), and me knowing that you know that I know (so I can assume that you will infer what I actually meant out of the possible meanings). [2]

And this is just pointing. Imagine when we start using language to talk about the bike, or about other people and what they told us. Imagine all the management of meta-knowledge required.

Basically, we all do this on a daily basis. We have been trained from birth to make these kinds of considerations subconsciously in order to effectively communicate with others. And not just about what the other person knows. Also about what he expects, what kind of words he uses, what he is looking at, etc.

For more, read something like Tomasello's "Origins of Human Communication" to get started. The more you know about this, the more you notice it around you (and start using it to your advantage). Fun stuff!

[1] I believe this is one of the reasons Google Translate will keep sucking.

[2] above example also courtesy of Tomasello, but retold from memory, so forgive me if I've gotten too creative!


There's also often no single language that a modern software system is expressed in. Even when there's ostensibly only one language used (Python, e.g.), the total system behavior of a website is also typically expressed in SQL, sh, provisioning APIs, web server config, database config and possibly a variety of other DSLs.



