Programming Languages Are Simply Not Powerful Enough (ivanjovanovic.com)
147 points by dylangs1030 on March 1, 2013 | 139 comments



One simple example from everyday life: imagine that you don't have the notion of a sum in your vocabulary. Then, if you wanted to tell someone to sum some things, you would always have to express the process of summation in detail instead of just saying "and then you sum these numbers."

I hear this a lot to explain the idea of abstraction to people, but the trouble with applying it to software is that picking really good abstractions is very, very, very hard. Abstractions evolve over a long time under pressure of peer review and evolution of software systems. It took us ~2000 years to get a solid foundation for mathematical concepts and notation (we understood sums way before that, but you get the idea). We don't have 2000 years to pick really good abstractions for software systems.

I'm convinced that a programming language must be really good at allowing you to change and evolve your programs as quickly as possible as your understanding of abstractions improves. This is far more important than more powerful ways to describe abstractions IMO. Functions are fine. Just let me change things drastically without breaking too much or having to rewrite half the darned thing, and I'll be happy.


The claim is often made that languages like Forth and Scheme are better for building applications than less elegant and orthogonal languages. But the evidence on the ground is overwhelmingly to the contrary. The vast majority of commercially significant apps are built in relatively baroque and ugly languages like Java and C++.

Personally I think that the ability to define your own language constructs is ultimately a lot less useful than things like rich libraries and robust tooling and I'm still waiting to see real-world proof of the advantages of programmable programming languages.


The popularity of programming languages is entirely a matter of historical accident. C flourished while Pascal languished because UNIX was written in C. Was there really a deep technical reason for preferring C to Pascal? No, not really. Indeed, for most of early history, Pascal had far superior tooling to C.

How else do you explain the popularity of languages like Javascript and Objective-C? Javascript is an objectively terrible language, hacked together by Brendan Eich in an afternoon. Yet, it's a tremendously popular language, and tremendous amounts of money and time have gone into building good tooling for it. Same thing with Objective-C. It's a shitty, ad-hoc, cobbled-together patchwork of C and Smalltalk, yet it's tremendously popular now because that's what the iPad is programmed in.


Pascal (or more precisely Borland's version of it) was the first programming language I learned in school. I also used it when I was participating in the CS Olympiads.

Going from Pascal to C was like a breath of fresh air. Pascal is like a combination of everything I hate in all programming languages that I know and I'm glad that the world moved on. C is small, elegant and portable. It's a systems programming language that does what it's supposed to do.

> Javascript is an objectively terrible language

Our tastes differ, and despite all claims to the contrary, Javascript has proved itself to be an objectively good programming language, one that has been burdened with legacy, incompatibilities due to the browser wars, and more recently slow evolution due to its huge popularity and the stagnation of IExplorer. It has quirks and it isn't what it should have been, but considering the context it's the best outcome we could have had.

Name one other programming language that's (1) a true standard governed by a standards body with multiple platform-independent implementations and (2) has fewer problems than Javascript.

With Javascript your argument falls flat on its face too, because all of the competing technologies failed for really good technical and political reasons ... let's not forget VBScript, Java Applets, Flash and Silverlight. Yes, technologically speaking Javascript was and is better than all of them.


There are a lot of problems (not merely "quirks", but outright problems) with JavaScript that have absolutely nothing to do with past or present browser-based implementations. They're problems that are inherent to the language itself.

I'm talking about things like its extremely broken equality operators, its extremely broken array handling (especially the behavior of the length property), semicolon insertion, its broken scoping, its lack of support for proper class-based OO, its lack of support for proper namespacing and modularity, its lack of proper typing, and so forth.
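
To make the length complaint concrete, here is the sort of behavior in question (a small illustrative sketch, variable names invented):

    var a = [1, 2, 3];
    a.length;        // 3
    a[100] = "x";    // assigning past the end silently grows the array
    a.length;        // 101, with 97 "holes" in the middle
    a.length = 2;    // assigning to length silently throws data away
    a;               // [1, 2]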

These are far worse, and in many cases much stupider, than any problems we see with C, C++, Java, C#, Python, Ruby, Erlang, Perl, Haskell, Scheme and whatever other mainstream or semi-mainstream language you want to choose.

PHP is perhaps the only widely-used language that exhibits the same kind of idiocy when it comes to many core features. But at least it has been moving in the right direction, with its much more sensible support for OO and namespaces, for instance. We keep hearing about how ECMAScript 6 will finally bring at least some sensibility, but this is still very much up in the air.

It's incorrect to say that JavaScript is an "objectively good programming language". It's objectively terrible, and all of the evidence proves this very conclusively.


Most of your peeves are addressed by correctness checkers like JSLint, or have been alleviated by community projects. There's a lot of hate here, so I'm going to address your points one by one:

> I'm talking about things like its extremely broken equality operators,

Don't use language features that are broken. Use === and !== instead. JSLint complains about this.
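
For example, a quick illustration of why loose equality is best avoided:

    0 == ""             // true  ("" coerces to 0)
    0 == "0"            // true  ("0" coerces to 0)
    "" == "0"           // false (string vs. string), so == isn't even transitive
    null == undefined   // true

    0 === ""            // false
    0 === "0"           // false
    null === undefined  // false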

> its extremely broken array handling (especially the behavior of the length property),

You're going to have to elaborate here, I have no idea what you're talking about.

> semicolon insertion,

Put semicolons at the end of each line manually. Or don't use crazy multiline expressions. JSLint will flag this for you. Other languages like C force you to do this; why the JavaScript hate?
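
A minimal sketch of the classic ASI trap (function names invented for illustration):

    function getConfig() {
      return            // ASI inserts a semicolon here...
      {
        debug: true
      };                // ...so this object literal is never returned
    }
    getConfig();        // undefined

    function getConfigFixed() {
      return {          // keep the brace on the same line as `return`
        debug: true
      };
    }
    getConfigFixed();   // { debug: true }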

> its broken scoping,

Never put things in the global scope. Always use var. Global-by-default is annoying, but very easy for static tools like JSLint to flag and nag about.
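
A small sketch of the accidental-global problem (names invented for illustration):

    function tally(items) {
      total = 0;                 // no `var`: creates/overwrites a global named total
      for (var i = 0; i < items.length; i++) {
        total += items[i];
      }
      return total;
    }

    function tallyFixed(items) {
      var total = 0;             // `var` keeps it local to the function
      for (var i = 0; i < items.length; i++) {
        total += items[i];
      }
      return total;
    }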

> its lack of support for proper class-based OO,

Javascript's objects are very minimal; they can easily emulate the semantics of other OO languages. In my code I usually emulate the Python copy-superclass-on-class-creation semantics of subclassing.
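
A minimal sketch of one common prototype pattern for this kind of emulation (not necessarily the exact copy-superclass scheme described above; names invented):

    function Animal(name) {
      this.name = name;
    }
    Animal.prototype.speak = function () {
      return this.name + " makes a sound";
    };

    function Dog(name) {
      Animal.call(this, name);                         // run the "superclass" constructor
    }
    Dog.prototype = Object.create(Animal.prototype);   // inherit methods
    Dog.prototype.constructor = Dog;
    Dog.prototype.speak = function () {                // override
      return this.name + " barks";
    };

    var d = new Dog("Rex");
    d.speak();              // "Rex barks"
    d instanceof Animal;    // true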

> its lack of support for proper namespacing and modularity,

RequireJS has effectively solved this problem for me. As per above, you should never put things in global scope anyway.
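
A minimal sketch of the AMD style RequireJS encourages (module names and file layout are invented for illustration):

    // math.js: an AMD module; nothing leaks into the global scope
    define([], function () {
      return {
        square: function (x) { return x * x; }
      };
    });

    // app.js: dependencies are declared explicitly instead of relying on globals
    require(['math'], function (math) {
      console.log(math.square(4));   // 16
    });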

> its lack of proper typing,

If you're looking for built-in static typing, you came to the wrong place. I think it's important to remember that JavaScript the language was designed to be very minimal (IIRC the entire initial language was created in a week); in that regard, it at the very least is not a good fit for static typing. I personally wish that it had stronger types, and can agree with you here that ideally this would be different.

> and so forth.

I guess I don't see your concerns here. I prefer writing JavaScript over C and Java; it allows me far more leeway for creating abstractions through closures and anonymous functions. Think of JavaScript's problem statement: create a simple minimal universal browser-based language. It's hard to imagine a better language in JavaScript's place (Scheme is the only realistic alternative I can think of). Here's a nightmare scenario for you: we could have ended up with Visual Basic instead.

JavaScript isn't perfect, but what language is?


While I don't quite agree with the parent poster, half of your points involve a) best practices, which require programmer discipline, not parser smarts, b) a lint checker, which, again, is not as good as the parser itself forbidding something, or c) using certain libraries, which is not as portable and easy as direct language support.

While I agree that these issues are surmountable, I don't think you've rebutted the point that they are issues with the language itself.

That being said, no language is perfect and for being whipped up in a day, Javascript's pretty nice.


> Going from Pascal to C was like a breath of fresh air. Pascal is like a combination of everything I hate in all programming languages that I know and I'm glad that the world moved on. C is small, elegant and portable. It's a systems programming language that does what it's supposed to do.

I hate C, with its lack of proper array handling and brain-dead string manipulation that opens the door to so many security exploits. Additionally, the lack of modules/namespaces is a joke.

Mac/Turbo Pascal, Ada, Modula-2, Modula-3, Oberon(-2) are all examples of system programming languages done right.


C's array and string handling is extremely compact and powerful in many ways. It does, however, require at least some understanding of pointers. But once you understand how it all works, you can do significant string manipulation with relatively little effort, while still retaining exceptional runtime performance and minimal memory usage.

That said, if you're the kind of person who doesn't understand pointers, there are numerous libraries out there that'll provide a variety of functions for working with arrays and strings more easily. C itself doesn't prevent you from using these alternate approaches; rather, it enables them.


The problem with C, for most application domains, is not that it requires pointer knowledge, but that it requires constant attention to this knowledge.

Combined with the fact that there's no warning when you've stepped outside the bounds of an array, I think this makes it an extremely dangerous language to use for large scale programming.

Years back, a friend was maintaining a system collecting and processing scientific data, in C. Several papers have been published on this data over the course of many years. I remember the sinking feeling we both got - mixed with real fear - when he told me he had discovered an error where an array's bounds had been miscalculated. I don't even want to think about the possible ramifications of that...

For some application domains low level control is necessary of course, or for others the performance gain may make the tradeoff worth the cost.


The problem of people building on top of C is that C is not a language meant for applications. A system collecting and processing scientific data is too high-level for a language like C.

In fact no application should be written on top of a language that doesn't do garbage-collection by default, unless you've got the resources of big companies. Any language that doesn't do garbage-collection by default is dangerous for large scale programming.

On array bounds checking, you can't do such bounds checking at compile time, so you have to rely on runtime checks and metadata. As my highschool professor kept telling us, C is a medium-level programming language. It's optimized for moving bits around in memory, while the code remains reasonably platform-independent. Adding array-bounds checking by default doesn't make sense for a language like C, because it gets in the way of moving bits in memory.


> Adding array-bounds checking by default doesn't make sense for a language like C, because it gets in the way of moving bits in memory.

Any proper language for systems programming allows you to turn off bounds checking when required.

This is the way it should be done: disable the bounds checks only in the few spots that require it, after profiling shows that having them on really affects the performance of the application.


I think we're in violent agreement. It's not the language; it's the usage outside of where it's appropriate.


All the languages I mentioned have the same expressive power as C for pointer manipulation, when required to do so, while being a lot safer by default.


[deleted]


While they are both flawed, JavaScript is nowhere near as terrible as PHP.


> The popularity of programming languages is entirely a matter of historical accident. C flourished while Pascal languished because UNIX was written in C. Was there really a deep technical reason for preferring C to Pascal? No, not really. Indeed, for most of early history, Pascal had far superior tooling to C.

C and Pascal were developed almost in parallel.

Being UNIX's systems language meant that if you wanted to develop on UNIX, C was the default option for most developers.

> How else do you explain the popularity of languages like Javascript and Objective-C?

There is no other option to target the browser, as JavaScript is the only language natively supported in browsers.

Objective-C is the only proper way to develop iOS applications, thus you need to use it if you really want to fully explore the platform.

All the languages you mentioned only became famous because developers were forced to use them commercially.

Back in the MS-DOS days I had zero need for C, Turbo Pascal was enough for everything (with a bit of Assembly).

But the commercial success of UNIX meant eventually I was forced into the realms of C. Luckily C++ was around the corner and provided an escape path to a saner language.

I still miss Pascal and its derivative languages (Ada, Delphi, Modula-2, Modula-3, Oberon(-2), Active Oberon).


The version of Pascal I learned on (early 1980s) had the problems Kernighan refers to in this piece:

http://www.lysator.liu.se/c/bwk-on-pascal.html

Most awfully, "length of the array is part of the type of the array". This made some programming rather painful (strings and numerical programming, for example). Maybe other dialects of that time didn't have this problem, I don't know. I do know I found C to be a great relief.

Javascript is a good example of historical accident, on the other hand. It's got just enough extensibility that its flaws can be worked around. But its popularity is not due to its careful design.


I upvoted you for talking about historical accident, but regretted it when you started in on Javascript and Objective-C. Yes they have some questionable decisions but they are by no means objectively awful.


Given that Javascript was designed in 1994, not 1974, I think it rises past "questionable design decision" not to just use Scheme's lexical scoping rules (i.e. the only sensible lexical scoping rules). Although to be fair, scripting language designers keep getting this wrong for some reason.


Frankly, 'var' is fine most of the time. Keep writing small functions and all that. Besides, you have 'let' now.

I have a much bigger beef with the type system (or lack thereof), the array-and-hashes-as-one-datatype stupidity and the lack of a module system.


Yeah that's a problem. I thought you could get proper scoping by writing "var" in front of everything?


That's function scope, not block scope. You can use the "let" keyword to get block scope since Javascript 1.7.
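
A small sketch of the difference (using ES6 / JS 1.7 `let` semantics):

    function demo() {
      if (true) {
        var x = 1;     // function-scoped: visible throughout demo()
        let y = 2;     // block-scoped: visible only inside this if-block
      }
      console.log(x);  // 1
      console.log(y);  // ReferenceError: y is not defined
    }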


It still took over a decade to get what should have been available from the very beginning. That's a pretty serious problem that repeats time and time again with JavaScript.


Oh, definitely. But I haven't found it overly problematic in practice. I'll point out that Python, a language much more appreciated than Javascript on HN, has no scope smaller than function scope. Which is mostly fine as long as you stick to small functions, which you should anyway.


Well you have missed the point that they chose C over Pascal to implement Unix. I don't think that was by accident.


To call Objective-C a shitty language is, oh I don't know, really really ignorant. It's one of the nicer languages in my opinion.


Yes, this. You have to learn all the abstractions in a language, and the advantage of "batteries included" languages is that the abstractions are (largely) the same in everyone's code, aiding maintainability and re-use. On the other hand, if you build everything yourself, it might be better optimized to your use case, but you'll be the only person who's easily able to use the abstractions.


The vast majority of commercially significant apps are built in relatively baroque and ugly languages like Java and C++.

I'm fairly certain there is vastly more code written in either FORTRAN or COBOL than Java and C++ combined. So I guess that makes FORTRAN and COBOL better than Java and C++ put together . . . ?


That's an interesting assertion. I can't readily find estimates of the number of lines of code in different languages. Nor could I find an estimate of the number of programmers in the world over time, from which I hoped to make an extrapolation. Somebody could write an interesting report about this. Maybe my research wasn't deep enough. Anyway, how do you figure there's more FORTRAN and COBOL?


No. It means that language choice is less important than we like to think it is.


This argument comes up every time somebody suggests an unpopular tool, and I think it's inherently flawed. The core problem is that popularity does not imply--and does not even necessarily correlate with--quality! Just because something is in some real sense better does not mean people will adopt it.

Most people are not willing to learn something fundamentally new. They might be content going from something they know to something very similar, say Java to Python, but they will resist anything truly novel and different. And the others? They're mostly the ones already advocating Haskell or Lisp or Forth or what have you!

Also, people have some really odd reservations when switching to a new technology. They are often not willing to take any visible steps back: the new might be better in a whole bunch of non-trivial ways, but if it's obviously worse on some easily identifiable property, people will avoid it. A new language might have a long list of advantages, but if it has an inefficient default string type, or slightly wonky syntax, or a sub-optimal packaging tool or any other superficial but obvious shortcoming, people aren't willing to make the switch.

Another problem is that there are different kinds of productivity. There are "dense" productivity changes: if you have to write your own JSON library or deal with a broken build system, you'll be spending a contiguous amount of time on it. You'll have to devote maybe a whole day or even a week to getting around this problem. There is no way to miss this. On the other hand, if a language improves productivity in a "sparse" sort of way--say you spend 20% less time writing code and 30% less on debugging and maintenance--you won't notice quite as easily. And yet, over any reasonable time using the language, you'll come out far ahead even if you have to sink in days working around problems and libraries.

A particular--and particularly important--example of this is in learning. As I said above, one of the main reasons people resist new technology is that they don't want to learn. They're too busy to spend a whole week or even month picking up something new. And yet, learning time is essentially a constant expense. Productivity gains, on the other hand, are at least linear in how much you program. If a language makes your code simpler, the gains can even be super-linear (that is, you get more benefits as you write bigger programs). These will dominate any learning time as soon as you actually start using the new technology widely. And yet, since the amount of time spent learning is obvious and the ambient productivity gains aren't, people put a larger than warranted cost on the former.

Coincidentally, this does not only apply to programming languages. I've seen exactly the same sort of behavior in adopting any kind of new, non-trivial technology: Emacs, Vim, Git, Linux...etc.

In short, don't trust popularity. This probably makes me sound elitist (and, to be fair, I probably am), but it's just like music or literature: the popular stuff is usually not particularly good and the good stuff is usually not particularly popular.


Sorry, but this argument comes up whenever somebody suggests that an unpopular tool is merely unpopular and not flawed.

Lisp is one of the oldest programming languages in existence; it's still taught in universities and has been for over 40 years. Age and exposure, and still not popular.

I'm not sure why you keep mentioning new technologies when the article is about old technologies and your examples (Emacs, Vim, Linux) are old technologies! The only example of a new technology, git, has seen an absolute rapid rise and completely changed the version control landscape in just a few years.

A new language is not going to have a long list of advantages -- it's going to have a different list of trade-offs. Because any language feature that is objectively good with no downside or trade-off has already been implemented in some popular language somewhere.

And if you can prove that you're doing something really different and really better but there's a strong learning curve people will learn it. Git is the perfect example.


While I on the whole agree with the sentiment that there are a lot of delusional people here thinking that functional is better even though it's constantly been the new fad and still hasn't caught on, this post is wrong and was proved wrong in the last 4 years.

Because any language feature that is objectively good with no downside or trade-off has already been implemented in some popular language somewhere.

But C# and now C++ and Java have all very recently and quite suddenly included lambdas and closures?

It took javascript, a hybrid language, to show language designers just how powerful those features can be. It took a practical application of the concepts in an almost OOP setting to allow people to understand just why they're so useful.


Not sure why you got downvoted there; I think you're mostly right, or at least have a valid opinion. Maybe it's the word "delusional" :)


They were just wanting to be charitable and prove him correct; obviously only delusional people would downvote such a supreme comment!


Yeah, I didn't really mean it like that.

I think the entire time I've been on HN people have been saying everyone's just about to switch to functional programming. We've had a lot of advocacy for Haskell, Scala and F#, but no big switch. It's fairly obviously never going to happen at this point, but they still say it.

In the same time frame MVC has transformed web programming, with Django, Rails, Symfony & ASP.Net MVC all becoming a norm in web programming.

If functional programming really were that compelling, we'd have seen a similar switch by now. Instead what's happened is that all the main languages have adopted the best bits of functional programming and left the bits which make it hard to write large programs.


Lisp is a huge success, not because people have built "commercially significant applications" in it, but because it has expanded how generations of programmers think about programming.

I'm not defending Lisp specifically (in fact I don't like it very much.) It's that the notion that a language can be "better" or "flawed" in an unqualified or objective way really irks me. The viewpoint that the only reason anyone would ever program anything is to create production software is, in my frank opinion, an intellectually stunted one.


> Because any language feature that is objectively good with no downside or trade-off has already been implemented in some popular language somewhere.

This almost implies that there is no opportunity for anything new in programming language research, which I find laughably depressing, especially if you take "some popular language somewhere" to mean the popular industrial languages. It would be rather sad to think that Java (or C++, or C#, or Ruby, or Python, ...) represents the pinnacle of programming language design, and we're stuck with it and its ilk from now on, forever, since there are no good language features left that are worth implementing.


It doesn't imply that. But I think the idea that programming language research always entails esoteric new, from-scratch languages with no tools is even more depressing. There has been a lot of research over the last 5 decades, and every year more of that research gets put to use. It wasn't that long ago that a cross-platform language that gets dynamically compiled to native code and supports efficient garbage collection would have been considered science fiction.

Progress is ongoing, but it's evolutionary, not revolutionary - and that is a good thing.


I've been hearing exactly this argument since the early 2000s, when I was myself a smug lisp weenie. The thing is, though, that software is by and large a highly competitive business, and any tool that offered the kinds of productivity gains people claim are possible from specific languages would enjoy fairly rapid adoption. Witness the rise of Ruby & Rails, for instance.

To quote John Carmack:

To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields.


While I'll agree that popularity is not everything, and that often people are stuck with overall subpar tools because those tools do one thing particularly well (php and application deployment comes immediately to mind), I don't think you can reasonably argue that programmers as a whole are so mired in institutional inertia that they can't move to things that are better for them within a relatively fast timeframe. The rapid adoptions of, for example: C, C++, Perl, Java, Python, and Ruby make it painfully obvious that that's not the case.


The difference between the popularity of Emacs, Vim, Git, Linux, and a programming language is that it doesn't matter if anyone else on the planet uses Vim, Git, etc.; they will work the same for me. But the difference in productivity from pre-made libraries for Ruby (52,220 gems cut since July 2009) vs. Scheme (http://planet.racket-lang.org/) is vast, especially when you consider that the difference in power between the two languages is not that great.


> it doesn't matter if anyone else on the planet uses Vim, Git, etc

How does that work? Unless you are creating and supporting Git yourself, git isn't going to work if you are the only person using it. Even then, there would be no GitHub. And I don't know about you, but I won't use Vim if I lose my plugins (directly correlated to how many people are using vim).


Why wouldn't git work? The vast majority of projects I use it for are ones I work on by myself. As for vim, the only plugins I use are syntax ones and Command-T, which I could live without.


And why do the rich libraries and tooling end up being built for languages like Java or C++? Because they require collaboration, and these languages, by handicapping the creation of DSLs actually make collaboration easier, and, more importantly, they make the collaboration between people of different IQs, backgrounds and expertise levels easier.

...sometimes handicapping the smartest person, more or less on purpose, makes him better at working in a team (even a "virtual" team like the one encompassing library creators and library users). It's ugly but it works! Of course, the ideal would be to have a team of very smart people - but when you can't have that, the next best alternative is to use a language that artificially "dumbs down" the smartest guys to a level similar enough to that of the others to allow collaboration... sad, I know :|


I think the problem is really that nobody is being taught in school to constantly redefine their language to create an ideal representation of the problem, when that's what "ideal" code tends to involve. The comfort zone is what you know, and if what you know is Java...

However, I also think that we are making progress in the abstract direction, by dint of increasingly having polyglot systems and DSLs that are part of the standard libraries (e.g. regex and SQL access are expected everywhere now). If this trend accelerates we could have a hockey stick effect and, before we know it, find ourselves programming almost entirely with tiny languages.


As a counterpoint to this article, I'd point to Yossi Kreinin's excellent "My History with Forth & Stack Machines" [1] which has been posted on HN before [2] and which discusses more extensively the particulars of Forth as a powerful toolbox of primitive operations, as well as the use of Forth in actual applications.

[1]: http://www.yosefk.com/blog/my-history-with-forth-stack-machi...

[2]: http://news.ycombinator.com/item?id=1680149


Am I the only one being put off by the CSS? I had to crank the display brightness to the max in order to read it without strain.

Edit: I agree a lot with the content though - for many complex problems, especially the kind where you can't just throw some standard software solution at them, it would be helpful to first develop a suitable DSL. From personal experience I'd say: spend one third of your time budget on developing your domain-specific toolset, even if your managers get nervous. If it works you will finish sooner than you expected and you'll have developed reusable tools and skills; if it doesn't, you can still throw in a crunch and get it done. For me it has always worked, though.


That's solarized dark. I actually just switched to it for my personal blog because I really like it. Apparently I'm not alone. I spend most of my day in a fullscreen terminal with a solarized dark colorscheme with AnonymousPro as my font.


Perhaps it is because I am in a dark room reading in bed with a MacBook Air set at 50% brightness, but I actually really liked the experience of reading that. Though I can absolutely understand where someone might not.


I'm in a bright[1] office with an rMBP. I'm imagining everyone with displays that give less brightness / contrast is going to have a bad time.

[1] It's a Japanese office. These people really like their cold bright neon. I think it has to do with dark East Asian eyes.


You might like Flux. It adjusts your display's color temperature depending on the time of day. No more blinding screens when you open your MacBook at night. :)

http://stereopsis.com/flux/


I'm using Flux as well, but that site was one of the most enjoyable sites to read that I've seen in a couple of years.

The colors aren't that beautiful but it's much less stressful for my eyes, and the highlight colors were spot on as well.

Thanks to emidln for pointing out that it's Solarized Dark. I've used the default Solarized, but I'd skipped the Dark one because of its somewhat ugly colors.


I liked it too, but I almost went blind when I came back to HN.


I actually clicked on the comments to post how much I enjoyed the experience of reading the post (though I was going to put an "off-topic" disclaimer on it). To each his own, I guess, but I really enjoyed the reading experience of the theme.


For me a language is getting close to "powerful enough" when it does not get in the way of you as the programmer when you are developing abstractions of the form that seems most natural to you.

This can cause problems in some cases; when the language steps that far back to allow you the freedom, it is up to you to enforce your own constraints to avoid your abstractions becoming... too abstract (maybe).

Also, in my opinion there is not one form of abstraction that will appeal to every type of programmer. For me it's the freedom to manipulate and define symbolic forms with Common Lisp; for others it might be the expressive type system of Haskell. I don't think you can truly say one way is better than the other.

I think those languages and a few others are giving the tools of abstraction that will not get in the way of you as the programmer.


>enforce your own constraints to avoid your abstractions becoming...too abstract (maybe).

Too abstract isn't necessarily the problem. When I've stepped too far away from standard OO structure, the problem typically becomes organization. In OO you know where just about everything SHOULD be, at least. If you're writing your own rules, it's harder to be certain where new variables get set, or what members you should find in what structures.

It takes a certain level of discipline to not shoot yourself in the foot. Or at least to clean things up before they get too messy.


Depending on the implementation of the OO system in question it could provide you with relatively limited options for abstraction.

I can only really speak from a Common Lisp point of view and the problem there that I was alluding to is the fact that you are so free to produce your own forms of abstractions whether it be mini DSL macros, dynamic dispatch method combinations, data driven programming using first class functions etc. that if you are not careful you can easily obfuscate the real intention rather than making it clearer.


I use Lua, which ALLOWS you to use OO but doesn't enforce it. To say the least. Lua does have first-class functions, including full closures, and the one Lua data structure is a "table", which can act like a full OO class with the right patterns applied, but which generally can be extended anywhere, each "object instance" getting its own unique extra fields attached.

Lua doesn't have macros (well, you can use a library to get an extremely powerful macro language if you want [1]), but the problem with going too far down the macro/DSL route (IMHO) is that the code becomes harder for people who didn't write it to understand. Or even if the code itself is easy to understand (by virtue of being a good DSL), it becomes harder to extend or modify, especially if the modification involves something not explicitly designed for by the original developer.

If you stick with the syntax of a known and well documented language, then the learning curve for new developers is easier. Though even then, the other issues you point out can obscure the code flow, so probably the only thing you can say is that complex code is complex. ;)

[1] http://metalua.luaforge.net/


This is true from an academic perspective, but not from a practical perspective. Typical crud apps don't have complex problem domains. They're just electronic filers with a search field. You do need some dsl's, for formulating views, tying those views to models, and tying those models to persistent storage. That's why most popular frameworks implement pseudo-dsl's for MVC and ORM. MVC and ORM can be solved in a generic way at the framework or even language level, and it could be argued that you shouldn't try to solve them in a project-specific way to ease maintenance. There is little value for many (or even most) projects in having a more powerfully abstractable language.


And maybe, just maybe, programming is more than "typical crud apps". A recurring bias here on HN is to believe that all programmers are web devs, stacking simple business logic on top of a database.

What you call "many (or even most) projects" is alien to a lot of programmers.


> Typical crud apps don't have complex problem domains

I beg your pardon, but I think you're forgetting that there are plenty of business apps that have very complex requirements due to legislation, a myriad of cross-cutting concerns, and audit requirements. Domain complexity is a very real thing, and it often necessitates architecture beyond cramming everything into this week's CRUD framework.


Let's ride with that idea. The problem is that the material about functional programming shows how magnificent it is for math-like problems, but almost never for CRUD-like problems. I can understand some of the genius of Haskell, but I can't see how it makes for a better Django or Ruby on Rails.

CRUD apps have the stigma of being "pathetically simple projects". They are not! Building ERP-like software is damn hard. Making a search engine? There, clever math solves it, and well-understood ideas of scalability are clear. That's NOTHING against the complexity of flying at the speed of a business, with deadlines measured not in months but in HOURS. Underfunded, not enough staff, working on legacy code, and that is only the surface.

Building a language to truly be a MASTER of CRUD apps is something I truly wish for (the closest thing was FoxPro, IMHO).

Despite my love for Python, and what I know from my use of .NET, Ruby and Obj-C, I don't think this is something well solved yet.


CRUD apps are lowest common denominators. They store data and then retrieve it again. Functional languages are no worse at that than imperative languages, and they're better at "math-alike" problems.

I'm not sure that you can come up with a language that is targeted towards CRUD apps, because the entire concept of programming is CRD. The problem is building domain specific abstractions on top of a system that just flips bits and tells you what they say; and by "the problem" I mean "our job."


Do you know FoxPro? The zero friction in manipulating tables (and back then nobody worried about the OOP impedance mismatch), the zero need for an ORM, the complete tooling to make a full app (with a report engine included), the possibility of coding stored procedures in full FoxPro (before the idea of embedding full languages inside db engines became cool) and a lot of other things.

The weak link was the fragile database engine, but more importantly, the killing of the tool by MS.

The mix of a language, db engine, GUI toolkit and other things (similar to, but far better than, Access) made it a tool truly built for this area of CRUD.

Plus, you could work in its REPL and run commands such as BROWSE to see the full data in an editable grid to use as you wish. I can't describe in words how functional the whole thing is. It is similar in power to Smalltalk, but instead of having fully reloadable code you have the full data at your hands.

Check http://books.google.com.co/books?id=k8fN2KMF1j4C&pg=PA54... for a idea of the tool.

A modern version of FoxPro is something I wish for, but without the coupled GUI toolkit and with a better syntax.


Obligatory reference to Ometa [1], a simple almost-PEG parser embedded in a Scheme that allows the VPRI [2] (headed by Alan Kay of Smalltalk fame) to quickly build layers upon layers of DSL's. Their goal:

  to create a model of an entire personal computer system
  that is extremely compact (under 20,000 lines of code) [3]
[1] http://tinlizzie.org/ometa/

[2] http://www.vpri.org/index.html

[3] http://www.vpri.org/html/work/ifnct.htm


>>One exceptionally good resource is Guy Steele's keynote from the OOPSLA'98 conference about Growing a Language.

If you want to know one practical language that embodies this philosophy and is yet mainstream today, it's Perl.

Perl 6 takes it further. In fact, Perl 6 by design is supposed to be such a language. Here is a comment from Larry Wall himself (http://www.perlmonks.org/?node_id=836573), though the whole thread was a troll thread. The first time I encountered this was about 6 months back, while searching for something related to Perl 6 on the internet. And it struck me how amazing this concept was.

I then went back and watched some of his keynotes. I recommend everybody have a look at them.

Perl 6 is definitely going to be a big game changer in this space. And unlike Lisp it's a very practical language for your everyday programmer.


You are comparing a specific language (Perl 6) with a family of languages (Lisp). Some members of the Lisp family are actually very practical; take, for example, Common Lisp.


Perl 7 is going to be another Perl 5 stable release, just wait and see. No one is going to use Perl 6.

See http://news.ycombinator.com/item?id=5179513


Then Perl and Scheme are more similar than I thought!

See http://scheme-reports.org/2009/working-group-1-charter.html


I had high hopes for Perl 6. When the hoopla about versioning Perl 5 came about, I "remembered" Perl 6 again and went on to read about its current state of development. My conclusion: it's a pharaonic project, and these IT projects die more often than not :-( I'm afraid Perl 6 is doomed...


I think this roughly reflects my philosophy in languages as well. You basically have two options: you can bend your problem to your language or your language to your problem. The former is epitomized by Java and Python--all Java and Python code looks the same, regardless of what it's actually doing. The latter is the domain of Lisp and Haskell and Forth and so on.

I'm unabashedly in favor of the latter.

Ultimately, it's easier to reason about some domain in that domain. I would love my code for any particular project to look very similar to what I would write on a whiteboard when solving the problem. If I'm solving something inherently mathematical, I want the code to resemble math notation. If I'm writing some parsing code, I want it to resemble a grammar. If I'm writing an interpreter, I want it to resemble the operational semantics of the language. This fits into more "practical" domains as well: if I'm writing a web app, I want the routing to look like, well, routing and the database code to look as close to the database structure as possible.

Any details in the implementation rather than the domain should be separated from the domain logic. If you're trying to make the domain logic mirror the domain itself, this is essentially inevitable. This makes your code more general and easier to debug. You can redirect your domain logic to a different backend, and it's much easier to figure out whether you have a problem in the implementation or the logic.

More generally, such a strict separation of concerns is extremely important. The difficulty of a problem increases exponentially with its complexity: all the parts affect each other. If you manage to neatly split a problem like this in two, each sub-problem will be less than half as difficult as the main problem. So splitting away the high-level logic from the IO and the memory management and the other implementation details is inherently valuable. In fact, this is valuable even if the sum of both sub-problems is a bit greater than the original problem.

The malleable language approach gives you a very natural way to split your problems up: one part is implementing the DSL, modifying the host language or writing a library--creating abstractions. The other part is actually using these abstractions. I find this sort of split to not only be very useful but also very natural; it mirrors how I usually think about things.

One very interesting recent development taking this idea to the extreme is OMeta. There your program is essentially a tower of DSLs, with the base language just being a simple way to specify the DSLs. This idea really is extreme: for example, the code for implementing TCP is actually the ASCII art diagrams describing it! [1]

Of course, all the usual suspects like Lisp and Haskell are also worth looking at. Embedded DSLs in these languages can go quite far, and they have some of their own advantages to consider.

[1]: http://www.moserware.com/2008/04/towards-moores-law-software...


The problem with writing a DSL for every problem (and every programmer writing different DSLs) is that you end up with all these different languages with different vocabularies and different semantics. It's like a million different variations of Esperanto.

Using a programming language is like speaking in common tongue to every other programmer who knows that language. And most languages are so similar that most programmers can communicate even between languages.

Having code that looks the same, regardless of what it's actually doing, allows everybody to be able to read it.


>Using a programming language is like speaking in common tongue to every other programmer who knows that language

As much as I love DSLs, I couldn't agree more.

>The problem with writing a DSL for every problem (and every programmer writing different DSLs) is that you end up with all these different languages

When writing in a DSL of my design, it is amazing. It feels like the program models exactly how I am thinking about it.

When modifying/maintaining a DSL made by someone other than me, often this can be horrifying, it feels as though this models my thoughts so poorly (and it does, it was designed to model their thoughts).


Programmers already know the DSL for every problem: English (or their native tongue). If they're writing programs for a specific domain without knowing the language of the domain, they have problems that won't be solved by using programming terms they already know.

How many people have learned what MVC means from Rails? Or the term "routing" in the context of a webapp?

One caveat I'll give: the DSL must be well-conceived. Writing a DSL that is dissimilar to both the underlying language and the domain vernacular is obviously horrible.


I couldn't agree more. As much as I like languages that facilitate DSLs, this is an unrealistic view of software development.

To add to what you wrote, you have to also consider that all developers in one team have to agree on a common DSL beforehand, which is an investment of time in itself, and the fact that if there is an unusually big change in requirements mid-development (which, let's be honest, is not a rare event), you might have to go back and change your "pyramid of DSLs" to accommodate it, maintain compatibility and update all your colleagues on the new languages.


How is this any different than a team building a set of APIs?


The syntax required to use the API will be instantly accessible to new developers. Many programming languages provide semantic guidelines for APIs as well. So knowing the language an API is programmed in gives you a huge advantage in beginning to understand it.

With a DSL, all that goes out the window. The syntax of the language is potentially unknown to you. The semantics are an even deeper mystery. You're in terra incognita, trying to understand the mental process of the developer of the DSL before you can even understand what the code written in the DSL intends to do.

The primary goal of programming languages is not to describe how to perform a complex task to a computer. It is to describe how to perform a complex task to other humans in a form that computer can also execute.


I'm not sure I buy the whole syntax argument. I programmed in Forth for a while, and it seemed much easier to build up an understandable vocabulary than with some of the C++ / Smalltalk class hierarchies I have seen. REBOL has dialects that work fairly well. I cannot help but think that DSLs are a refinement of APIs the same way C is a refinement of Assembler.

When I talk to someone about what happens in a system it seems like a DSL flows from that conversation.

"It is to describe how to perform a complex task to other humans in a form that computer can also execute."

I think a DSL does that much better than a set of API calls, particularly when you involve non-developers. That is the point of a DSL.


I see APIs as a less extreme version of DSLs (in that they share a common language, while building a DSL frees you from almost any restriction), but I honestly don't have enough experience building APIs or DSLs to give you a satisfying answer. Sorry.


There's very little difference between a DSL and an API. You already know dozens of APIs. Why not DSLs?


That has been bugging me for a long time. Exactly what is the difference?

And statements like (paraphrasing the quote in the article)

> You shouldn't build an app in Forth. You should build a language to model your problem domain in forth and use it to build the app.

That just doesn't make sense to me. Isn't that the very definition of how to program?


For a lot of languages, I agree, because there's no way to create a DSL as part of the language itself. Think of the difference as one of direction; are you fitting the domain into the language, or extending the language to fit the domain? Most languages are either general-purpose (C, Java) or their domain is fixed upon creation (SQL, Mathematica).

In a Lisp, you can use macros to accomplish things that go way beyond defining a domain-specific library / API. A typical library is restricted to the capabilities of the language, but a DSL in Lisp can extend the syntax itself.

Want to alter the reader so you have a way to express normal code as remote computations? Go for it. Feel like adding new math operators to use matrices instead of scalars? Knock yourself out.

Common Lisp itself did exactly this when object-oriented programming came along; using nothing but macros and closures, the Common Lisp Object System (CLOS) was built.


The difference between DSL and API is the difference between language and library. What is that difference? Has it ever been successfully defined?


"Welcome to our team, here's the spec for our project, and here's the spec for our DSL. See you in a month."


"and here is the spec for our in-house framework".

"and here is the 1M LOC codebase, which was carefully crafted to avoid using any of the complicated frameworks; all you gotta know is the C standard library".


Spec? For an in-house framework? Luxury!


Did I say spec? I meant the doxygen output of hundreds of classes with sparse one-line comments.

Plus one outdated sequence diagram.


One-line comment? What are you, a writer-librarian?


It can be learned. Observe:

       //! \brief Transport diagnostic wrapper
       class TransportDiagnosticWrapper {
(Actually, meaningful one-line comments are a lyric discipline of their own, related to haikus.)


You joke, but a tool I have seen lauded for being really "smart" at automatically generating doxygen comments does exactly that. I guess it at least saves the time of a human doing it by hand . . .


Actually this is how it was when I started at my current position.

We are doing web testing in Clojure, and in the past year our little framework evolved into a DSL of sorts.

On one hand it is great: we have a construct like (with-client [name connection & body]) that, behind the scenes, sets up a virtual machine, registers it as a client of our web service and destroys it after the commands in body are done. I can't imagine doing that setup and teardown by hand.

On the other hand it took me a month to get into it (two weeks learning the language, two weeks messing around with the framework) ... and there are still lots of things only our lead-tester knows how to fix, after our devs decide to change something we were relying on for testing.


>two weeks learning the language

I am curious, did you already know the JVM? When I learned Clojure, I was also learning the JVM, and for me the JVM was much harder than Clojure. So I would say it took me 6 months to learn Clojure, but much of that time was the time it took to learn about the JVM and all the related Java weirdness.

Either way, 2 weeks is very impressive.


JVM? I knew Java, but most of what I did in java was using autocomplete in Netbeans and cursing Maven, when it didn't work.

What really helped me was that we had a "programming paradigms" course in college where I learned Haskell for 3 weeks, which was enough that anonymous functions, map, reduce and using lists for everything didn't feel new.


If you start a sentence with "Welcome to our team" and end it with "See you in a month." whatever you put in between, your company is doomed, and your team is highly dysfunctional.


Abstraction layers are great... if you don't end up spending 90% of your time climbing the stairs that connect them!

Take SQL, a nice DSL for working with relational databases. Yet people ended up writing ORMs (quite ugly creatures imho - something between an elevator connecting two layers (and, as in real life, elevators are a mess when too many people want to use them at the same time) and an intermediate pseudo-level - but they are useful) because of the pain of climbing up and down the ladder connecting the "relational level" with the "OO layer". "Language layers" or DSLs are a powerful but dangerous abstraction, because when people start using them they can't stop; they just make more and more layers like crazy! (The only other abstraction so powerful and so dangerous I can think of are objects - it's amazing how many got them completely wrong so many times in languages like C++ or Java... besides Ruby and Smalltalk I can't think of anything that got them "mostly right"... and Ruby seems to be the most DSL-able language in use too, though this is why I'd take Python over it any day - smart programmers tend to make everything like a freakin DSL in Ruby, and your mind ends up running up and down the ladders until you go insane when reading code).


Your example is actually a pet peeve of mine. SQL is not at all a nice DSL for working with relational databases programmatically. When you think of it as an API, it exposes one method that takes one gigantic string parameter. API design doesn't get much worse than that.

A sane relational database API would let you compose and extend arbitrary queries, which has profound implications for the way you can write your program and abstract away repeated patterns.

[Arel](https://github.com/nkallen/arel) has the right idea in terms of API, but a poor implementation (lots of things are not truly composable where they should be). [ScalaQuery](http://www.scalaquery.org/) purports to have the right idea, but I haven't tried it, and it's very Scala-centric.


The other problem with SQL is the "standard problem": there are so many of them to choose from that when you need to support several databases, the additional layer of complexity and black magic of an ORM can be worth it.


I can't speak for everyone, but the reason I choose to use an ORM over writing all my SQL by hand is time and maintenance cost. I can write one SQL query to perform CRUD operations for a model almost as quickly as I could use an ORM. The problem is that it produces 10 times as much code to read and interpret. If I write the SQL code for another model, 90% of that new code is exactly the same as the last. I am bleeding duplication everywhere.


What is completely wrong about the way C++ does "objects?"

I've recently been digging hard-core into C++ after making such disparaging comments for years and have been rather surprised to see that it's not as terrible as I once thought -- It's actually quite good.

I agree however with your point about dangerous abstractions. DSLs when implemented with careful, thoughtful restraint are rather powerful. SQL being a really good example. There's nothing stopping anyone from writing their database queries by directly sifting through and filtering datums from the domain using the primitive functions and operations of their programming language of choice... but it's clear that SQL is much more powerful and lets us express such problems in a succinct way.

But just like programming languages, once introduced, a DSL is likely to be misunderstood and abused. The problem I believe lies not with the technique but with the users. A double-edged sword is a fine tool but will likely maim an inexperienced user. Which will probably lead them to writing DoubleEdgeSwordWrapperMachineGun classes and then someone else to write a DSL on top of that...


I would add Lua to the latter group (a language used as a tool to build the language you need).

Not NEARLY as cool of an implementation as implementing TCP using ASCII art, but there's a command-line option parser that parses the help text to determine the command line options. [1] For example:

      -- scale.lua
      require 'lapp'
      local args = lapp [[
      Does some calculations
        -o,--offset (default 0.0)  Offset to add to scaled number
        -s,--scale  (number)  Scaling factor
        <number> (number)  Number to be scaled
      ]]

      print(args.offset + args.scale * args.number)
[1] http://lua-users.org/wiki/LappFramework


There's also LPeg [1], which I find much nicer than lex and yacc (or flex and bison) for writing parsers. I just finished up a quick hack to pull out a single file from a git repository into its own repository, and the parsing of the "git fast-export" output is 70 lines of Lua/LPeg. It may sound like quite a bit, but it's a near dump of the BNF you find on the man page [2]. From there, it was simple enough to grovel through the in-memory representation of the git repository and output only what I needed.

[1] http://www.inf.puc-rio.br/~roberto/lpeg/ [3]

[2] http://kernel.org/pub/software/scm/git/docs/git-fast-import....

[3] I actually use the re module [4] as I find the results more readable, and only use the LPeg module when I need something that the re module can't provide.

[4] http://www.inf.puc-rio.br/~roberto/lpeg/re.html


That can be done in basically any dynamic language: see http://docopt.org/
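
For comparison, a rough docopt counterpart to the scale.lua example above (untested, but following docopt's documented usage-string convention):

    """Does some calculations.

    Usage:
      scale.py [--offset=<o>] --scale=<s> <number>

    Options:
      --offset=<o>  Offset to add to scaled number [default: 0.0]
      --scale=<s>   Scaling factor
    """
    from docopt import docopt

    args = docopt(__doc__)
    print(float(args["--offset"]) + float(args["--scale"]) * float(args["<number>"]))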


That's a pretty cool project. Sure, you CAN do it in any language (as another commenter points out, you can do it in C too); anything that's Turing-complete can do it. You could also use C to parse ASCII art and have that create a TCP implementation. It's not the natural way you'd do it in that language, though.

lapp has been around for four years; docopt is (looking at git commits) less than a year old. It strikes me as a natural way to do things in Lua, which is why (I suspect) it was done in Lua earlier.


It can even be done in static languages with an intermediate compilation step.


My problem with Lua DSLs is the verbose function syntax (`function foo() return end`). I'm thinking about experimenting more with Metalua to see if it helps in that regard.

One thing that positively impressed me, though, was coroutines. You can do some very interesting DSL things with them.


Don't you think that with Lisp you also need to bend your problem to the particulars of Lisp? I speak three human languages fluently, and I also have to bend to them when I want to express myself. The more proficient I am, the less the bending is noticed, but it is still there.

I'd say it is the same with all good computer languages. You can express whatever you want in Python and Lisp, but following different grammars.


Totally agree with division into modules lowering complexity (provided it narrows inter-module communication). An interesting idea, that a DSL is a way to do this.

But designing a language is hard, including a DSL, even for experts. E.g. Codd had two attempts at a language for relational algebra, but "mere mortals" couldn't use them. Mathematicians create new notations at the drop of a hat, giving their own definitions to common terms willy-nilly, but this can be an impediment to collaboration.

OTOH, there's an advantage to having many different attempts at a language, inspiring and cross-fertilising each other, until eventually settling on a near-optimal design (if everyone designs their own DSL, this doesn't happen). In practice, you get one group of people experimenting like crazy - academics, visionaries, hackers - and a huge majority that waits to see if something emerges that they can standardise on. Meanwhile, experiments continue. (This is basically the "technology adoption lifecycle".)

The Ruby community develops new DSLs very quickly, in the form of frameworks. I get the impression the rapidity of change is challenging, but the improvements are worth it.


I am studying engineering, which is all about OOP. HN is making me so uncontrollably curious about the functional stuff!


Keep in mind that functional isn't better or worse than OOP. What you see on HN is merely a reaction to the trend in the past decade (a trend still going on today) to teach OOP as the only de facto paradigm to use.

If Functional Programming was mainstream, everyone here would be raving about OOP and its benefits.

Ultimately what you want is to borrow the good ideas of different paradigms to create elegant and practical code.


> Keep in mind that functional isn't better or worse than OOP. What you see on HN is merely a reaction to the trend in the past decade (a trend still going on today) to teach OOP as the only de facto paradigm to use.

To put it another way, OOP is a pretty nice hammer, but so many people have been using it to bash screws in (including ridiculous cases like concrete!) that others have reacted by using screwdrivers for everything (including nails). Pick the right tool for the job.


> Keep in mind that functional isn't better or worse than OOP.

Ah yes, the ol' "it's all relative" argument.


If you're at all interested, I can't recommend http://learnyouahaskell.com/ highly enough.


Would you recommend that or pg's On Lisp?


Learn You a Haskell. I found it easier to read, and often that's what you need when your mind is being warped by the content.


Try it. It's not the silver bullet it sounds like when you listen to people rave about it, but it is worth learning.


Put it in your toolbox. You can apply functional concepts in basically any high-level language.
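
For instance, plain Python already has the basic functional vocabulary; a trivial, purely illustrative example:

    from functools import reduce

    prices = [120.0, 40.0, 75.0]

    expensive = filter(lambda p: p > 50, prices)            # filter
    with_tax  = map(lambda p: p * 1.2, expensive)           # map
    total     = reduce(lambda a, b: a + b, with_tax, 0.0)   # fold/reduce
    print(total)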


I think you're wrong about Python. It has very powerful abstraction and runtime reflection mechanisms that enable DSL-like frameworks, such as Flask or Django, to be implemented in it.
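
As a concrete (if minimal) illustration, here's the standard Flask routing pattern: decorators plus a bit of reflection give you something DSL-ish while staying plain Python.

    from flask import Flask

    app = Flask(__name__)

    # The decorator registers the function in a URL map at import time;
    # "<int:user_id>" is parsed and converted before the function is called.
    @app.route("/users/<int:user_id>")
    def show_user(user_id):
        return "user %d" % user_id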


Evidence to the contrary:

* Heterogeneity of object abstractions: in order of complexity, you use namedtuples, classes, and metaclasses. In Lisp you'd use the same kind of abstraction across complexity levels.

* Similarly, there are lambdas, plain functions, and bound methods, all for pretty much the same thing.

* Which do you use when? How hard is it to change? http://lukeplant.me.uk/blog/posts/djangos-cbvs-were-a-mistak...

* The way so many configuration variables in Django are strings that just "coincidentally" point to a class, module, or function (example below).
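
For example (the setting names below are real Django settings; the values and the resolver sketch are illustrative):

    # settings.py
    EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
    AUTH_USER_MODEL = "accounts.User"

    # Roughly how a dotted-path setting like EMAIL_BACKEND gets turned
    # into a class at runtime:
    from importlib import import_module

    def import_string(dotted_path):
        module_path, _, attr = dotted_path.rpartition(".")
        return getattr(import_module(module_path), attr)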

I've been using Python for 10 years, and Django since pre-1.0. My current startup is built on it. The seams start to show.


Actually, what I like about Python frameworks is that they don't look like DSLs, they manage to be clean, explicit, casual and understandable Python code, even if you may consider them a DSL at a conceptual level.


LOL.

You can bend your statement to your language, or your language to your statement. All essays look the same, regardless of the statement being made. English, Chinese, Persian.


[deleted]


I think you misread that.

> you can bend your problem to your language

> The former is epitomized by Java and Python--all Java and Python code looks the same, regardless of what it's actually doing.

He was saying that Python and Java don't bend as languages toward the domain; that is, they make you bend the problem toward them.

> or your language to your problem.

> The latter is the domain of Lisp and Haskell and Forth and so on.

And that Lisp, Haskell and Forth are the languages that bend.


You're misreading. OP is saying that in Java and Python you must bend your problem to the language. So you both agree.


It's hard to argue with this article - it's self-evident that languages lock us into abstractions that aren't powerful enough to comfortably support the increasingly ambitious applications of code.

Object-oriented programming was a big step because it's so heavily modelled on hierarchies, which have since been found to be one of the most important principles in human reasoning.

I'm no expert in the area, but I'd love to hear ideas on other human reasoning principles or models that a language could be influenced by (particularly if you know about neuroscience or AI).


Agreed, to a point. What kind of abstractions are sufficient, or useful? What happens when they leak? What happens when you build the wrong abstraction, and have to go back and build something slightly different from the ground up? What about when your abstraction works, but is so far removed from the actual execution environment that it's so slow as to be unusable?

Yes, abstraction is a wonderful, useful thing that lets us think and do all sorts of wonderful, useful things - our high level programming languages and networking protocols are abstractions over lower level programming languages and networking protocols (repeat a few times) which are abstractions over the physical hardware they run on, and that's great. But abstractions are never perfect, and what you don't know about the lower levels of the stack will bite you, which is a good argument for using as few and as thin abstractions as you can to get your thing done (reasonably ... I'm not about to write this comment in binary ASCII, say).


He mentions the point when the abstractions in the language are no longer sufficient. It seemed to me that this was more an argument in favor of using languages that are easily extended rather than languages full of abstraction with few options for extension. If you use a language like the latter, you'll still eventually reach a point where the abstractions aren't sufficient, but if you use a language like the former, though you'll reach that point sooner, you'll be better equipped to handle it.

edit: word use


Maybe it's just that I don't have enough experience, but I've never met, and can't think of, any problem that made me think I needed to write a programming language.

Most languages I know allow me to define whatever classes/data structures I need. What kinds of problems is the OP referring to? Some kind of extensive math/scientific problem?


This is where Common Lisp comes in: through macros you can extend the language to include the abstractions your application needs. PG talks about this extensively. The issue is that doing this requires a deep understanding of the language, and many programmers are not able to grasp what the CL toolbox gives them.


Overall a great article, particularly this exceptionally elegant turn of phrase -

'It is known since the early ages of programming, most elegant programs are built as layers of "languages" or "dictionaries". These are represented by abstractions in a program in order to facilitate expressing the ideas in a way that is more related to the domain.'

Overall, I like to draw a line between canonical/ontological representations of models and representations of models designed for calculation. The ontological representation oftentimes exists in a relational database or in an explicit OWL/RDF ontology, but may also exist only ideally, where the storage and calculation representations of the model are each customized for their particular use, especially at large scale.

The following concept I almost agree with...

'it is our job to express the abstractions of the domain for the best economy of the expression'

As programmers (vs. ontologists), I would say that our job is first to express the abstractions such that they enable efficient, maintainable computation that hopefully expresses those abstractions economically. I feel that programming languages don't lack power; they just make it quite complicated to express complex abstractions.


Isn't this really describing what it means to write a program? Assembler is really just mov, add, branch, etc., but some really complex and useful programs have been written in assembler. Or am I missing something? I had to squint to read the text.


"One simple example from everyday life: Imagine that you don’t have in your vocabulary notion of a sum. Therefore, if you would like to tell someone to do a sum of some things you would always have to express the process of summation in details instead of just telling and then you sum these numbers."

If you can encapsulate an undefined idea into one stream of commands, can't you just create a function to do it? What does that say about programming languages? Maybe I missed the point.


I have worked at places that used the same language but had totally different custom business-logic class libraries and practices. Learning a new DSL probably would have been easier, since it would have simplified the <Biz Speak> => <Code> step.


Could we say, DSLs are the new frameworks?

For example, for web dev in Java you'd use an MVC framework.

In Clojure you get a query DSL for your model, a templating DSL to create your view, and a routing DSL to specify your controller.


Low-contrast typography is not powerful enough.


And yeah, Java is popular and doesn't really abstract much. I guess interfaces are a kind of rough abstraction.


I agree with the general approach, but isn't this just bottom-up design?


So this essentially sounds like "You gotta work at it to get it done."


Have you tried Lisp?


Check out JetBrains MPS.



