
> Algebraic types? Dependent types? You'll never see them. They're too ... research-y. They stink of academe, which is: they stink of uselessness-to-industry.

One might think that, since closures are finally entering the mainstream (after what, 5 decades?), there is hope for those things to come as well.

But then I saw Swift. Built-in support for an Option type, so one can avoid null pointer exceptions. At the same time, this language manages to recognize the extreme usefulness of algebraic data types, without using them in their full generality. Like, why bother with a generic feature when we can settle for an ad-hoc one?
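To be clear, the Option part is genuinely good: the nil case is in the type, so you can't touch the value without handling its absence. A rough sketch (names made up):

  let ages = ["alice": 30]
  let age: Int? = ages["bob"]      // dictionary lookup yields an Optional, not a bare Int
  if let a = age {
    println("age is \(a)")         // only reachable when a value is actually present
  } else {
    println("no such person")      // the nil case must be handled explicitly
  }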

I'd give much to know what went so deeply wrong in our industry that we keep making such basic mistakes.



> But then I saw Swift. Built-in support for an Option type, so one can avoid null pointer exceptions. At the same time, this language manages to recognize the extreme usefulness of algebraic data types, without using them in their full generality. Like, why bother with a generic feature when we can settle for an ad-hoc one?

Swift has the syntax to define arbitrary algebraic datatypes, even if it doesn't yet work in the beta versions of the compiler.
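A toy example of the syntax (made-up types, nothing from the standard library):

  // a small sum type with associated values
  enum Shape {
    case Circle(Double)                // radius
    case Rectangle(Double, Double)     // width, height
  }

  func area(s: Shape) -> Double {
    switch s {
    case .Circle(let r):
      return 3.14159 * r * r
    case .Rectangle(let w, let h):
      return w * h
    }
  }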


It seems likely that 1.0 will hit before this really holds true. In Beta 6, I'm told, you can easily crash the compiler with recursive generic ADTs, and most have to travel "through" some heap type to compile at all.

Furthermore, Swift doesn't support enough laziness/deferral/coalgebraic formulation to have, say, an infinite stream type without breaking GCD. These will probably be fixed in time, but Swift's ADT support is still pretty experimental to say the least.
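The workaround I've seen people use goes roughly like this: wrap the recursive position in a class so the payload is a fixed-size reference. Box here is a made-up helper, not anything from the standard library:

  // hypothetical Box helper that moves the recursive payload onto the heap
  final class Box<T> {
    let value: T
    init(_ value: T) { self.value = value }
  }

  enum Tree<T> {
    case Leaf
    case Node(Box<Tree<T>>, T, Box<Tree<T>>)   // the recursion travels "through" a class
  }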


Wow, where did you get Beta 6? I only have 5.


This is based on comments someone else made while trying to compile some of my experimental Swift modules.


> I'd give much to know what went so deeply wrong in our industry that we keep making such basic mistakes.

I don't think we can all agree on what counts as progress. Some saw exceptions as the advance in error handling we needed, while Go reverts to error codes. People still think Go is superior, for different reasons. I think both suck, and prefer conditions and restarts as in Common Lisp.

It is rather difficult to build a programming language from a set of axioms we can all agree on.


> Some saw exceptions as the advance in error handling we needed, while Go reverts to error codes.

Just a note that Go generally uses strings for error handling, not error codes. This avoids the need to look up the meaning of each error code in a table somewhere.


Testing strings for errors is another example of a left turn in Go's design, especially in a time of localized applications and OSes that change messages between versions.


How are errors typically localised?


Not.


> I'd give much to know what went so deeply wrong in our industry that we keep making such basic mistakes.

The vast bulk of the industry is much more practical than theoretical. They don't care about your theories of how languages ought to be built. They care about solving the problems that are actually hindering programmers who are trying to write programs.

"But", I hear you say, "null pointer exceptions are one of those problems!" True. "And algebraic types can fix that!" Also true. But here's the thing: (Almost) Nobody cares. Nobody thinks that algebraic types are a price worth paying to fix null pointers.

Do not automatically assume that you are right, and that everybody else is too stupid to see it. Instead, try to expand your mind far enough to see that they may have a better grasp of the trade-offs that confront them than you do. They think your solution doesn't work in their world. Bemoaning their stupidity is the lazy way out. Instead, try to find out why they think that.

I think the problem is not with the industry. The problem is with your expectations of the industry.


In other words, the industry is thinking short term. Often very short term. They only see what's in front of them: the next problem to solve, the next developer to hire, the next library to use…

So, the industry makes this analysis: yes, I could spend a few days learning about sum types, but it won't save me nearly as much time in avoided null pointer exceptions over the next month. So, no, it costs too much.

I guess we just have to live with this systemic irrationality. I guess long-term thinking is just too much to expect. I guess decades-old scientific results are too bleeding edge to risk employing.

I can only think: not fast enough!


No, you either don't understand what I said, or you're trying to make it say what you want. You're not at all saying what I said in other words.

You think the industry is too short-sighted to know what's good for it. I'm saying that you're too narrow-minded to know what's good for the industry.

You think you know better than the industry what the industry ought to be like. I think you're wrong. I think the people in the trenches know better than you how to solve the problems they face. A choice can be different from yours without being stupid or short-sighted.


Swift has discriminated unions, and IIRC Optional is one example. It's defined as something like:

  enum Optional<T> {
    case None
    case Some(T)
  }
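And Int? is just sugar for Optional<Int>, so you can pattern-match it like any other enum:

  let x: Int? = 3
  switch x {
  case .Some(let n):
    println("got \(n)")
  case .None:
    println("nothing")
  }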


> closures are finally entering the mainstream (after what, 5 decades?)

8 decades, to be precise. They predate electronic computing - Church published his original paper in 1932.

It still amazes me that it's only in the past few years that widely-used languages have begun adopting them.


Elements of the idea were in Church's work, but I think it wasn't until the 1960s that anyone thought of closing a lambda by using the lexical environment to bind its free variables. And it wasn't until the 1970s that Sussman and Steele really drove the idea home with Scheme.

Before then, even LISP didn't have closures. It was based on Church's ideas, and it did need to figure out how to bind a lambda's variables in order to evaluate it. But it was a dynamically scoped language, so capturing free variables from an anonymous function's lexical scope (which is what a closure does) didn't really make sense for it. Instead it just used the execution context to bind variables.
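A concrete way to see the lexical part, in the language this thread happens to be about (just a sketch): the closure below captures count from the scope where it is written; under dynamic scoping it would instead look up whatever count is bound at the call site.

  func makeCounter() -> () -> Int {
    var count = 0
    return {
      count += 1      // captures count from the enclosing lexical scope
      return count
    }
  }

  let next = makeCounter()
  next()              // 1
  next()              // 2: the captured variable lives on with the closure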


I don't think everyone knew how to implement lambda properly, but the calculus was clear enough. Hell, the original calculus was both lexical and substructural!



