
But why the hell get into all that? We've got a job to do.

Your comment here is a great example of why people don't bother giving Haskell the time of day. I've already got business problems and performance problems, why give myself type system problems too? You're talking about adding on all these layers of complexity and abstraction, and the benefit is more "pureness". What do I care about pureness? I'm writing business code, or unix code, it's not going to be pure either way.

You'll claim that the type system makes all of the business problems just go away magically because your type system has reached a Skynet level of self-awareness, but we both know you're gonna be debugging the same crap at the end of the day, except now you have 12 different monads, type constraints and a homegrown DSL in between you and the problem.

I'd prefer to work with a simpler environment, and it doesn't make me "too dumb to understand haskell". It just makes me "more productive than if I were working in haskell".




> why give myself type system problems too?

Exactly! Why use Go and have type system problems? Chase nil bugs in the middle of the night, when my type system could have caught them all at virtually no cost?

Why use Go and have type system problems like lack of sum types and pattern matching, having to waste my time emulating them with enumerated tags or clunky type switches?
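To make the contrast concrete, here's a minimal sketch (toy, hypothetical names) of what sum types, pattern matching and Maybe buy you in Haskell -- the compiler rejects a match that forgets a case, and there's no nil to chase:

    -- A closed sum type: the compiler knows every case.
    data Shape
      = Circle Double        -- radius
      | Rect Double Double   -- width, height

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h
    -- Omitting a case here triggers an exhaustiveness warning (with -Wall).

    -- Absence is explicit in the type; there is no nil to dereference.
    lookupUser :: Int -> Maybe String
    lookupUser 1 = Just "alice"
    lookupUser _ = Nothing

    greet :: Int -> String
    greet uid = case lookupUser uid of
      Just name -> "hello, " ++ name
      Nothing   -> "no such user"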

What you're calling "layers of complexity and abstraction" are just "layers of abstraction" -- Haskell code to solve a problem tends to be simpler than Go code to solve the same problem. By simplicity, I'm talking about mathematical simplicity here. Not ease of learning. Simplicity is hard. But it pays.

I don't claim that the type system makes all problems go away magically, but it helps catch a far greater chunk of the errors.

> we both know you're gonna be debugging the same crap at the end of the day

Actually, no. If you had actually used Haskell, you'd know that debugging runtime problems is a far rarer phenomenon. It happens, but it's pretty rare.

I don't ever debug null problems. I almost never have to debug any crashes whatsoever. I don't debug aliasing bugs. The vast majority of the sources of bugs in other languages do go away.

> I'd prefer to work with a simpler environment, and it doesn't make me "too dumb to understand haskell"

Who claimed you're "too dumb to understand Haskell"? If you're smart enough to write working Go programs, you're most likely smart enough to learn Haskell. But learning Haskell means learning a whole bunch of new useful techniques for reliable programming, and that isn't easy.

People who come to learn Haskell expecting it to be a new front for the same concepts they already know (e.g., like Go is) are surprised by how difficult it is -- because it isn't just a new front. There is a whole set of new concepts to learn. This set isn't really larger than the set of concepts you already know from imperative programming, but the overlap is small, and you forget just how involved the things you already know are.


Hey, this is a pretty late response but regarding:

"I don't ever debug null problems. I almost never have to debug any crashes whatsoever. I don't debug aliasing bugs. The vast majority of the sources of bugs in other languages do go away."

I think this is the red herring at the heart of the problem. Those bugs really aren't a big deal, they happen rarely once you're proficient and they are quickly solved on the rare occasion when they do happen.

I'm talking about logic bugs, the kind that your compiler isn't going to find, or even that a "sufficiently smart compiler" couldn't find because it's a misunderstanding in the specification that you have to bring back to the product owner for clarification. Or bugs that occur when 2 different services on different machines are treating each other's invariants poorly. Those are the bugs I spend time on.

I haven't spent any time at all with Haskell, really, but it seems like a poor trade off to have to learn a bunch and engineer things in a way that's more difficult in order to prevent the easiest bugs.


> I think this is the red herring at the heart of the problem. Those bugs really aren't a big deal, they happen rarely once you're proficient and they are quickly solved on the rare occasion when they do happen.

This is simply not true. I don't only work in Haskell. I also work with many colleagues on C and on Python.

Virtually every bug in C or Python that we encounter, including the ones we spend a significant amount of time debugging, is a bug that cannot happen in the presence of Haskell's type system.

> I'm talking about logic bugs, the kind that your compiler isn't going to find, or even that a "sufficiently smart compiler" couldn't find because it's a misunderstanding in the specification that you have to bring back to the product owner for clarification. Or bugs that occur when 2 different services on different machines are treating each other's invariants poorly. Those are the bugs I spend time on.

If you had experience with advanced type systems, your claims here would carry more weight. People who don't know advanced type systems tend to massively understate their assurance power. For example, two different communicating services might use "session types" to verify that their protocol maintains its invariants. Or the types might be set up so that the only programs that type-check are ones that reject invalid inputs violating the invariants.
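To give a flavour of the session-types idea, here's a toy sketch (not a real session-types library; all names are made up) where a connection carries its protocol state in its type, so sending data before the handshake simply doesn't compile:

    {-# LANGUAGE DataKinds, KindSignatures #-}

    data State = AwaitingHello | Established

    -- The phantom parameter tracks which protocol state we're in.
    newtype Conn (s :: State) = Conn Int  -- hypothetical socket handle

    -- The handshake is the only way to obtain an Established connection.
    hello :: Conn 'AwaitingHello -> IO (Conn 'Established)
    hello (Conn fd) = do
      putStrLn "HELLO"  -- stand-in for the real handshake
      return (Conn fd)

    send :: Conn 'Established -> String -> IO ()
    send _ msg = putStrLn ("SEND " ++ msg)

    -- ok:  hello c >>= \c' -> send c' "payload"
    -- bad: send c "payload" where c :: Conn 'AwaitingHello  -- type error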

> I haven't spent any time at all with Haskell, really, but it seems like a poor trade off to have to learn a bunch and engineer things in a way that's more difficult in order to prevent the easiest bugs.

They aren't the "easiest bugs" at all.

For example, consider implementing a Red Black Tree.

In Go, imagine you had a bug where you rotate the tree incorrectly, such that you end up with a tree of the wrong depth -- surely you would have considered this a "logic" bug. One of the harder bugs that you wouldn't expect to catch with a mere type system.

In Haskell, I can write this (lines 27-37):

https://github.com/yairchu/red-black-tree/blob/master/RedBla...

to specify my red black tree, with type-level enforcement of all of the invariants of the tree.

With just these 10 lines, I get a compile-time guarantee that the ~120 lines implementing the tree operations will never violate the tree's invariants.

This logic bug simply cannot happen.
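For readers who won't click through: the encoding is roughly this (a sketch of the idea, not the linked file verbatim -- I'm using a hand-rolled Nat for the black-height). Each node's type records its colour and black-height, so a rotation that creates a red-red violation or unbalances the black-height is a type error:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Colour = Red | Black
    data Nat = Z | S Nat

    data Node (c :: Colour) (n :: Nat) a where
      -- A leaf is black, with black-height zero.
      Leaf   :: Node 'Black 'Z a
      -- A red node must have two black children of equal black-height
      -- (no red-red violation) and doesn't add to the black-height.
      RedN   :: Node 'Black n a -> a -> Node 'Black n a -> Node 'Red n a
      -- A black node takes children of either colour, equal black-height,
      -- and increments it.
      BlackN :: Node cl n a -> a -> Node cr n a -> Node 'Black ('S n) a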

Learning a bunch of techniques is a one-time investment. After that, you will build better software for the rest of your career. Debugging far fewer bugs for the rest of your career. How could you possibly reject this trade-off, unless you expect a very short programming career?


I meant less interesting logic bugs, like "Oh we never considered the intersection of these 3 different business use cases".

I could see a couple ways where the type system could be more powerful than unit tests, but only to the extent that your unit tests didn't cover some obvious cases to begin with. Why not just write unit tests?

As for how I could possibly reject the trade-off... I mean, nobody's gonna hire me to code Haskell and my side projects are too systemy and not lispy enough to even consider it.

Thanks for the code sample though, I plan on looking at this more later tonight and getting a feel for it (barely glanced just now).


> Oh we never considered the intersection of these 3 different business use cases

Take a look at the history of any repository near you, for a project that uses C, Python or Java.

Review bug fix commits. See how many of them relate to "business use cases" and how many relate to implementation bugs. I believe you'll find the latter is far more common.

Even in the "business use cases", enforced invariants will be a tremendous help. Across the infinite possibilities of all the use cases, none will be able to break an invariant enforced by the type system.

When you set out to prove a property of your program, you will end up finding bugs, almost regardless of what properties you are trying to prove.

Writing unit tests is also useful. But if I can have 10 lines in my Red Black Tree that mean I don't have to write any test whatsoever for the tree's invariants -- I saved myself from a whole lot of work writing and later maintaining tests with every change.

Generally, to get confidence levels from unit tests similar to what you get from types, you'll need to write many more tests. If I had to choose between trusting a well-typed system written in Agda (which is similar to Haskell but has an even more powerful type system) with only the most trivial testing done, or a highly tested system written in a dynamic language or one with a much weaker type system, I'd definitely trust the Agda one more.

Or, choosing between my 10 lines of type-level code and hundreds of lines of tests for the invariants of the tree: of course the 10 lines of code are far more reliable and easier to maintain.


Honestly, in my case, for my day job, it's way, way more "business use case" or more frequently "misunderstanding between 2 services" than it is an implementation error. We catch 90% of implementation problems with unit testing and/or just plain sanity checking it before release. Maybe Haskell could help us by making unit testing easier/unnecessary but of course there's no switching at this point.

We're probably a bit of an edge case, being a very service-oriented architecture (1,000 servers, 6-7 major classes of server, handling 10B (yes, B) requests a day). Most of our bugs consist of a flawed assumption that crosses 3-4 service boundaries on its way off the rails. I'll admit I'm ignorant of Haskell, but I just don't see a type system fixing that for us.


Did you actually look at commits to reach this conclusion?

Also, do you only commit/amend after doing extensive testing? Or do you also commit the results of debug sessions as separate commits?

People have various biases that make them tend to remember some things and forget others. It is easy to have 100 boring implementation bugs and 1 interesting bug, and end up remembering mostly the interesting ones.

Also, can you give me a couple of examples of "flawed assumptions" across services?


Well, for example, yesterday I was doing some UI work on a project I'm totally unfamiliar with. My bugs were caused by bad SQL, a null pointer exception, and some JS silliness. Most of my time was sucked up in figuring out a requirement. The NPE took about 10 minutes out of my day, and the fix for it never made it into a commit message because it was: write some code, run it, oh shit, forgot to initialize that, fixed.

Flawed assumptions across services tend to have to do with rate limiting, configuration mismatches, what happens when one class of service falls behind and queues fill up, stuff like that.


What kind of bad SQL? Most forms of bad SQL can be ruled out by a well-typed DSL. JS silliness is also a type-safety issue. You can use Fay, Elm, GHCJS or Roy to generate JavaScript from a type-safe language.

Some NPEs take just 10 minutes (not negligible), but there are also some that are expensive.

Figuring out the reason is sometimes hard when the code and its assumptions are badly documented.

Fixing NPEs is sometimes hard for silly reasons, such as having to touch third-party or "frozen" code.

Also, NPEs can become extremely difficult when the code is bad in the first place.

Things like rate limiting and queue lengths can be encoded in types. You can use type tagging on connection sources/destinations to make sure you only hook up things that match rates/etc.
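As a toy sketch of the tagging idea (hypothetical names; a real system would carry more than a label):

    {-# LANGUAGE DataKinds, KindSignatures #-}

    data Rate = Slow | Fast

    -- The phantom tag records each endpoint's rate class.
    newtype Producer (r :: Rate) = Producer String  -- service name
    newtype Consumer (r :: Rate) = Consumer String

    -- Hooking two services up requires their rate tags to match.
    connect :: Producer r -> Consumer r -> IO ()
    connect (Producer p) (Consumer c) = putStrLn (p ++ " -> " ++ c)

    -- connect (Producer "frontend" :: Producer 'Fast)
    --         (Consumer "archiver" :: Consumer 'Slow)  -- type error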


Man, just when I thought we might agree on something.

A DSL? SQL IS a freakin DSL. Why would I put another layer of abstraction between me and it? Just more places for things to go wrong.

http://en.wikipedia.org/wiki/Inner-platform_effect

Anyways, that particular problem yesterday wasn't a compilation problem; it was due to my own misunderstanding of some pre-existing data.

You're proposing a more complicated way of doing things with the idea that eventually we'll get to the promised land and things get simple again. I've just never seen it happen. Seen the opposite plenty of times.


SQL is a DSL indeed, but building SQL strings programmatically is an awful idea. You should build SQL queries structurally. The DSL to build SQL should basically be a mirror of the SQL DSL in the host language.

The DSL will guarantee that you cannot build malformed queries. It can also give guarantees you design it to give.

I'm not talking about replicating a database -- but just wrapping its API nicely in a type-safe manner.

I am proposing wrapping type-unsafe APIs with type-safe APIs. This adds complexity in the sense of enlarging the implementation, but it also adds safety and guarantees to all user code.
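As a minimal sketch of what I mean (toy types, far simpler than a real library like esqueleto): queries are values, not strings, so a structurally malformed query is unrepresentable and escaping happens in exactly one place:

    newtype Table  = Table String
    newtype Column = Column String

    data Query     = Select [Column] Table (Maybe Condition)
    data Condition = Eq Column String

    -- Rendering is centralized, so quoting can't be forgotten at call sites.
    render :: Query -> String
    render (Select cols (Table t) mcond) =
      "SELECT " ++ commas [c | Column c <- cols] ++ " FROM " ++ t
        ++ maybe "" whereClause mcond
      where
        commas = foldr1 (\a b -> a ++ ", " ++ b)  -- assumes >= 1 column
        whereClause (Eq (Column c) v) = " WHERE " ++ c ++ " = " ++ quote v
        quote v = "'" ++ concatMap escape v ++ "'"
        escape '\'' = "''"
        escape ch   = [ch]

    -- render (Select [Column "name"] (Table "users")
    --                (Just (Eq (Column "id") "42")))
    --   == "SELECT name FROM users WHERE id = '42'"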


Hmmm... not quite a flashback. If it were a flashback, you would have written "C" instead of "Haskell", and would have been thinking of assembly language rather than C or C++.

You have the same problems no matter what; the question is whether you want a tool that helps with them, or one that doesn't.



