
Personally, I don't think the gist posted by tel qualifies as "insanely complex" by any stretch.

But a more serious question is: how do you know if "will generally work" applies to your particular case? The short answer is that you don't and you end up having to test library-provided functionality as part of your own test suite. You can argue that you'd have to do the same in e.g. Haskell, but really the surface area of potential failures is hugely reduced by the fact that you know whether a function "f: a -> b" can have side effects and that given something shaped "a" it will give you something shaped "b" (modulo termination, but that applies in any non-total language).
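For example (a standard illustration, not from the thread's gist), parametric polymorphism alone pins down a lot of behavior:

```haskell
-- A pure function of this type cannot inspect its argument, perform
-- I/O, or invent values: ignoring non-termination, it must be the
-- identity.
mystery :: a -> a
mystery x = x

-- Similarly, anything of type [a] -> [a] can only drop, duplicate,
-- or rearrange elements of its input; `reverse` is one inhabitant.
shuffle :: [a] -> [a]
shuffle = reverse
```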

This is not merely a theoretical issue: Compare the amount of documentation you need when using some random library in Haskell vs. e.g. JavaScript. The types act as compiler-checked documentation. In JavaScript it's really a crapshoot whether a library has sufficient documentation, and it's always specified in an ad-hoc manner (which naturally differs from library to library).
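To make that concrete (my own toy example, not from any particular library), the signature already tells you most of what prose docs would:

```haskell
-- The type says it all: pure, total over its input, and honest about
-- possibly returning nothing. No hidden effects are possible.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- Anything that does perform effects must advertise that in its type:
loadFile :: FilePath -> IO String
loadFile = readFile
```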




I think if you sit down with the code Clojure uses to implement transducers and compare it directly against the Haskell code and _don't_ come to the conclusion that the Clojure code is simpler, there might be some motivated cognition at work. I don't think it's trivial for someone with Haskell experience even to verify that the code implements the same (or a similar) thing.

I also believe it lacks sufficient polymorphism, for instance around the output of the step function, and lacks safety around (notably, but not exclusively) the application of the step function (i.e. ensuring it is applied only to the last value of the transducer, not just to something of that type). So this would sit squarely in the tries-to-be-simple category, despite its use of profunctors (I don't know why those were used here; they're not a super-standard abstraction).

But this is all beside the larger point. In Clojure's case, things generally working is learned through philosophical induction -- just seeing something work a bunch of times and understanding the mechanisms at some level of detail. That's not the same as having a machine-verified proof, but it's also not the same as not knowing at all.


It depends on what you mean by "simpler", I suppose.

> there might be some motivated cognition at work.

Did you really just say that?

> I don't think it's trivial for someone with Haskell experience to even verify that code implements the same (or a similar) thing.

No, not to verify that it does the same thing. For that you'd have to understand exactly what the Clojure version does too. I'm quite rusty on Clojure, so I can't make a fair comparison of how easy it is to understand vs. the Haskell version. However, you'll note that I didn't actually say anything about the Clojure version being harder to understand.

In fact, my point is that I don't even have to understand the implementation of the Haskell version: I just have to understand its interface (i.e. the types) and have a general idea of what it's supposed to do (in fuzzy terms).
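As an illustration of that point (my example, not from the gist), the type of a standard function like foldr is often enough to use it correctly without ever reading its source:

```haskell
-- foldr :: (a -> b -> b) -> b -> [a] -> b
-- The type alone says foldr can only produce a b by combining list
-- elements with the step function, starting from the seed value.
sumWith :: Num b => (a -> b) -> [a] -> b
sumWith f = foldr (\x acc -> f x + acc) 0
```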


Outside of the opinions being expressed, some technical comments.

1. I'd like to learn more specifically what kind of output polymorphism you would like. Right now the outputs are universally quantified, but linked. I've written it in other ways as well, but I could not find any reason in practice to use that added complexity.

In particular, the universal quantification does force you to use the output of the step function, since there is no other way (in scope) to produce values of the needed type. For that use case, at least, RankNTypes is exactly what you want.
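A minimal sketch of that point (simplified, not the gist's actual types): with the accumulator universally quantified, the only way to produce a value of the abstract type r is through the supplied step function, so the quantifier enforces the discipline for free.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The accumulator type r is chosen by the *caller*, so the transducer
-- body can only obtain an r from its argument or by applying step.
type Transducer a b = forall r. (r -> a -> r) -> (r -> b -> r)

-- Fine: every r is threaded through step.
duplicate :: Transducer a a
duplicate step r x = step (step r x) x

-- Rejected by the type checker: r is abstract, so we can't conjure one.
-- cheat :: Transducer a a
-- cheat _ _ _ = 0
```

For instance, `foldl (duplicate (+)) 0 [1,2,3]` feeds each element to the step twice, yielding 12.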

2. Profunctors are not really needed at all. It was more to demonstrate that Moore and T have nice structure. The same holds for Category (and my note about Arrow). In all cases, this is just giving nice organization to regular properties of T and Moore.
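For readers unfamiliar with the structure being referenced, here is a minimal sketch (my own version, not the gist's definitions) of a Moore machine and its dimap, which is all a Profunctor instance amounts to:

```haskell
-- A Moore machine: a current output plus a transition on inputs.
data Moore a b = Moore b (a -> Moore a b)

-- The Profunctor structure: contravariant in inputs (pre-compose f),
-- covariant in outputs (post-compose g).
dimapMoore :: (a' -> a) -> (b -> b') -> Moore a b -> Moore a' b'
dimapMoore f g (Moore b next) = Moore (g b) (dimapMoore f g . next)

-- Drive a machine with a list of inputs and read its final output.
feed :: Moore a b -> [a] -> b
feed (Moore b _)    []     = b
feed (Moore _ next) (x:xs) = feed (next x) xs

-- Example machine: counts its inputs.
counter :: Moore a Int
counter = go 0 where go n = Moore n (\_ -> go (n + 1))
```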


However, it only helps if you thoroughly understand the type system. Since Haskell has a rather complex type system, the relative lack of documentation in English is a barrier to entry for the uninitiated. (Also, aren't the types often omitted due to type inference?)


It's actually considered bad practice in Haskell to omit type signatures for top-level declarations. Within declarations, though, it's typically best to omit them unless they're needed. I'm not entirely sure why this is where the community landed; you could get away with providing explicit types only for exported functions, and no one would really care, I think. You can always use your IDE/REPL to tell you the types whether they are explicit or not.
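Concretely (a toy example of the convention; GHC's -Wmissing-signatures warning nudges you toward it but nothing enforces it):

```haskell
-- Top level: an explicit signature, which doubles as documentation.
average :: [Double] -> Double
average xs = total / count
  where
    -- Within the declaration, types are inferred and usually omitted.
    total = sum xs
    count = fromIntegral (length xs)
```

And in ghci, `:type average` (or `:t`) reports the signature whether or not it was written explicitly.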

[EDIT: This didn't really address your core point. I'm not sure how to do that precisely, but here is a shot. I think the ideal Haskell use case is active collaboration between the programmer and the compiler. The lack of English-language docs is seen as less important than clear contracts within that interaction. The descriptive statement "if you try to operate without the compiler and reason using English-language understandings, Haskell will be more frustrating for you than it is for people in the Haskell community" seems both true and fair. Suffice it to say, most understanding is most easily expressed in natural language, because that's how most communication happens, so natural language is simply how we first hear about these ideas.]


Yes, I agree. This dialog with the compiler may help library authors and library callers, but it doesn't help readers. A beginner is likely to be reading code on the Internet, in tutorials, blog posts, on stack overflow, or in papers. Perhaps a particularly active, determined reader might also try out the code they read.

It's common to automatically syntax-highlight online code but not to type-annotate or cross-reference it (adding links between declarations and usages). But perhaps the tools will get better.


Another way to say this is that Haskell encourages what Rich Hickey would call "guardrail programming". The types give you hints as to how the different pieces of the program fit together. When you finally get everything to compile, it should hopefully work straight away.


That is definitely true, and a flaw. I believe there's a good middle ground to be had once one considers the difference between a tutorial and documentation. In the details, the docs really are almost best left to the types (they're machine-checked, highly informative, and the reality of what you'll be handling when you use the code), but many higher-level matters are difficult (though not impossible) to intuit from the types alone. Further, these "intentionalities" tend to be more stable across changes and so are well served by written documentation.

But as usual, people dislike writing docs. Nobody dislikes having a new contributor write docs for their code, though :)



