This seems nice for prototyping, since it'd allow a Lisp-style approach to changes and refactoring, where you can incrementally refactor, and run the intermediate results. Traditionally that's been difficult in Haskell, because if you e.g. change the type of a function, you have to change or comment out all the code that calls that function before it'll compile. That's particularly annoying when doing sketchy speculative prototyping, during which it's common to frequently change your mind about how things should be put together.
I have found it ironic that Haskell packages seem a lot more brittle than packages in a language like Ruby. It's possible to get into "dll hell" with gems of course, but my limited experience fiddling around with Cabal suggests it can be much more difficult to get exactly the right set of versions in Haskell.
Maybe this means that there are a lot of undetected bugs lurking in the typical gemset but at least you can get something off the ground.
The difference seems to be that dependency management is less painful in languages with dynamic type systems or very simple static type systems than in languages that encode a lot more information into their types.
Maybe it's not fair to generalize from just those few examples though.
cabal-install leaves something to be desired in the dependency management area. There are active projects under way to improve the situation, largely borrowing from Ruby along the lines of rvm and bundler.
> ...if you e.g. change the type of a function, you have to change or comment out all the code that calls that function before it'll compile.
I thought so for a long time too, but then I discovered `undefined`. You can replace any expression with `undefined`: since `undefined` inhabits every type, the code will still compile, and it will explode at runtime when it tries to evaluate it.
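A minimal sketch of that trick (the `area` function is hypothetical; the exception is caught here just to make the runtime behaviour visible):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Placeholder body: 'undefined' inhabits every type, so the module
-- still typechecks even though 'area' is not written yet.
area :: Double -> Double
area _radius = undefined

main :: IO ()
main = do
  -- Compilation succeeded; only forcing the stub fails at runtime.
  r <- try (evaluate (area 2.0)) :: IO (Either SomeException Double)
  putStrLn (case r of
              Left _  -> "area exploded at runtime"
              Right _ -> "area returned a value")
```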
Maybe a year ago I would have been very against a feature like this. But now, after working for quite some time on a Haskell code base that has grown, I have wished for something like this many times. Most often it is when changing intra-module data structures and functions just to try something out, and I do not want to update the rest of the module quite yet. Sure, I could comment out the offending code and replace it with undefined, but that is essentially what this switch will do for free. So I for one am looking forward to this feature.
Looking at the comments, I am shocked how many people are concerned that people will ship things with type errors into production, or that this would somehow relax the standards of Haskell programs, going so far as to suggest that some sort of sabotage should be levied when the flag to enable this is on.
This is a very good, and rather unique, feature. I have wished for an equivalent for a long time when figuring out some stuff in C programs; I cannot imagine a person who has to maintain and change a large program and cannot understand the huge utility of this device.
I don't buy it. Why have erroneous code that is not used anyway in your program? Why not comment it out? If you still need to have that code, there's an easier way to typecheck -- use 'error'. (I don't assume dons doesn't know that, though). There are so many good features one can add to haskell and the surrounding eco-system, but turning off the static type system is not one of them.
EDIT: Okay, I can see the point. The linked ticket http://hackage.haskell.org/trac/ghc/ticket/5624 gives a better motivation: being able to load a module that doesn't type check in GHCi and view inferred types, and, maybe, invoke some isolated functions. But why not just limit it to GHCi?
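The GHCi workflow from that ticket looks roughly like this (illustrative session; `Stub.hs` stands in for a hypothetical module with a type error in one definition):

```
$ ghci -fdefer-type-errors Stub.hs
-- the type error is reported as a warning instead of aborting the load
ghci> :type someWellTypedFunction   -- inferred types are still available
ghci> someWellTypedFunction 42      -- untouched definitions still run
```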
And, FFS, would SPJ and Co. please stop breaking core libraries and tools with minor releases? We had to wait for cabal-install to be ported for 7.2 and 7.4 for a couple of months, at least. Why have the bloody Haskell-Platform if you can't keep up with the compiler releases.
> Why have the bloody Haskell-Platform if you can't keep up with the compiler releases.
The HP doesn't chase the compiler. It is supposed to mean stable 6 monthly dev cycles, independent of what GHC HQ is up to. It is explicitly not about chasing the bleeding edge GHC.
I wasn't talking about the bleeding edge GHC -- merely the stable releases available to download from the official web page, with both sources and binaries. If those are "bleeding edge", they should clearly be labelled as such. Anyway, I don't even use the HP -- which leaves cabal-install as the only viable option. And why would I even want to use a GHC version that is not in the HP? Well, surprise-surprise, it turns out that Hackage is more than happy to build packages with the "bleeding edge" compiler and will happily report all the compilation errors against it (instead of using the usable baseline version included in the HP). Well, it's a sore topic, to say the least. Anyway, sorry for going off-topic; I think we should get back to discussing why GHC would want to become a Python interpreter.
Sure, for small programs, that's probably a better approach. But this could really come in handy when your program is split across 100 files and you change a central data type. Now you can update and test in batches without having to refactor 1000s of lines in one go.
A common purpose of changing something is fixing something that was done wrong.
Consider, for example, fixing the Monad class to subclass Applicative. Or to remove the "fail" or (>>) methods.
This would incur a change to many thousands of lines of code.
Similarly, a core/base type of a whole project can be used across an entire codebase. It is possible to discover desirable changes to such a base type, and it will incur a huge change.
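For concreteness, the Applicative change above can be sketched like this (the primed names are hypothetical, chosen to avoid clashing with the Prelude; every instance of such a class across every dependent package would have to be touched):

```haskell
-- Making Applicative a superclass of Monad: each Monad instance
-- must now also have Functor and Applicative instances in scope.
class Applicative m => Monad' m where
  returnM :: a -> m a
  bindM   :: m a -> (a -> m b) -> m b

-- One of the many thousands of instances that would need revisiting:
instance Monad' Maybe where
  returnM = pure
  bindM   = (>>=)

main :: IO ()
main = print (bindM (Just 2) (\x -> returnM (x * 3)) :: Maybe Int)
```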
Yes, things like changes to core library types are disastrous. But, if you have a core data-type that you know will change and you haven't isolated the impact of these changes by, say, using accessor functions, you're the one to blame. See my long-ish reply in the sibling thread for details. And, yeah, I've learned all this the hard way :(
One of the nice things about Haskell, is that changes that are disastrous in other languages are not necessarily disastrous.
I think any notion of blame is really irrelevant here. The discussion was whether this kind of feature is useful. I think you implied it wasn't, because the cases it would be useful in are those in which you did something wrong. But that does not follow at all. It is very possible that one is to blame for a horrible mistake in the codebase, and that it still needs to be, and can be, fixed. This feature makes that fix cheaper and more practical.
I mentioned earlier that I found the GHCi use-case extremely useful (see my root comment after EDIT). But I have an issue with uses like turning off the type system and having the project compile just so that you can feel good about yourself. I don't believe that would help your bottom line. But, as long as I don't have to work with you or your software, I don't care.
And, large refactoring efforts are necessary in every complex and evolving system, irrespective of your language of choice -- but turning off your type checker or any other static guarantees isn't the way to go. Programming is hard enough even in the presence of type-checking and automated analyses -- why make it even harder? Anyway, I feel like I've made my point enough -- take it or leave it.
> But I have an issue with the uses like turning off the typesystem and have the project compile just so that you could feel good about yourself
That's a strawman. The claim is that it is useful to be able to compile a partially-valid program so that you can:
* Test actually running stuff
* Get the inferred types of expressions
The idea is to temporarily turn it off just to run some tests or infer some types, and then turn it on again for the rest of the work.
I think that you're arguing against a point that no one is making. Everyone agrees you should not use this feature for any other purpose except to temporarily allow some tests and exploration.
Hmmm, you're probably right.
I had in mind something like GenStgExpr[1], which makes up the STG type in GHC. I'd imagine that with all the optimisations, serialization and code-gen based directly off that data structure, any significant change will have at least a 1000-line knock-on effect. Perhaps I'm overestimating, though. I'd guess it comes down to the size of the project.
GHC is a bit of a pathological case, which doesn't make it less valid. I wouldn't really dare to estimate how much refactoring you need to do if you change, say, one constructor of GenStgExpr, primarily because I don't know the GHC codebase. In my work I have had instances when I had to change datatypes drastically and refactor the code accordingly. I'd say in all cases the type system was helpful -- I just went to every error location (very easy with, e.g., ghc-mod), and it was invariably a pattern-matching case, so I fixed a bit of code here and there (20 LoC at most) and it all worked. But the thing is, of course, if you change your datatypes all the time, even these kinds of small changes will soon become tedious. But then, if one programs in Haskell in a hacking mode ("meh, let's try throwing this thing in, maybe it will miraculously start working") -- it just won't work. Haskell requires one to think well before programming -- that's why I love it. I think, in the end, it saves time.
Another problem might be that the datatype you are refactoring is fundamental to your program and is used in most modules -- I think it's just short-sighted design. GHC, for example, has several different program representations (Haskell, STG, Core, maybe some others), so a change to one representation would only affect a part of the compilation pipeline (namely, the translation to and from that rep and the associated optimisations/analyses). But if you really need such a fundamental mega-type that will change during the lifetime of the program, why not write an easier to manage interface to it? Either use the record syntax or write setter/getter functions by hand. Just $ my $0.02 where my=id
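A tiny sketch of the accessor-function idea (hypothetical names): if callers go through a smart constructor and field accessors instead of pattern matching on the constructor, adding or reordering fields later only breaks this one module, not the whole codebase.

```haskell
-- Record syntax gives us accessors for free; the smart constructor
-- hides how many fields actually exist.
data User = User { userName :: String, userAge :: Int }

mkUser :: String -> User
mkUser name = User { userName = name, userAge = 0 }

main :: IO ()
main = putStrLn (userName (mkUser "alice"))
```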
I'm just wondering how soon someone will push some haskell into production with this flag in place because they can't be bothered to track down all their type errors.
Prod builds would explicitly disable this (and other dangerous flags) with `-Wall -Werror` and friends. Shipping with this on is like shipping with incomplete patterns, which would be caught by `-Wall`.
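Concretely, a release configuration might pin this down with something like the following (hypothetical `.cabal` fragment): deferred type errors surface as warnings, which `-Werror` then makes fatal.

```
-- fragment of a library/executable stanza in a .cabal file
ghc-options: -Wall -Werror
```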
Summary: useful for developing, capital offense for production code.
I'm wondering how soon someone will push some code into production without writing unit tests because they can't be bothered to write unit tests.
People will write bad code regardless of language features. Ignore those people and worry about how people writing good code will use your new feature. If it saves them time, makes their life happier, or lets them write safer code, then it's a win. Similarly, if a feature helps bad programmers avoid today's bad-pattern-du-jour but prevents good programmers from writing good code, you should think twice before adding it. In summary: ignore bad programmers, they can't be saved by language features.
"In this mythical, not yet-existing, but clearly on-the-horizon "Haskell", you'll be able to choose how much safety you want. You'll have "knobs" for increasing or decreasing compile-time checks for any property and invariant you desire."
It might not be "Haskellish" or safe in the way that I or some others are used to, but it does seem to be a clear increase in the expressiveness of the language.
Until "expressiveness" has a defined meaning, I encourage us all to stop using it to describe programming languages. We just beat each other up with it without really saying anything meaningful.
Programs that were right before continue to be right. Programs that were wrong before continue to be wrong. What's changed is that programs which were wrong before can now be wrong at runtime rather than at compile time. The parts of the program that don't explode now wouldn't have exploded before. So I don't think that really changes the expressiveness, whatever that means.
I think this change will do wonders for Haskell marketing but I don't think it will have much effect on the day-to-day lives of Haskell programmers.
This seems like it will be great for Light Table and other such systems. It will be much easier to do the incremental compile-and-show-results workflow on incomplete and in-progress files if type errors can be ignored.
Perfectly valid question. This explains the practice, during incremental development or refactoring, of using `undefined` in stub/placeholder functions: you get the type signatures checked without having to write the function bodies.
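A minimal illustration of that practice (the function names are hypothetical): the signatures are settled, the bodies are deferred, and the rest of the program can already be written against these types.

```haskell
-- Stubs: the module compiles, and callers can be developed and
-- typechecked against these signatures before the bodies exist.
parseConfig :: String -> Either String Int
parseConfig = undefined

renderReport :: Int -> String
renderReport = undefined

main :: IO ()
main = putStrLn "stubs typecheck without bodies"
```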
I've never understood this rationale. Pretty much any feature can be abused by "less experienced" developers. I think you should never leave out useful features just because they could be abused--trying to protect developers from their own incompetence is never going to be completely successful and is rather arrogant at that.
I could see leaving out or modifying features that lead to a lot of mistakes, like manual memory management, but this is just a flag useful for debugging--you never have to use it and it does not affect your code at all if you don't use it.
Also, this feature is more like replacing unsafe functions with undefined rather than making everything dynamically typed. For example, it will always error if you run a poorly typed function, regardless of what argument you pass in. And, of course, if you're worried about your co-workers using this flag, you can just recompile without it and fix all the errors.
How? Your continuous integration system won't have this flag on, so their non-typechecked code will never reach production. What they do while developing is not much of a concern; this is simply a faster way to put "--" in front of a lot of lines of code.
This is not about making Haskell dynamically typed. It's about not type-checking code that never runs.
Is Haskell suffering from a research language problem where the only things considered valuable are things that can be made into papers, i.e. only new additions?