Hacker News

"Even though theoretically, type theories and type systems are not enough to prevent all the problems in logic and programming, they can be improved and refined to prevent an increasingly number of problems in the practice of logic and programming."

It's actually just a belief. Nothing suggests that type systems and type theories can be improved to be practical at preventing bugs. I'd say the opposite: even with as much understanding of the nature of bugs as we have today, they don't look very promising, and probably wouldn't even make the top ten among different approaches.



Incendiary though the phrasing is, I'm halfway inclined to agree with you. There's a lot to be said for a solid, simple, strong typing system. But some of the more sophisticated type systems that I've either seen or read research literature about (I had a colleague who was doing PhD research in 'refinement types') seem to be hitting diminishing returns: harder and harder for programmers to use, while catching an ever-diminishing number of bugs.

If you allotted me a finite amount of effort to (a) put a codebase into a strong type system, (b) festoon it with lots of pre- and post-conditions and asserts, (c) run it through every static checker under the sun, (d) build a huge suite of tests, (e) formally verify its critical algorithms with (say) Z3, (f) carry out fuzz testing, ... I'd probably say "do the easy stuff from most of these categories" rather than "pick the One True Path and trust that it will save you from all your bugs".


There are type systems that prevent race issues, use after free, and null pointer exceptions.

So saying that they aren't practical at solving bugs is a little disingenuous.


Well... how much of actual software is written using those type systems? Less than 1%? So such type systems might be able to prevent bugs, but in practice, they don't.

Why aren't they used? Probably existing code bases, inertia, and ignorance play a role. I suspect, though, that at least part of the problem is that most programmers find those type systems too hard to use. In that sense, the type systems aren't practical.

And if you're going to blame the programmers for that, well, if your plan for making programming better requires different programmers than the ones we have, your plan isn't very practical, either.


Today less than 1% of all software is written in null safe languages. But I believe Swift has non-nullable types, and it's the promoted language for a really big ecosystem. There's also Rust and Scala, but those have less of a captive audience. I do have high hopes for Rust, which also is data-race safe.

I think Java 8 and optionals show it doesn't have to be that hard, it's just that there's too much old code that relies on nulls for the Java ecosystem to ever be fully null safe.

Use-after-free is solved in a language without manual memory management, so that's actually quite common.

Programmers get comfortable with new ideas over time. Higher order functions and type inference used to be obscure concepts. Today they're par for the course. I don't know if we'll all use dependent types some day, but I think we'll keep getting more powerful types in mainstream languages for a while.


Kotlin is plenty practical. It was born out of a desire for a safer yet practical language on the JVM (it provides null safety at the type level). It is at least practical enough to be gaining lots of traction.

Rust is another good example. It's not very ergonomic, but it is getting better every release.

I think how those two languages fare will show whether type systems can be made practical for reducing wider classes of bugs. They seem practical and have the backing to help drive adoption.


> So such type systems might be able to prevent bugs, but in practice, they don't.

Rockets might be able to carry humans up to space, but in practice they don't because only a small set of humans actually get to go.

Isn't that a slightly absurd interpretation of "in practice"?

Why these languages aren't used may have absolutely nothing to do with their technical merits. It's a myth that technical merit is the only consideration in language popularity.


Could you say what type systems those are? Thanks.


To protect against NPEs you need to make nullable and non-nullable types distinct and forbid the programmer from using a nullable value without doing a null check first (a successful null check yields a non-nullable reference). One example is any of the languages with Algebraic Data Types, where it is super easy to create an Option type that encapsulates this concept.
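As a minimal sketch of that idea in Rust (one of the languages with Algebraic Data Types), the standard `Option` type plays exactly this role: the compiler refuses to let you touch the inner value until a pattern match, the type-level "null check", has succeeded. The `find_even` function here is just an illustrative example, not anything from the comment above:

```rust
// `Option<i32>` is a distinct type from `i32`: the possibly-absent value
// cannot be used as a plain integer without an explicit check.
fn find_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|&x| x % 2 == 0)
}

fn main() {
    let some = find_even(&[1, 4, 5]);
    let none = find_even(&[1, 3, 5]);
    // Pattern matching is the "null check"; the Some arm yields a
    // non-nullable i32, and forgetting the None arm is a compile error.
    match some {
        Some(x) => assert_eq!(x, 4),
        None => unreachable!(),
    }
    assert_eq!(none, None);
}
```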

For an example of preventing use-after-free, there is the type system used in Rust. It is based on the theory of linear types, which lets you have operations that mark a value as "used" and forbid you from using it again after that point. The same system protects against data races, because you can guarantee that a value is accessible from only one thread at a time.
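A hedged sketch of the data-race half of that claim: moving a value into a spawned thread transfers ownership, so the original thread simply cannot reach it any more. The names below are illustrative, not taken from any particular codebase:

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    // `move` transfers ownership of `data` into the new thread; after this
    // point the main thread cannot access it, so a data race is impossible
    // by construction rather than by runtime locking.
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    // println!("{:?}", data); // compile error: borrow of moved value `data`
    let sum = handle.join().unwrap();
    assert_eq!(sum, 6);
}
```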


Thanks, this should make for some interesting weekend reading.


I'm assuming parent is thinking of Rust. And more generally type systems relying heavily on linear types (I believe).


Rust's borrow checker isn't a type system, is it? Sure, its benefits are similar to a linear type system, but actually it's a separate compiler pass whose internals are described imperatively and don't look type-based to me: https://github.com/rust-lang/rust/blob/master/src/librustc_b...

If anything, I agree with zzzcpan and disagree with swsieber. Types are nice but people oversell them. OO languages appeared at the same time as ML family languages, but one got successful and the other got stuck in the realm of "new ideas".


It wouldn't be possible without an affine (at least) type system such as Rust's.

Affine logic rejects contraction, i.e.

     Γ, A, A ⊢ B
    -------------
      Γ, A ⊢ B
My intuition (so take a liberal dose of salt) is that this pretty directly translates to disallowing reuse: we can't see a type twice and continue (inference) as if we saw it once.

Of course, that's very hand-wavy, and doesn't say (as I believe is the case) that there wouldn't be some other way to proceed through a combination of other rules.
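For what it's worth, Rust makes the "can't use A twice" reading concrete: a non-`Copy` value is consumed by its first by-value use, and a second use is rejected at compile time. A rough sketch, where `consume` is just an illustrative name:

```rust
// `String` is not `Copy`, so passing it by value "uses it up",
// mirroring the rejection of contraction: no implicit duplication of A.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("affine");
    let n = consume(s);        // first (and only) permitted use of `s`
    // let m = consume(s);     // compile error: use of moved value `s`
    assert_eq!(n, 6);
}
```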


It could be at least argued that type systems aren't very effective at reducing meaningful bugs and/or aren't worth the costs they impose, but you went way beyond that argument. As I see it, the only way you can believe that "nothing suggests that type systems...can be improved to be practical at preventing bugs" is because you have willfully ignored or completely discounted every bit of evidence you've encountered that suggests otherwise.


> It's actually just a belief. Nothing suggests that type systems and type theories can be improved to be practical at preventing bugs

Seems you don't know much about types, then. I suggest you look up theorem provers, CompCert, and the TyPiCal language, as but a few examples.


"Practical" is the key word here, I believe.


A key word that has been disproven many times. The most recent example is Rust, whose type checker now handles many previously unsafe C++ idioms. The other examples I listed are also quite practical, and have been used in high-assurance systems.


What are the other approaches for preventing bugs you have in mind that are novel or increasing in application?

We've done testing since the start. Still many bugs. We've done ad hoc modeling (behavior driven) for years with some improvement. Formal methods aren't popular but are successful at least at some (small to the low end of medium) scales or within portions of large scale systems. Type systems can and have been used for codifying concepts from all of these into the program semantics. So I'm not sure how it is that they don't help.


> We've done testing since the start. Still many bugs.

Exactly. As the OP apparently doesn't understand, but every researcher in programming languages does, tests can only prove the presence of bugs; they cannot prove their absence. Types can.


What I've found most effective at finding bugs is to have two systems, with the data represented in two different type systems, and cross-check them at crucial intervals. The second most effective is to check function parameters both when entering the function and when returning from it, for bounds, ranges, etc. And third: study and understand the code, actually writing code that is easy to comprehend. Then tests are very good at preventing regressions. Despite all this there will still be bugs, though :) so debuggability, i.e. effective debugging, is really important!


I guess you're just ignoring the fact that every single programming language in use today has some kind of type system? Are you clamoring for processors that don't even bother to distinguish between register sizes?

Type systems are a necessary part of computing. One can even define computing in terms of how types are transformed. Types are an extremely primitive and fundamental description of how computation happens, and without them, it'd be hard to imagine how anything could be computed at all, let alone correctly.


Lolwut?


He's right. The current type theory craze is just another in a long line of cargo-cult programming fads. First it was pure OOP for everything, then pure FP for everything, and now it's types for everything. Yes, they can be useful, but it's disingenuous to act like they're a cure-all. Most bugs aren't type related, and these extremely elaborate type systems add additional mental overhead.


I find that a significant fraction of the bugs introduced to the code bases I've worked on that have dynamic typing are due to issues that even a rudimentary static typing system would prevent at compile time. I'm struggling to grok how a language with no type system might look. Something post-modern and Picasso-esque? To use data in a meaningful way in a program it must be possible to reason about it, and that requires that it have at least some structure.


Assembly languages are untyped. Nothing prevents you from reinterpreting a memory address as a type it is not. (Except possibly alignment issues.)


That's not quite right though, is it? Nothing prevents one from "creative" interpretation of memory in, say, C, either, but it would be difficult to argue C is untyped.


The C type system prevents you from accidental creative interpretation of memory. You wouldn't say Rust is untyped just because of the `unsafe` keyword either.

No such type safety exists in assembly, except that certain opcode-register combinations are prohibited.
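As a small illustration of the contrast (using Rust here, since it was mentioned above): safe Rust won't implicitly reinterpret an integer's bits as a float; you have to ask for it explicitly, whereas in assembly the same register bytes could be consumed as either with no diagnostic at all:

```rust
fn main() {
    let x: u32 = 0x3F80_0000; // the IEEE 754 bit pattern of 1.0f32
    // let f: f32 = x;        // compile error: mismatched types
    let f = f32::from_bits(x); // explicit, well-defined reinterpretation
    assert_eq!(f, 1.0);
}
```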


Types aren't new. As the article discusses, they date back over a century in math and logic. Within programming, we've had them in every major language for decades. The first big push for strong and expressive type systems came from the functional programming work that led to the ML family, and from the work that produced Pascal and Ada on the imperative (and later OO) side, which dates back some 40-50 years now.


What is "he" right about, exactly? You say "yes they can be useful", but that's not the impression I get from OP. I am also confused because the very quote OP is responding to states that "...type systems are not enough to prevent all the problems in logic and programming...". Do you consider that acting like type systems are a cure-all?


> Most bugs aren't type related, and you're adding additional mental overhead with these extremely elaborate type systems.

Are you? Usually I find it to be the reverse: you're leveraging the type system to track some properties for you, so that you don't have to track them mentally. I.e., less mental overhead (though possibly more boilerplate overhead).


For me, the elaborateness of the type system and its checker REMOVES the overhead. YMMV, obviously.


I think you misread OP, and also misread the comment he was replying to; or, you just needed an excuse to vent about type systems.



