
There are plans to research an optional borrow checker for Hare. Hare also offers many "safety" advantages over C: checked slice and array access, exhaustive switch and match, nullable pointer types, less undefined behavior, no strict pointer aliasing, fewer aggressive optimizations, and so on. Hare code is much less likely to contain these classes of errors than comparable C code.
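
For a concrete sense of the error classes those features target, here's a minimal sketch in C (not Hare) of what each of them looks like when nothing checks it:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int buf[4] = {1, 2, 3, 4};
        int i = 4;

        /* Unchecked array access: C happily compiles and runs this;
           a checked slice/array access aborts instead of reading garbage. */
        printf("%d\n", buf[i]);

        /* Nullable pointer: nothing forces a null check before dereferencing. */
        int *p = NULL;
        if (rand() % 2)
            p = &buf[0];
        printf("%d\n", *p);          /* possible NULL dereference */

        /* Non-exhaustive switch: BLUE is silently unhandled; an exhaustive
           switch would refuse to compile until every case is covered. */
        enum color { RED, GREEN, BLUE } c = BLUE;
        switch (c) {
        case RED:   puts("red");   break;
        case GREEN: puts("green"); break;
        }
        return 0;
    }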

I would ultimately just come out and say that we have to agree to disagree. I think that there is still plenty of room for a language that trusts the programmer. If you feel differently, I encourage you to invest in "safer" languages like Rust - but the argument that we're morally in the wrong to prefer another approach is not really appreciated.




I think a section on safety might be worthwhile. For example, Zig pretty clearly states that it wants to focus on spatial memory safety, which it sounds like Hare is going for as well.

That's certainly an improvement and worth noting, although it obviously leaves temporal safety on the table.
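
To make the spatial/temporal distinction concrete, here is a tiny C sketch of each failure class (purely illustrative):

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Spatial violation: writing outside the bounds of an allocation.
           Bounds-checked slices and arrays (the Zig/Hare approach) catch this class. */
        char buf[8];
        memset(buf, 'A', 16);        /* writes 8 bytes past the end of buf */

        /* Temporal violation: using memory after its lifetime has ended.
           Bounds checks don't help here; this is what borrow checkers and GCs address. */
        char *p = malloc(8);
        free(p);
        p[0] = 'B';                  /* use-after-free */

        return 0;
    }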

> but the argument that we're morally in the wrong to prefer another approach is not really appreciated.

Well, sorry to hear it's not appreciated, but... I think developers should feel a lot more responsibility in this area. So many people have been harmed by these issues.


> I think developers should feel a lot more responsibility in this area.

I think most programmers would agree with that sentiment. Getting everyone to agree on what is "responsible" and what isn't, however...

Hare is a manifestation of the belief that in order to develop responsibly, one has to keep their software, and their code, simple.

An example of what I mean by this: an important feature of Rust is its use of complex compiler features to facilitate the development of multithreaded programs and ensure temporal safety. In Hare, programmers are encouraged to keep their software single-threaded, because even with features like Rust's, concurrent programs turn out to be much more complex to write and maintain than sequential ones.

Keeping software single-threaded also eliminates many of the ways a program can fail in the absence of compiler-enforced temporal safety.
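
As an illustration (a C sketch, not Hare code): two threads bumping a shared counter without synchronization silently lose updates, and the usual fix is to add locking or atomics, which is exactly the kind of complexity a single-threaded design never has to carry.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;            /* shared, unsynchronized */

    static void *work(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                  /* data race: load/increment/store interleave */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, work, NULL);
        pthread_create(&b, NULL, work, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Usually prints less than 2000000; a single-threaded loop can't lose updates. */
        printf("%ld\n", counter);
        return 0;
    }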


Single-threaded development seems like a noteworthy goal, and I partially agree that it often leads to much simpler code and works well in systems like Erlang. But it is also a questionable focus in the days of barely increasing single-core performance, especially for a systems language.

I believe one of the reasons Rust got so popular is that it made concurrency much easier and safer right at a time where the need for it increased significantly.

If that is the recommendation, maybe the standard library could instead focus on easily spawning and coordinating multiple processes, with very easy-to-use inter-process communication.
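
Something along these lines, sketched with POSIX primitives in C; the shape here is only a guess at what an easy process-coordination API might wrap:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Minimal "spawn a worker process and talk to it over a pipe" sketch. */
    int main(void) {
        int fds[2];
        if (pipe(fds) == -1)
            return 1;

        pid_t pid = fork();
        if (pid == 0) {                     /* child: the worker */
            close(fds[0]);
            const char *msg = "result from worker";
            write(fds[1], msg, strlen(msg));
            close(fds[1]);
            _exit(0);
        }

        /* parent: the coordinator */
        close(fds[1]);
        char buf[64] = {0};
        read(fds[0], buf, sizeof(buf) - 1);
        close(fds[0]);
        waitpid(pid, NULL, 0);
        printf("got: %s\n", buf);
        return 0;
    }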


Unfortunately you can't make things faster by making them concurrent, at least not in the way computers are currently designed. (And they're probably designed near-optimally unless we get memristors.) In my experience it's the opposite: you can often make concurrent programs faster by removing the concurrency, because it adds overhead, and such programs tend to wait on nothing or make unnecessary thread hops. Removing it also makes them more power-efficient, which is often more important than being "faster".

Instead you want to respect the CPU underneath you and the way its caching and out-of-order execution work.


It sounds like Hare has a philosophy around safety here and it's just not documented - at least not where I could find it, scrolling around a bit.

I did find this page, though: https://harelang.org/blog/2021-02-09-hare-advances-on-c/

I found it very interesting.


That’s just not realistic. We live in a complex world, and we need complex software. That’s why I vehemently hate these stupid lists: http://harmful.cat-v.org/software/

Don’t get me wrong, it is absolutely not directed at you, and I absolutely agree that we should strive for the simplest solution that covers the given problem. But don’t forget that essential complexity can’t be reduced; the only “weapon” we have against it is good abstractions. Sure, some very safety-critical parts can and perhaps should be written in a single-threaded way, but it would be wrong not to use the user’s hardware to the best of its capability in most cases, imo.


In a world where 99% of software is still written to run on 4-16 core machines, and does tasks that any machine from the last 10 years could easily run on a single thread if it were just designed more simply instead of wasting tons of resources...

I'd wager that most of the applications that need to be "complex" in fact just haven't sorted out how to process their payloads in an organized way. If most of your code has to think about concurrent memory accesses, something is likely wrong. (There may be exceptions, like traditional OS kernels.)

As hardware gets more multithreaded beyond those 16 core machines, you'll have to be more careful than ever to avoid being "complex": when you appreciate what's happening at the hardware level, you'll start seeing that concurrent memory access (across cores or NUMA zones) is something to be avoided except at very central locations.
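
A small C illustration of the cost (false sharing): each thread below touches only its own counter, but because both counters share a cache line, the line ping-pongs between cores. Padding each counter out to its own cache line (typically 64 bytes, an assumption here) removes the contention and usually makes this several times faster.

    #include <pthread.h>
    #include <stdio.h>

    /* Both counters live on the same cache line, so two cores writing them
       contend even though they never touch the same variable. */
    static struct { long a; long b; } counters;

    static void *inc_a(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000; i++) counters.a++;
        return NULL;
    }

    static void *inc_b(void *arg) {
        (void)arg;
        for (long i = 0; i < 100000000; i++) counters.b++;
        return NULL;
    }

    int main(void) {
        pthread_t ta, tb;
        pthread_create(&ta, NULL, inc_a, NULL);
        pthread_create(&tb, NULL, inc_b, NULL);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        printf("%ld %ld\n", counters.a, counters.b);
        return 0;
    }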

> essential complexity can’t be reduced

I suggest looking at accidental complexity first. We should make it explicit instead of using languages and tools that increasingly hide it. While languages have evolved ever more complicated type systems in order to be (supposedly) safer, the perceived complexity of code written in these languages isn't necessarily connected to hardware-level complexity. Many language constructs (e.g. RAII, async ...) strongly favour less organized, worse code just because they make it quicker to write. Possibly that includes checkers (like Rust's?): even though they can be used as a measure of "real" complexity, they can also be used to guardrail a safe path to the worst possible complex solution.


> I suggest looking at accidental complexity first. We should make it explicit instead of using languages and tools that increasingly hide it.

The languages that hide accidental complexity to the greatest extent are very high level, dynamic, "managed" languages, often with a scripting-like, REPL-focused workflow. Rust is not really one of those. It's perfectly possible to write Rust code that's just as simple as old-school FORTRAN or C, if the problem being solved is conducive to that approach.


I don't see it as an improvement, because they are taking what Modula-2 already offered in 1978 in terms of systems programming safety.

I guess not having to type keywords in caps (because who uses IDEs/smart editors?) and having a C-like syntax is the selling point.


I don't really want to engage with the RESF. We have the level of safety that we feel is appropriate. Believe me, we do feel responsible for quality, working code: but we take responsibility for it personally, as programmers, and culturally, as a community, and let the language help us: not mandate us.

Give us some time to see how Hare actually performs in the wild before making your judgements, okay?


I'm a security professional, and I'm speaking as a security professional, not as an evangelist for any language's approach.

> Give us some time to see how Hare actually performs in the wild before making your judgements, okay?

I'm certainly very curious to see how the approach plays out, but only intellectually so. As a security professional I already strongly suspect that improvements in spatial safety won't be sufficient to change the types of threats a user faces. I could justify this point, but I'd rather hand wave from an authority position since I suspect there's no desire for that.

But we obviously disagree and I'm not expecting to change your mind. I just wanted to comment publicly that I hope we developers will form a culture where we think about the safety of users first and foremost and, as a community, prioritize that over our own preferences with regards to our programming experience.


I am not a security maximalist: I will not pursue it at the expense of everything else. There is a trend among security professionals, as it were, to place anything on the chopping block in the name of security. I find this is often counter-productive, since the #1 way to improve security is to reduce complexity, which many approaches (e.g. Rust) fail at. Security is one factor which Hare balances with the rest, and I refuse to accept a doom-and-gloom the-cancer-which-is-killing-software perspective on this approach.


You can paint me as an overdramatic security person all you like, but it's really quite the opposite. I'd just like developers to think more about reducing harm to users.

> to place anything on the chopping block in the name of security.

Straw man argument. I absolutely am not a "security maximalist", nor am I unwilling to make tradeoffs - any competent security professional makes them all the time.

> the #1 way to improve security is to reduce complexity

Not really, no. Even if "complexity" were a defined term, I don't think you'd be able to support this. Python's pickle makes things really simple - you just dump an object out, and you can load it up again later. Would you call that secure? It's a rhetorical question, to be clear; I'm not interested in debating this.

> I refuse to accept a doom-and-gloom the-cancer-which-is-killing-software perspective on this approach

OK. I commented publicly that I believe developers should care more about harm to users. You can do with that what you like.

Let's end it here? I don't think we're going to agree on much.


> There is a trend among security professionals, as it were, to place anything on the chopping block in the name of security.

I really have to disagree with this, in spite of not being a security professional, because history has proven that even a single byte written unexpectedly (whether via buffer overflow or dangling pointer) can be disastrous. Honestly I'm not very interested in other aspects of memory safety; it would even be okay if such an unexpected write reliably crashed the process or equivalent. But that single aspect of memory safety is crucial, and disavowing it is not a good response.
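
For instance, a single off-by-one write is enough to flip a flag that was never supposed to be writable; a deliberately tiny C sketch (the struct layout and little-endian assumption are just for illustration):

    #include <stdio.h>
    #include <string.h>

    /* The flag happens to sit right after the buffer; real exploits target
       whatever the compiler/allocator actually places there. */
    struct user {
        char name[8];
        int  is_admin;
    };

    int main(void) {
        struct user u = { "", 0 };

        /* Attacker-controlled input: 8 'A's followed by the byte 0x01.
           strcpy writes 10 bytes (including the NUL) into an 8-byte field,
           so the extra bytes land in is_admin. */
        const char *input = "AAAAAAAA\x01";
        strcpy(u.name, input);

        printf("is_admin = %d\n", u.is_admin);   /* likely prints 1 */
        return 0;
    }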

> [...] the #1 way to improve security is to reduce complexity, [...]

I should also note that many seemingly simple approaches are complex in other ways. Reducing apparent complexity may or may not reduce latent complexity.


History has also proven that every little oversight in a Wordpress module can lead to an exploit. Or in a Java Logger. Or in a shell script.

And while maybe a Wordpress bug could "only" lead to a user password database being leaked rather than the complete system being compromised, there is a valid question as to which is actually worse from case to case.

Point is just that from a different angle, things are maybe not so clear.

Software written in memory-unsafe languages is among the most used on the planet, and in many cases could not realistically be replaced by safer languages today. It could also be the case that while bugs-per-line might be higher with unsafe languages, the bang-for-buck (useful functionality per line) is often higher as well (I seriously think it could be true).


Two out of your three examples are independent of programming languages. Wordpress vulnerabilities are dominated by XSS and SQL injection, both of which are natural issues arising at the boundary between multiple systems. The Java logger vulnerability is mostly about unjustified flexibility. These bugs can occur in virtually any other language; solutions to them generally increase complexity, and Hare doesn't seem to significantly improve on them over C, probably for that reason.
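
For example, injection works just as well in C as anywhere else, and the fix is the same discipline (parameterized queries) regardless of language. A sketch using SQLite's C API, where db and name stand in for an open connection and untrusted input:

    #include <sqlite3.h>
    #include <stdio.h>

    void lookup(sqlite3 *db, const char *name) {
        /* Vulnerable: untrusted input spliced into the query text.
           name = "x' OR '1'='1" changes the meaning of the statement. */
        char query[256];
        snprintf(query, sizeof(query),
                 "SELECT id FROM users WHERE name = '%s';", name);
        /* ... running `query` via sqlite3_exec would execute the attacker's SQL ... */

        /* Safe: a parameterized statement keeps data out of the SQL text. */
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1;",
                               -1, &stmt, NULL) == SQLITE_OK) {
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("id = %d\n", sqlite3_column_int(stmt, 0));
            sqlite3_finalize(stmt);
        }
    }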

By comparison, memory safety bugs and shell script bugs mostly occur in specific classes of languages. It is therefore natural to ask new languages in these classes to pay more attention to eliminating those sorts of bugs. And it is, while not satisfactory, okay to answer in the negative while acknowledging those concerns; Hare is not my language after all. Drew didn't, and I took great care to say just "not a good response" instead of something stronger for that reason.


> #1 way to improve security is to reduce complexity

If managing memory lifetime is an inherently complex problem (which it is), the complexity has to live somewhere.

That somewhere is either in the facilities the language provides, or in user code and manual validation.
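
A small C illustration of the latter: the ownership rule exists only as a comment and a calling convention, and nothing but review and tooling enforces it (the names are invented for the example):

    #include <stdlib.h>
    #include <string.h>

    /* Convention, documented but not enforced: the caller owns the returned
       buffer and must free it exactly once. In a language with lifetime
       facilities this rule is carried by the compiler; here it lives in a
       comment and in every call site's discipline. */
    char *copy_name(const char *src) {
        char *dst = malloc(strlen(src) + 1);
        if (dst != NULL)
            strcpy(dst, src);
        return dst;                 /* ownership transfers to the caller */
    }

    int main(void) {
        char *name = copy_name("hare");
        /* ... use name ... */
        free(name);
        /* Nothing stops a later edit from using or re-freeing `name` here;
           catching that is manual validation (review, sanitizers, fuzzing). */
        return 0;
    }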



