The Hare programming language (harelang.org)
342 points by ddevault on April 25, 2022 | 310 comments



Happy to see “secretlang” out in the open! I have been following development from afar and really like the idea of a “simple” take on a language, but with a lot of wisdom and affordances added since the state of the world back in 1972. In short, Hare seems to be a language that I really should like and I want to take it for a spin.

What I find a bit confusing though is the licensing. I belong to the “BSD school”, but can at least claim that I understand (and respect) how GNU looks at things. Looking at Hare’s licensing [1], I am somewhat stumped and my usual modes of thinking no longer apply. What is the threat that warrants this level of complexity? A commercial fork? A community fork? Proprietary drivers? Are these really realistic enough threats to warrant this level of license complexity rather than just stamping ISC and CC-BY on the whole thing and not worrying about forks at all? Yes, there is some writing in the README, but perhaps I am too thick to “get it”?

[1]: https://git.sr.ht/~sircmpwn/hare#licensing

Lastly, a sanity check: am I reacting too strongly to this if I feel that it has a bit of a chilling effect on my excitement?


This licensing regime is copied almost directly from GCC. The compiler part is GPL and the part that gets embedded into your program isn't. GPL-licensed compilers need to do some kind of dance to allow the outputs from the compiler to be free of GPL restrictions, because typically some components distributed with the compiler are linked into the executable. In this case it's the standard library, so the standard library is MPL instead of GPL. Nothing to be concerned about here. I don't see anything unusual on this page.


The license seems straightforward. You can use and distribute software written in Hare, but if you want to link or make changes to the tooling (compiler, etc) you must open source those.


Yes, legally it is perfectly clear. Just not the why. Is there any other language that has taken a similar approach? Has this been discussed in relation to Hare somewhere apart from the brief chunk of text that I linked? I am perfectly accepting that I may be in error here in terms of how I view licensing, but it bothers me that I can not build a mental model of the intentions of the authors.


It seems basically equivalent to gcc/glibc, which I think is a good default.


The licensing model is essentially designed in deference to users who want to vendor standard library modules in their projects. They can do so, and patch them, but have to publish their patches. Then the compiler is GPL because it does not make sense to vendor it.


I think you are reacting too strongly. Really, what's the problem?


Thank you for the sanity check. It is difficult for me to articulate, really, but I guess I (perhaps in error?) have kept battles over, say, proprietary drivers separate from programming language licensing (just so that there is no confusion here, I am fairly hard-line about the usage of proprietary software for my own personal use; only falling marginally short of the FSF party line). Perhaps seeing this has me fearing that the project’s goals are more ideological than technical (or at least that a core of the community sees them that way?), while I (perhaps naively?) have a more laissez-faire approach, believing that superior software is the way to software freedom and that legal “trickery” is a Faustian bargain in the end. I guess the feeling that arises is akin to when I see GNU projects banning even the mention of proprietary software from their communication channels. It has little effect on my excitement and respect for the software itself, but it comes across (to me) as somewhat misguided zealotry.

Sorry for the mess in terms of my writing. It has been months since I first saw the licensing and I still do not know how to internalise it properly.


I think copyleft is a good way of protecting a creator's intentions. While it's true that 'superior software is the way to software freedom', it's also true that free software is the way to software freedom.

If free software begets other free software simply through its replication, then I think the juice is worth the squeeze


Since the language is officially announced now, here's an interesting analysis on the language: https://tilde.team/~kiedtl/blog/hare/

Personally I'm very interested in playing with it. Generics, functional programming constructs, and a lot of the syntactic sugar you find in modern languages have their uses, but I think there's a lot of merit to the idea that you might not want them in a systems language. I think cleaning up some of C's rough edges and providing batteries while keeping things extremely simple and clear is very compelling. I'm also appreciative that it doesn't seem overly opinionated where it doesn't need to be.

It's kind of a bummer that there's no macOS support, so I'll have to SSH into my Linux box to play with it. I hope the language takes off and support for other platforms materializes. Congratulations to everyone who worked on this.


> Labeled loops

> I'm not yet sure whether this is a good thing or a bad thing. Sure, you can do clever shenanigans now, but something tells me this will make the code's control flow much harder to follow.

The most popular example I can think of is JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Tbh, I think the opposite is true: without the ability to break out of outer loops, I have resorted to shenanigans with an auxiliary variable that I then have to check after the loop completes, obscuring the logic of the program. Of course, it does not have the full power of goto, and in this case I think it is a sensible choice.
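To make the contrast concrete, here is a minimal sketch in Rust (which also has labeled loops; this example is mine, not from Hare's docs) of breaking out of a nested loop directly instead of threading a flag through the outer loop:

    // Search a grid and stop both loops as soon as the needle is found.
    fn find(grid: &[Vec<i32>], needle: i32) -> Option<(usize, usize)> {
        let mut hit = None;
        'rows: for (y, row) in grid.iter().enumerate() {
            for (x, &cell) in row.iter().enumerate() {
                if cell == needle {
                    hit = Some((x, y));
                    break 'rows; // ends the outer loop too; no auxiliary flag needed
                }
            }
        }
        hit
    }

    fn main() {
        let grid = vec![vec![1, 2], vec![3, 4]];
        assert_eq!(find(&grid, 3), Some((0, 1)));
    }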


Author here.

Fair enough! Since writing that I've done a lot of Zig code, which has labeled loops, and I can say it's definitely made a lot of code clearer than it would be otherwise.


I agree, labelled loops are fantastic.

I don't know for sure, but I believe the JavaScript feature was inspired by Perl, which has had them since at least the v4 releases (1989-1993):

https://www.rexswain.com/perl4.html#flow



From that article: "The real question is, does Hare have an[y] business existing when Zig and Myrddin already exist?"

Why do people question whether something such as a programming language has business existing? Do they believe in a zero-sum theory of human attention? By "business existing" are they really questioning whether we should pay any attention to it? Serious question, because I find it somewhat disturbing on some philosophical level.


> Since the language is officially announced now, here's an interesting analysis on the language: https://tilde.team/~kiedtl/blog/hare/

Just a reminder (since I can't see a date in the blog post) that this was written almost a year ago. Lots of things have changed in the language since then.


Yes, it is indeed outdated, however some of the criticism/praise is still relevant. I'll see if I can take a second look at the language in the coming days.


Yeah, I didn't mean to imply it's not relevant.


Hare has generics now?!?!

Because to me it looks like the bulk of the criticism remains substantially relevant.


The tagged union type seems like a defining feature of Hare to me. It is not exactly "tagged" in the explicit sense: it internally has tags, but users don't explicitly see them [1]. This kind of type is normally a headache for language designers (cf. polymorphic variants in OCaml), but the Hare type system is simple enough that it doesn't pose a significant problem. In turn it gives an intuitive model of conversion; you don't have to convert from, say, one type of error to another type of error. Zig's error type is similar in this aspect, but it is not as general as Hare's tagged union type.

[1] I really want to refer to this as an "untagged" union type, but of course this would cause confusion with C-style unsafe unions.
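For contrast, here is a small Rust sketch (the types are made up for illustration) of the explicit conversion step that nominal error enums require - roughly the friction that, per the comment above, Hare's tagged unions avoid:

    #[derive(Debug)]
    struct ParseError;

    #[derive(Debug)]
    enum AppError {
        Parse(ParseError),
    }

    // Without this impl, `?` below would not compile: the ParseError has to be
    // explicitly converted into the broader AppError type.
    impl From<ParseError> for AppError {
        fn from(e: ParseError) -> Self {
            AppError::Parse(e)
        }
    }

    fn parse(s: &str) -> Result<i32, ParseError> {
        s.trim().parse().map_err(|_| ParseError)
    }

    fn run(s: &str) -> Result<i32, AppError> {
        Ok(parse(s)? + 1)
    }

    fn main() {
        assert!(matches!(run("41"), Ok(42)));
        assert!(matches!(run("oops"), Err(AppError::Parse(_))));
    }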


I don't know if the world needs this language, but I'm glad it exists. There's a real charm to C's straightforwardness. It deserves to be expressed without broken array handling and a clunky preprocessor and overzealous UB and so on.

I use Rust and like it well enough but it's amazing that its amount of complexity can work at all. It's healthy to have a counterpoint that stays clear even of generics.


You might want to look into Zig, which has most/all of Hare's features, supports a wider range of platforms and OSes (Hare only does Linux+FreeBSD), and has metaprogramming in the form of comptime expressions (unlike Hare, which has no metaprogramming at all) while being pretty widely considered to be an "easy" language to learn (and everyone I've heard talk about it has said that it's far easier than Rust).


It is impressive how many times Rust is mentioned in this thread.

I agree. Rust is very complicated and it still feels experimental. It is good to have more conservative languages.


Not supporting Windows and macOS will likely hurt the adoption of the language.

Anyway, is it possible to target bare-metal with Hare? Is it possible to use it without the standard library?


> Not supporting Windows and macOS will likely hurt the adoption of the language.

Not among our target audience it won't.

> Anyway, is it possible to target bare-metal with Hare? Is it possible to use it without the standard library?

Yes. Here are two kernels written in Hare that don't use the stdlib:

https://git.sr.ht/~sircmpwn/helios

https://git.sr.ht/~yerinalexey/carrot


I hope you'll reconsider this in the long term, not because I think Windows/macOS are great (my feelings for them range between "ugh" and abject horror depending on the day).

Rather, I think it would be good because it makes a language much more palatable for mainstream adoption if it's possible for programs written to be ported to Windows/macOS. Languages need the positive adoption feedback loop. Rust would not be as mature and stable as it is today if it were relegated to Linux server programming.

That said - I can appreciate keeping a small target for the language in the beginning. It allows you to concentrate efforts and achieve an MVP much sooner than you would be able to if you had to deal with Windows BS :)


The language is standardized. I encourage you to write an implementation for any platform you wish, or maintain a fork of our compiler/stdlib for your platform of choice. We're not interested in it upstream, but that does not prevent the community from providing for these use-cases.


Honestly that really kills the interest I had in the language. That’s not realistically going to happen (a reimplementation) or go well (maintaining a fork) with a (frankly) niche language.

I suspect the pool of people who develop locally (on Win or Mac, and need something functional if not optimal) and deploy to Linux is not small.

Is there a particular reason? io_uring?


As other commenters alluded to, it's an ideological and practical decision. We simply prefer free software operating systems. We do not care to legitimize nonfree platforms, and we prefer to be able to read (and patch) the code to understand the tools we depend on. If that's a deal-breaker for you, no worries - Hare does not have to appeal to everyone to achieve its goals.


For those not familiar with the concept: this is what moral backbone looks like.


This is not "moral backbone" in the slightest - the Hare developers are not risking anything to take this position, unlike others who may lose their jobs, be imprisoned, or even lose their lives for taking a moral stance.


Many fail on "risking leaving money on the table", a long way before "risking losing their life".

Small steps are steps nevertheless.


This may lead the design to be inappropriately coupled to ELF, which isn't the best binary format in the world or anything. There are real differences between Mach-O and PE.


I don't think our design is particularly coupled to ELF. We only really do one thing with it (store the test array in an ELF section), which can be done differently for another platform, and isn't the end of the world if omitted anyway.


The problem here is that focusing on a language can lead to lots of small API design decisions that make it very cumbersome to support other systems later without significant breaking changes.

I can appreciate the ideological and practical purity of that decision, but even Linux aficionados need to deploy code to other operating systems to make a living.


I don't have data to support it, but even for Go, a language that does support macOS and Windows, it does not feel like the stdlib is as mature or performant on those platforms as it is on Linux. I don't have numbers for that yet, just observations from doing simple benchmarks across Linux, macOS and Windows.

Some languages are just clearly designed to be used on the server.


But there's a difference between "it works, but more slowly on <windows or macos> (people's development environments)" and "it doesn't work at all"


Who are your target audience?


To my thinking, users of open-source operating systems


> > Not supporting Windows and macOS will likely hurt the adoption of the language.

> Not among our target audience it won't.

No gamedev?


You can develop games just fine on Linux.


Yes, by targeting wine/proton/win32


that is not what i meant

i don't believe in wine/proton, and i find win32 to be trash

i don't like it, but windows is the platform if you want to find players for your game

and i believe each platform deserves a native release

i'm not a fan of the mindset of lowering your standards to please whoever was too lazy to support X, Y or Z

wine/proton promotes the wrong idea: that the platform shouldn't matter to anyone


> Not among our target audience it won't.

Yes it will. I have no interest in using Linux as a desktop, but I do use it for deployments. If there is one trait of major PLs, it is adoption across the big 3 OSes.

This is Hare’s biggest flaw right now.


Hard disagree. Linux accounts for by far the largest share of server deployments, and you are not doing yourself any favors by not developing on the same platform as your deploy target.


I agree 100%, but that has to be balanced against the fact that many devs need laptops from their employer, and it's way easier to get a good MacBook that just works well enough for development than a Linux machine. Especially when you don't control ordering, or need to inherit machines.

Dev on Mac and deploy on Linux is a popular setup for a reason. Until there's a macbook pro for linux that you can just buy*, it's going to continue to be popular.

* the XPS 13/15 are about as close as you can get


The "Macbook Pro for Linux" may turn out to be the Macbook Pro: https://asahilinux.org/

(at least, once they add drivers for the GPU!)


...until Apple releases the next generation that breaks it. It's a bit sisyphean.


If you're developing on a macbook but targeting Linux, you should probably be using VMs


Macs are overpriced garbage. I cringe when I see one.

There are so many good laptops that work fine on Linux.


With cross compilation or docker for development, I'm not clear on why using Linux is a huge boon to developing servers. Having been down the path of Linux-for-all-things, these days I'd much rather have a nice, fast macbook.


I never said it wasn't the biggest deployment target. I also think it's shortsighted to do this.

> you are not doing yourself any favors by not developing on the same platform as your deploy target

Why would I give up all the tooling I already have, such as VS, to "do development on Linux"? Have you tried doing development from Visual Studio or VSCode on Windows where it targets WSL? It's seamless and I don't really have to care much about Linux as a desktop.


You can do that exact setup just fine with Hare only being able to compile for Linux (or FreeBSD). Providing a Linux runtime environment to run code that targets Linux is WSL's whole purpose.


And here is the entire problem.

> code that targets Linux

This is too big of an assumption to make. For instance, the last company I worked for does contract work with the DoD where the output of the project is two installers, for Windows and macOS, despite the product working flawlessly on Linux.

There is a HUGE world out there where your deployment targets won't be Linux servers, and this is a huge blind spot for a language that wants to be mainstream.


Nobody ever said it wanted to be mainstream. It targets a niche, quite blatantly.


Not based on what's going on in this thread. There are comparisons with mainstream languages everywhere.


It is fine to compare it to the mainstream languages, that's what programmers do.


Modern web programming is almost always deployed on Unix-like operating systems / Linux. The biggest reason to allow running Hare on Windows/macOS would just be for developers who don't use Linux.


Correct, and they outnumber the developers using only Linux, right? Why cut out a substantial amount of your possible user base?


Inversely, why not? Why aim for a large user base? That doesn't sound aligned with the ideology behind this language

Keeping the scope niche/small seems like a feature to me, not a bug. Let non-FOSS OS users find something else (?)

I am personally no FOSS zealot but I can respect a project that has a firm idea of its scope and targeted user base and wishes to stay limited to that

Not everything needs to be huge or growth hacked to death


> I have no interest in using Linux as desktop

It sounds like you're not the target audience then?


Supporting only FOSS operating systems is one of the biggest features right now.


Though I haven't tried it, I understand that some people do development inside a Docker container, so maybe you don't need to run Linux on your desktop?

It seems like a trick that's worth learning.


I'm aware of all these tricks, including VSCode devcontainers. It's just a higher barrier no matter how you slice it. The solutions with the least amount of resistance tend to win.

Look how easy it is to try these. I don't have to set up VMs or even know what make is to do anything.

- dotnet: https://dotnet.microsoft.com/en-us/download

- rust: https://www.rust-lang.org/learn/get-started

- go: https://go.dev/dl/

If you WANT to be mainstream, you need to go the extra mile.


I think Hare's team DON'T want Hare to be mainstream.


I am wondering: if Hare compiles for some UNIX systems, then it should be "easy" to port it to macOS.


I agree that a macOS port should be relatively straightforward.


I really respect Drew and his work/advocacy. I think what most enchants me about Hare is that it's not a "take over the world" language. It's pure FOSS engineering: there's nothing out there that does what I want, so I'll (build a community to help me) build it. Love it.


I contributed to this project for this reason. Hare exists as itself and for a purpose. The libre ethos, the simplicity, the design, with people and the future in mind. The diligence in having a standard, the commitment to supporting libre platforms, and not bending the knee to non-free ones just because they are the norm; e.g. not being afraid to dream. Not adopting whatever hype-train technology bandwagon or corporate interest. Hare stands out as something special.


This is why I will be using it, besides the fact that it's designed to be a modern C replacement that feels like C, but without the bad parts.



lol my other post got flagged, so let me reiterate perhaps in a less inflammatory way.

It is disappointing to see that "trust the programmer" is a design goal. Programmers can not be trusted with manual memory management. We have decades of proof, billions and billions of dollars of bug fixes and mitigation investments, real world damages, etc.

Building a language like this and saying you hope it will be the foundation for new operating systems is... depressing. It's setting us up for another century of industry failure - buggy software that makes users less safe.

It's not to say that memory unsafe languages have no place. Toy programs, or programs not exposed at all, are fine. But that's clearly not the case here - the stated use cases are things like the OS, "networking software", etc. All of the places where C has caused incredible harm.

edit: It would be wrong not to note that Hare does consider memory safety. https://harelang.org/blog/2021-02-09-hare-advances-on-c/

There are clearly wins here, no question in my mind that a world where spatial memory safety is the default is a better world than today. It doesn't change my view overall, however, that for the use cases defined that the bar needs to be higher.

I am also compelled to say something nice about the language. Most apparent is that it looks very approachable - I have to wonder what the '!' means (I can guess), but otherwise it looks very readable. I also like the explicit nature, that's my preference for programs as well as I find it's much more readable.

I think "simplicity" can be a tricky goal, but I like seeing languages call it out as one - I'm very curious to see over the next few decades how "simple" plays out.


I can sort of understand where you're coming from — manual memory management can be difficult, and doing it improperly can cause bugs. However, in my experience, we're very far from having a magical solution for memory management. C++ definitely isn't it, and while Rust does bring significant advances in this field, it's a very large and complicated language. Unfortunately, the memory management strategy of every other language I've tried introduces performance penalties that make it unsuitable for e.g. video games. Trust me, I really wish this weren't the case! :(

Until we have some kind of significantly better solution that solves all memory management problems, I would rather work in a simple language that lets me carefully do everything myself, and if that language is also an improvement over C, I'm happy. However, that's just me, and I can fully appreciate that others are free to choose the tools that are good for them!


I hear you, but I think the problem is that you're framing this as "I, the developer, don't want to accept these costs". And that's fine when the software doesn't leave your system.

The problem is that you're then pushing other costs onto your users, i.e. exploitable software. So from the developer perspective, great, it works for you, but the cost is there.

I'm sympathetic to not wanting to use the other languages available, I'm not saying that any other language is doing things the "right" way, there's room for a lot of improvement. But I personally think that setting out to build new systems software in a memory unsafe language is setting users up for very serious harm.


I think I understand your view better now. Are you aware of any current memory management strategies (implemented as part of a language or otherwise) that perform well in situations with high performance requirements? For example, as someone who works on video games and real-time audio, most options seem non-starters to me aside from Rust, even if I decided to make sacrifices for the sake of security, and I at least have the impression I've explored this space quite a bit. Anyway, I would be happy to learn more about minimal memory safety strategies that don't require massive scaffolding and also allow for high-performance situations.


Not in mainstream languages. There's a lot of ongoing research in the space. Otherwise, Rust is probably the most mainstream language that achieves your goals.

Games are a bit different imo. While they're often networked they tend to not get attacked the same way as other software for a variety of reasons (though some games become so popular that it becomes worthwhile, like Minecraft). If a language set out to be "safer" (ie: improve temporal safety) but still prioritized performance, and emphasized its use case as being gaming, or explicitly for non-security-sensitive use cases, I'd be a lot more onboard with that. Jai seems to be driving towards that.

My issue with Hare is that it's presented (both on its page and in this HN thread) as being a language for general systems work.


Thank you for all of your feedback, I hope you end up at least trying Hare for the use cases that feel right to you! :)


I appreciate that. I really do hate to be critical of open source work. I just feel this is an important issue.


Not staticassertion, but I'm a hobbyist in real-time audio. I like Rust as a vocabulary for describing/teaching safe programming (&/&mut/Send/Sync). I find that multithreaded programs written in Rust are usually correct while multithreaded programs written in C++ are usually wrong, because Rust encodes the rules of shared-memory threading in its type system (&T: Sync objects are thread-shared, but are either immutable or atomic or requires locking to acquire a &mut T). I also appreciate guiding users towards exclusive references (&mut) to make it easier to reason about code. However I find it makes it too difficult to mutate through shared references or write unsafe code (passing Stacked Borrows while lending out &mut is more like solving puzzles than writing imperative code, and writing code that never touches & and &mut is a quagmire of addr_of_mut!() and unsafe blocks on every pointer dereference), and the Rust community appears uninterested in making unsafe programming more ergonomic.

Personally I'm a fan of C++'s unique_ptr/& as an unsafe escape hatch from Rust's single ownership or runtime refcounting overhead. It's at least as safe as Rust's unsafe pointers, and far more pleasant to use. Qt's QObject ownership system is reasonably ergonomic and QPointer is fun (though dangerous since it can turn into null unexpectedly), but Qt uses it pervasively (rather than only when safe memory management fails), relies on prose documentation to describe which pointers transfer ownership or not (resulting in memory management bugs), and QObject child destruction and nulling-out QPointers relies on runtime overhead. I haven't tried ECS or generational indexes yet, but those are popular in games, and Vale has its own ideas in this field (https://verdagon.dev/blog/generational-references).

On an aesthetic/principled level, I'd rather punt alias analysis to the programmer (pointer/restrict or &/&mut) rather than compiler complexity/magic (TBAA and provenance checking). Glancing at https://harelang.org/specification/, it seems Hare lacks an equivalent of restrict/&mut, and I wonder if that prevents the compiler from ever adding support for removing redundant loads/stores through pointers.


> On an aesthetic/principled level, I'd rather punt alias analysis to the programmer (pointer/restrict or &/&mut) rather than compiler complexity/magic (TBAA and provenance checking).

That would certainly be nice, but the state of the art on what the problems even are is far ahead in optimizing compilers compared to anywhere else - "having a restrict keyword" doesn't solve every aliasing problem afaik, and nobody respects the performance people when they tell you undefined behavior in C is actually useful. So nobody has come up with a simple solution for a better language that solves problems like pointer provenance and yet is "faster than C".

Actually, most people's ideas of how to make programs faster are complicated things like autovectorization that don't work and would make them slower.


> However I find it makes it too difficult to mutate through shared references

It's not that difficult, you just need to use UnsafeCell<…> or one of its safe derivatives (each of which has some potential runtime overhead) to keep the semantics tractable.
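For anyone following along, a tiny sketch (mine, not the parent's) of one such safe derivative: RefCell moves the exclusive-access check to runtime, so shared references can still mutate, at the cost of a borrow-flag check:

    use std::cell::RefCell;

    fn main() {
        let value = RefCell::new(1);
        let a = &value; // two shared references to the same cell
        let b = &value;
        *a.borrow_mut() += 1; // runtime-checked exclusive borrows, one at a time
        *b.borrow_mut() += 1;
        assert_eq!(*value.borrow(), 3);
    }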


One of the strange things about Rust is the &UnsafeCell<T>/*mut T dichotomy. &UnsafeCell<T> is easier to work with, and you can soundly acquire &mut T as long as they never overlap, but you can't turn a Box<UnsafeCell<T>> into a &UnsafeCell<T> and back to a Box<UnsafeCell<T>> to delete it, because provenance or something.

*mut T is harder to work with; this is UB according to Miri since you didn't write `&mut x as *mut i32 as *const i32`:

    let mut x = 1;
    let px = &mut x as *const i32;
    unsafe {
        *(px as *mut i32) = 2;
    }
Problem is, most APIs won't give you a &UnsafeCell<T> but rather a &mut T. Not sure if you can convert a &mut T to a &UnsafeCell<T> (you definitely can't using `as`). If you want to create multiple aliasing pointers into a non-UnsafeCell type or struct field, one approach (basically a placeholder since &raw isn't stable, https://gankra.github.io/blah/fix-rust-pointers/#offsets-and...) is:

    use std::ptr::addr_of_mut;

    let mut x = 1;
    let px = addr_of_mut!(x);
    unsafe {
        *px = 2;
    }


Cell::from_mut converts from `&mut T` to `&Cell<T>` and that's a newtype around UnsafeCell - it has the same `as_ptr` method.
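A short sketch of what that looks like in practice (my example, not the parent's):

    use std::cell::Cell;

    // Turn an exclusive &mut into a shared &Cell that can be freely copied and
    // mutated through for the duration of the original borrow.
    fn bump_twice(x: &mut i32) {
        let cell: &Cell<i32> = Cell::from_mut(x);
        let a = cell;
        let b = cell; // aliasing shared handles to the same value
        a.set(a.get() + 1);
        b.set(b.get() + 1);
    }

    fn main() {
        let mut x = 1;
        bump_twice(&mut x);
        assert_eq!(x, 3);
    }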


You cannot turn a &T into a Box<T>, because &T borrows T, while Box<T> owns T, and moreover it holds it in a separate allocation, so even &mut T cannot be transformed into Box<T> --- it already lives in some allocated space and whatever there is a reference to, cannot be moved to a new allocation. For moving T you need T, not a reference to T. The case with UnsafeCell<T> substituted in place of T is just a special case.

UnsafeCell<T> also owns T, so transforming &mut T into UnsafeCell<T> also doesn't make sense. The unsafe equivalent of references is pointers.


C++ lets you easily delete a T* or T const*, Rust has https://doc.rust-lang.org/std/primitive.pointer.html#2-consu... I guess?

> UnsafeCell<T> also owns T, so transforming &mut T into UnsafeCell<T> also doesn't make sense.

I wanted to transform a &mut T into &UnsafeCell<T> (note the &) and copy the reference, to allow shared mutation scoped within the lifetime of the source &mut T. How can this be accomplished?


> C++ lets you easily delete a T* or T const*, Rust has https://doc.rust-lang.org/std/primitive.pointer.html#2-consu... I guess?

Box deletes the owned object when it goes out of scope without being moved (like unique_ptr in C++). So if anything, you want to go the other way: https://doc.rust-lang.org/std/boxed/struct.Box.html#method.f...

However, you can delete a *T by using <https://doc.rust-lang.org/std/alloc/trait.Allocator.html#tym...> with the Global allocator (since this is the one you're most likely using).
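For illustration, a rough sketch (mine) of the Box::into_raw / Box::from_raw round trip described above:

    fn main() {
        // Box::into_raw releases ownership as a raw pointer...
        let raw: *mut i32 = Box::into_raw(Box::new(41));
        unsafe {
            *raw += 1;
            // ...and Box::from_raw reclaims it, so the allocation is freed
            // when the rebuilt Box is dropped at the end of this block.
            let back = Box::from_raw(raw);
            assert_eq!(*back, 42);
        }
    }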

> I wanted to transform a &mut T into &UnsafeCell<T> (note the &) and copy the reference, to allow shared mutation scoped within the lifetime of the source &mut T. How can this be accomplished?

If you want to have two instances of one &mut T, you don't go through &UnsafeCell<T>. Instead you may cast &mut T into *mut T and then use this: <https://doc.rust-lang.org/std/primitive.pointer.html#method....>. This however will cast into any lifetime, so if you want to bind the two lifetimes together, then you need to have the lifetime of the original &mut T explicitly specified, and then you assign the result of the method I linked to a variable with explicitly specified type where you specify the lifetime annotation. Alternatively, you may write a separate function which accepts both references as arguments and binds the two lifetimes together the usual way.

I admit it's a bit unergonomic. The best way currently would be to have the data stored as UnsafeCell in the first place and then call get_mut() on it to get all the references. However, if this reference comes from outside, you're left with the little mess I outlined above.


These are different things. UnsafeCell<T> is for shared mutable data. *mut T is for data that you assert will never be mutated while it's being shared with some other reference, but you can't rely on the compiler to prove this fact for you.


If I have a &mut T, what pointer type do I convert it into (and create multiple copies of), to allow shared mutation scoped within the lifetime of the source &mut T?


You can have multiple *mut T aliasing to the same data, but if you mutate through them you're on your own wrt. Rust's safety rules.
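A minimal illustration of that point (my sketch; both raw pointers here are copies of one another, which should keep this particular case sound):

    fn main() {
        let mut x = 1i32;
        let p1: *mut i32 = &mut x; // &mut T coerces to a raw pointer
        let p2 = p1;               // raw pointers can alias freely
        unsafe {
            // The compiler no longer checks these writes; soundness is on you.
            *p1 = 2;
            *p2 += 1;
        }
        assert_eq!(x, 3);
    }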


Thank you for the resources, I'll look into these details in more depth!


I’m sure it is not the answer you want to hear, but partial use of GCs seems to be exactly that. Modern GCs have insanely good throughput OR latency.

Quite a few languages have value types now; with those you can restrict your usage to stack allocations for the critical hot loops, while low-latency GCs promise fewer pauses than the OS itself, which should be plenty good for even the most demanding games.


Hey, I'm open to any answer that helps me write better programs. :) Which languages do you have experience working with in high-performance situations? I, for one, had high hopes for using Go for video game development, but it turns out that even in highly-tuned Go code with the latest GC optimisations, there are still significant GC pauses that cannot be overcome [0]. However, perhaps you're referring to other types of GCs I'm not aware of?

[0]: https://ebiten.org/blog/native_compiling_for_nintendo_switch...


I don’t have much experience with C#, but currently that seems to have the best balance of control over allocations and a performant GC due to having value types (and pointers as well if I’m not mistaken?)

But regarding GC, Java is unquestionably the king in that aspect: throughput-wise G1 is unbeatable, and its relatively new ZGC might be of interest to use. It is the one I thought about previously; it currently promises sub-millisecond max pause times, and this pause time doesn’t grow with heap size. Unfortunately Java doesn’t have value types yet, so you either write your hot loops with only primitives and allocations you make sure get optimized by the escape analyser, or do some manual memory management with the new Panama APIs, which are quite friendly in my opinion.

EDIT: Just read your link, while Java can be AOT-compiled with GraalVM, only the enterprise version supports some of the more exotic GC-variants (though not sure about ZGC). It should be free for personal use, but do have a look at it. Though what I wrote concern mostly running code with the JVM.


Yep, worth noting that there are a number of actual games that use MonoGame / FNA, including low-latency platformers like Celeste. I've actually found games written in these engines to be among the best performing games all around on old hardware.


Java's ZGC as of jdk 17 has very low pause times (e.g. a p99 of roughly 0.1 ms in this benchmark[0]). Their stated goal is to remain sub 1 ms, but in practice it stays well below that.

The JVM isn't the most common game dev platform, but I have been enjoying using LibGDX with Scala on jdk 17 with ZGC.

[0] https://kstefanj.github.io/2021/11/24/gc-progress-8-17.html


I use Pony (https://ponylang.io/) as a language - it's an actor-based language with GC where every actor has its own memory and is responsible for its own GC.

The main feature is its ability to safely share or move data around between actors in a way that is data-race and deadlock free. Pony doesn't use locks anyways :-)

A high level as to how it achieves this:

i. All variables have a "reference capability" - which is a description of what you can and cannot do with the variable alias (pointer) you have.

ii. The references to the data (think pointer) are passed in messages.

iii. As references are made and removed, actors send messages to the originating actor updating the count of references. When the count reaches zero, that data can be GC'd.

It's nice, because unlike some other language runtimes, it doesn't have to stop the world to work out what can and can't be GC'd. It's all done on-the-fly as it were.


Hi, this article says that I could overcome GC pauses. Thanks.


> while Rust does bring significant advances in this field, it's a very large and complicated language.

The gist of Rust on this is fairly easy; it is the heritage/chasing of C++ that makes Rust complicated.

A "Rust as simple as Pascal/C" has potential, and I bet 1 billion the borrow checker will not make it hard to use (check https://vale.dev)


> A "Rust simple like pascal/c"

This has been tried, see Cyclone. It was a lot less practically usable than modern Rust.


> C++ definitely isn't it, and while Rust does bring significant advances in this field, it's a very large and complicated language.

It really isn't. Not compared to C++, at least. Or to managed language runtimes, which are just as "large and complicated", only beneath the hood.


Putting things "below the hood" with as few leaks as possible is one of the key ways of managing complexity. So if a language can do this for a certain set of use cases then it's worth using for those use cases. Everything becomes quicker and more productive. There's a reason that few people nowadays write the server side of web applications in C++. Rust isn't a huge improvement for that use case compared to a managed language.


Rust is definitely as complicated as C++. However its complexity isn't as big of a deal because it's so much safer. If you forget one of the extremely complicated Rust rules you'll get a compile error. If you forget one of the extremely complicated C++ rules you hopefully will get a compile error. Maybe a warning with `-Wall` or maybe you'll silently get memory errors at runtime!


Rust is a complicated language, but I don’t think it reaches C++ levels of complexity. One of the pernicious aspects of “mastering” C++ is understanding all of its leaky abstractions; there’s nothing like SIOF or SFINAE in Rust.


It definitely does. I think a lot of people think it doesn't because a) most people who know both have far more experience with C++ and have yet to experience its really complicated bits, and b) C++ has an actual specification so you can read about all its complexity.

Rust may not have SFINAE but C++ doesn't have for<'a>, Phantom data or Pin.


> Rust may not have SFINAE but C++ doesn't have for<'a>, Phantom data or Pin.

I'll grant you PhantomData, but I'd argue with the other two. C++ does have lifetimes and pinning semantics, they're just implicit and (largely) taken for granted. That is, until you cause memory unsafety with either.

IMO, the overall pattern between C++ and Rust is that "advanced" use requires many of the same skills between the two, but that (1) Rust is much better about avoiding "advanced" use, and (2) Rust forces the user to be much more explicit about what they actually mean (cf. lifetimes and pinning). These are arguably more complex than what C++ does, but only in the sense that C++ amortizes that complexity in blood.


> C++ does have lifetimes and pinning semantics, they're just implicit and (largely) taken for granted.

C++ does have pinning, but it is much rarer than in Rust because of user-defined move constructors.


It's certainly much more complicated than Go, and not just on the surface


But they are nowhere near the same niche. Go is much much closer to JS than to Rust by design, it just mimics being lower-level.

System level programming almost by definition requires quite a bit of complexity, and you can’t hide it no matter how elegant your abstraction is. Essential complexity is non-reducible.


I would assert that writing compilers, linkers, hypervisors, unikernels, GPU debuggers, cloud infrastructure, is system level programming.


Well, that's because Go, as per design, is quite limited. Its type system is lacking things Rust has. The languages have (had?) different goals. You can see one past example in generics. How long it took to finally drag the Go developers to implement them, coming to recognize their usefulness, instead of sticking to the rather limiting "No we want the language to be very simple so that everyone can understand and use it." attitude. Rust has been designed with that safety aspect as one of its primary goals and that will incur some cost in being less simple.


Maybe be less zealous and aware of your assumptions?

Your assumption is that memory safety has to be baked into the language. It could be baked into proof assistants that are part of (optional or add-on) tooling, like what seL4 does. A simpler language makes this more possible, and the things that proof assistants can do go far beyond what Rust is able to provide, without sacrificing compilation speed or other forms of optimality (e.g. avoiding the heap) when you don't want or need such a high level of security guarantee (e.g. writing a CLI tool that never sees the internet).

As for me, I'm terrified that Rust's complex macro system will hide/obfuscate discovery of other forms of security regression, like timing or energy side channels.


My assumptions are based on study and experience. My "Zealotry" is just a desire to reduce harm to users in an area that I personally believe we should strive not to regress on.

I'm not interested in discussing Rust. Frankly, I'm sure there will be plenty of other people already doing so.

What's clear from this thread is that Hare does attempt to move the needle, relative to C, with regards to safety. My opinion is that that's not enough for the use cases they're targeting, but I suppose it's really up to whoever's writing the software to decide that.


> A simpler language makes this more possible

This is false. Rust’s borrow checker is nothing but an included proof assistant for Rust code. The reason it can catch so many memory issues and data races is specifically due to a more restricted language. Also, seL4 is a relatively tiny program which was developed over an unusually long time by domain experts. Formal verification simply doesn’t scale to global properties; that’s why some restrictions are useful, as they allow local reasoning instead.

For a more hands-on example look at the quality of auto-complete in case of Intellij’s Java vs a dynamically typed language. This night and day difference in quality is yet again possible due to what the language can’t denote.

Re Rust macros: I don’t get your point; AFAIK they simply expand to regular old Rust code and then get compiled, so the exact same safety guarantees apply.


> Formal verification simply doesn’t scale to global properties

This is an assertion you are making with absolutely no evidence, and also totally self-contradictory with your statement "Rust’s borrow checker is nothing else but an included proof assistant for rust code".

While we're at it, also Ada does this, which has long been used for large scale mission critical applications where formal assurances are necessary (with even more available optional safeties than rust provides).


I meant to write that it doesn’t scale to every global property, due to Rice’s theorem.

And I don’t believe my claim is unsupported: the largest formally verified program is the aforementioned seL4, which is still tiny compared to even the smallest of business apps and was written by domain experts over multiple years.

Restricting a problem to a subset is like the numero uno step to solve any hard problem - and this is basically what Rust mandates. It won’t provide bug-free programs, but it can reliably prove the absence of memory bugs and data races due to the borrow checker, which can do its work at function scope, since all the relevant information is encoded in the function’s generic lifetime arguments.


These claims aren’t contradictory when you understand the domain in question: formal verification of C abstract semantics doesn’t scale particularly well. Rust (and Ada) both have restricted memory semantics (particularly around aliasing) that effectively eliminate some of the hardest problems in whole-program static analysis.


Formal verification doesn't scale to global properties in the general case. Global properties that are simple and type-like (in that they match the syntactic structure of the program in a fully "compositional" way, like Rust lifetimes) can be checked with comparable ease. Complex properties can often be checked within a single, self-contained, small-enough program module. Trying to do both at the same time - check complex properties globally - is highly problematic. That's why the Rust borrow checker has to make simplifying assumptions, and use 'unsafe' as an escape hatch.


I'm not really convinced that this is true. I think you're brushing up against Rice's theorem, which says that proving arbitrary properties about arbitrary programs is equivalent to the halting problem. That's why we constrain languages with type systems, which prevents a typed language from expressing arbitrary Turing-complete programs.

Proof assistance is sort of irrelevant. Types and proofs are related, as shown by the Curry-Howard correspondence.

The real issue with "throwawaymaths"' point is that they're saying "use proof assistants" to people who are using proof assistants. seL4 is a terrible example of a success story, as it took ages to complete, and then there was immediately a bug found in a class they weren't looking for - because of Rice's theorem.

They're clearly advocating for the use of specific and explicit proof assistants, which is fine and a totally reasonable thing to advocate for, but in no way is related to rust or the discussion, which is why I chose not to engage.


> > discovery of other forms of security regression, like timing or energy side channels.

> so the exact same safety guarantees apply.

I see this a lot with Rust, and with encryption. There are no magic bullets. There is no "you are safe because you used this" tool.

In this case, Rust's safety guarantees apply only to memory safety.

GP wasn't talking about memory safety above.


Sure, and I do share your concern regarding overambitious claims about Rust’s safety benefits, but I just don’t get why a macro would hide these any more than, let’s say, another function call would.

I don’t mean to say that a macro can’t get needlessly complex, but the same is true of functions that are inherent in basically any language. In the worst case macros can be expanded and looked at in their full forms. They are as always abstractions, which are pretty much necessary, but they can be abused as well.


> It could be baked into proof assistants that are part of (optional or add-on) tooling, like what sel4 does. A simpler language makes this more possible, and the things that a proof assistants can do go far beyond what rust is able to provide

I’m actually doing my PhD on the verification of Rust programs and wanted to add that the opposite is actually true. The type system of Rust helps to create a much simpler model of the language which allows us to do proofs in much larger scale than with C (for the same effort). This is specifically because of how ownership typing helps us simplify reasoning.


"Optional" means unused or misused until proved otherwise. No checking and no guarantees can be assumed, and if someone tries to deploy an add-on proof assistant I'd expect managers to see programmers who waste time pursuing some warnings instead of making progress on new features.


This is wholly unimaginative; there is a wide window of usage patterns that are not "unused or misused": for example, on by default but off with a project flag, or off when building a dev release but on when building a prod release. And don't forget my point that many projects simply don't need the level of memory safety that Rust provides - for example, if you are single-threaded and never free, or if you have an arena strategy.


Reducing concurrency and/or dynamic memory management makes a program easier to reason about for the compiler (and more likely to be correct in practice), not less in need of correct memory management.

I'm "wholly unimaginative" about what variables can be acceptably corrupted; I can only think of deliberately reading uninitialized memory as a randomness source, a case that is easier to prevent (by clearing allocated memory by default on the OS side) than to enable.


There are plans to research an optional borrow checker for Hare. Hare also does offer many "safety" advantages over C: checked slice and array access, exhaustive switch and match, nullable pointer types, less undefined behavior, no strict pointer aliasing, fewer aggressive optimizations, and so on. Hare code is much less likely to have these errors when compared to C code.

I would ultimately just come out and say that we have to agree to disagree. I think that there is still plenty of room for a language that trusts the programmer. If you feel differently, I encourage you to invest in "safer" languages like Rust - but the argument that we're morally in the wrong to prefer another approach is not really appreciated.


I think a section on safety might be worthwhile. For example, Zig pretty clearly states that it wants to focus on spatial memory safety, which it sounds like Hare is going for as well.

That's certainly an improvement and worth noting, although it obviously leaves temporal safety on the table.

> but the argument that we're morally in the wrong to prefer another approach is not really appreciated.

Well, sorry to hear it's not appreciated, but... I think developers should feel a lot more responsibility in this area. So many people have been harmed by these issues.


> I think developers should feel a lot more responsibility in this area.

I think most programmers would agree with that sentiment. Getting everyone to agree on what is "responsible" and what isn't however...

Hare is a manifestation of the belief that in order to develop responsibly, one has to keep their software, and their code, simple.

An example of what I mean by this: An important feature of Rust is the use of complex compiler features in order to facilitate development of multithreaded programs and ensure temporal safety. In Hare programmers are encouraged to keep their software single threaded, because despite features like Rust's, concurrent programs turn out much more complex to write and maintain than sequential ones.

Keeping software single-threaded also eliminates many ways in which a program could fail due to lack of compiler enforced temporal safety.


Single threaded development seems a noteworthy goal, and I partially agree that it often leads to much simpler code and works well in systems like Erlang. But it is also a questionable focus in the days of barely increasing single core performance, especially in a systems language.

I believe one of the reasons Rust got so popular is that it made concurrency much easier and safer right at a time where the need for it increased significantly.

If that is the recommendation, maybe the standard library could focus on easily spawning and coordinating multiple processes instead, with very easy to use process communication.


Unfortunately you can't make things faster by making them concurrent, at least not in the way computers are currently designed. (And they're probably designed near-optimally unless we get memristors.) In my experience it's the opposite; you can make concurrent programs faster by removing it, because it adds overhead and they tend to wait on nothing or have unnecessary thread hops. And it makes them more power efficient too, which is often more important than being "faster".

Instead you want to respect the CPU under you and the way its caching and OoO instruction decoding work.


It sounds like Hare has a philosophy around safety here and it's just not documented - at least not where I could find it, scrolling around a bit.

I did find this page, though: https://harelang.org/blog/2021-02-09-hare-advances-on-c/

Which I found very interesting.


That’s just not realistic. We live in a complex world, and we need complex software. That’s why I vehemently hate these stupid lists: http://harmful.cat-v.org/software/

Don’t get me wrong, it is absolutely not directed at you, and I absolutely agree that we should strive for the simplest solution that covers the given problem, but don’t forget that essential complexity can’t be reduced. The only “weapon” we have against it is good abstractions. Sure, some very safety critical part can and perhaps should be written in a single-threaded way, but it would be wrong to not use the user’s hardware to the best of its capability in most cases, imo.


In a world where 99% of software is still written to run on 4-16 core machines and does tasks that any machine from the last 10 years could easily run on a single thread if it were just designed more simply instead of wasting tons of resources...

I'd wager that most of the applications that need to be "complex" in fact only haven't sorted out how to process their payloads in an organized way. If most of your code has to think about concurrent memory accesses, something is likely wrong. (There may be exceptions, like traditional OS kernels).

As hardware gets more multithreaded beyond those 16 core machines, you'll have to be more careful than ever to avoid being "complex": when you appreciate what's happening at the hardware level, you'll start seeing that concurrent memory access (across cores or NUMA zones) is something to be avoided except at very central locations.

> essential complexity can’t be reduced

I suggest looking at accidental complexity first. We should make it explicit instead of using languages and tools that increasingly hide it. While languages have evolved ever more complicated type systems to be (supposedly) safer, the perceived complexity in code written in these languages isn't necessarily connected to hardware-level complexity. Many language constructs (e.g. RAII, async ...) strongly favour less organized, worse code just because they make it quicker to write. Possibly that includes checkers (like Rust's?) because even though they can be used as a measure of "real" complexity, they can also be used to guardrail a safe path to the worst possible complex solution.


> I suggest looking at accidental complexity first. We should make it explicit instead of using languages and tools that increasingly hide it.

The languages that hide accidental complexity to the greatest extent are very high level, dynamic, "managed" languages, often with a scripting-like, REPL-focused workflow. Rust is not really one of those. It's perfectly possible to write Rust code that's just as simple as old-school FORTRAN or C, if the problem being solved is conducive to that approach.


I don't see it as an improvement, because they are taking what Modula-2 already offered in 1978 in terms of systems programming safety.

I guess not having to type keywords in caps (because who uses IDE/smart editors) and having a C like syntax is the selling point.


I don't really want to engage with the RESF. We have the level of safety that we feel is appropriate. Believe me, we do feel responsible for quality, working code: but we take responsibility for it personally, as programmers, and culturally, as a community, and let the language help us: not mandate us.

Give us some time to see how Hare actually performs in the wild before making your judgements, okay?


I'm a security professional, and I'm speaking as a security professional, not as an evangelist for any language's approach.

> Give us some time to see how Hare actually performs in the wild before making your judgements, okay?

I'm certainly very curious to see how the approach plays out, but only intellectually so. As a security professional I already strongly suspect that improvements in spatial safety won't be sufficient to change the types of threats a user faces. I could justify this point, but I'd rather hand wave from an authority position since I suspect there's no desire for that.

But we obviously disagree and I'm not expecting to change your mind. I just wanted to comment publicly that I hope we developers will form a culture where we think about the safety of users first and foremost and, as a community, prioritize that over our own preferences with regards to our programming experience.


I am not a security maximalist: I will not pursue it at the expense of everything else. There is a trend among security professionals, as it were, to place anything on the chopping block in the name of security. I find this is often counter-productive, since the #1 way to improve security is to reduce complexity, which many approaches (e.g. Rust) fail at. Security is one factor which Hare balances with the rest, and I refuse to accept a doom-and-gloom the-cancer-which-is-killing-software perspective on this approach.


You can paint me as an overdramatic security person all you like, but it's really quite the opposite. I'd just like developers to think more about reducing harm to users.

> to place anything on the chopping block in the name of security.

Straw man argument. I absolutely am not a "security maximalist", nor am I unwilling to make tradeoffs - any competent security professional makes them all the time.

> the #1 way to improve security is to reduce complexity

Not really, no. Even if "complexity" were a defined term I don't think you'd be able to support this. Python's pickle makes things really simple - you just dump an object out, and you can load it up again later. Would you call that secure? It's a rhetorical question, to be clear, I'm not interested in debate on this.

> I refuse to accept a doom-and-gloom the-cancer-which-is-killing-software perspective on this approach

OK. I commented publicly that I believe developers should care more about harm to users. You can do with that what you like.

Let's end it here? I don't think we're going to agree on much.


> There is a trend among security professionals, as it were, to place anything on the chopping block in the name of security.

I really have to disagree on this, in spite of not being a security professional, because history has proven that even a single byte of unexpected write---either via buffer overflow or dangling pointer---can be disastrous. Honestly I'm not very interested in other aspects of memory safety; it would even be okay if such an unexpected write reliably crashed the process or equivalent. But that single aspect of memory safety is very much crucial and disavowing it is not a good response.

> [...] the #1 way to improve security is to reduce complexity, [...]

I should also note that many seemingly simple approaches are complex in other ways. Reducing apparent complexity may or may not reduce latent complexity.


History has also proven that every little oversight in a WordPress module can lead to an exploit. Or in a Java logger. Or in a shell script.

And while a WordPress bug might "only" lead to a leaked user password database rather than a complete system compromise, which one is actually worse is a valid question that varies from case to case.

Point is just that from a different angle, things are maybe not so clear.

Software written in memory-unsafe languages is among the most used on the planet, and could in many cases not realistically be replaced by safer languages today. It could also be the case that while bugs-per-line might be higher with unsafe languages, the bang-for-buck (useful functionality per line) is often higher as well (I seriously think this could be true).


Two out of your three examples are independent of programming languages. WordPress vulnerabilities are dominated by XSS and SQL injection, both of which are natural issues arising at the boundary between multiple systems. The Java logger vulnerability is mostly about unjustified flexibility. These bugs can occur in virtually any other language. Solutions to them generally increase complexity, and Hare doesn't seem to significantly improve on them over C, probably for that reason.

By comparison, memory safety bugs and shell script bugs mostly occur in specific classes of languages. It is therefore natural to ask new languages in these classes to pay more attention to eliminating those sorts of bugs. And it is, while not satisfactory, okay to answer in the negative while acknowledging those concerns---Hare is not my language after all. Drew didn't, and I took great care to say just "not a good response" instead of something stronger for that reason.


> #1 way to improve security is to reduce complexity

If managing memory lifetime is an inherently complex problem (which it is), the complexity has to live somewhere.

That somewhere is either in the facilities the language provides, or in user code and manual validation.


Imho, as everywhere in this field, there are tradeoffs to choose from when tackling this problem: complexity (Rust, formal proofs), runtime overhead (GC), etc.

Hare tries to be simple, so that it's easier to reason about the code and hence maybe find/avoid such bugs more easily.


For the last 50 years, from personal computer devices to nuclear power plants, the world is sitting on a pile of C code.

Why are you afraid of manual memory management?

The world is just doing fine with it, because, believe it or not, the world's worst problems like violence, war, poverty, famine, just to name a few, are not caused by C bugs.


How would I trust the programmers who implement automatic memory management not to make any mistakes?


"How would I trust the programmers who implement compilers that they are not goin to make any mistakes? I'll write the assembly myself, thank you very much."

It's much better to push logic from being manually re-written over and over by tens of thousands of different programmers, to being built into a language/tool/library by a team of a few hundred experts and then robustly tested.


> "How would I trust the programmers who implement compilers that they are not goin to make any mistakes? I'll write the assembly myself, thank you very much."

Nope, doesn't work. Then you have to trust that the programmer who wrote the assembler didn't make any mistakes. Real programmers program in octal.


> Then you have to trust that the programmer who wrote the assembler didn't make any mistakes.

This is true, but the actual solution is safer assembly. ARM is heading in this direction with PAC/BTI/CHERI. Intel, being Intel, tried adding some and it was so broken they removed it again.

It could go much further.


While any modern operating system is the living counter point, so far it's manageable.


They aren't a counterpoint at all. They're confirmation. Security-wise legacy operating systems (Linux, NT, ...) suck. New security vulnerabilities are discovered every week and month in them to the point that nobody actually considers these "multi-user systems" any more and obviously every box hooked up to the internet better be getting patches really frequently.


For the record I said manageable not great or perfect.


Every (popular) modern operating system sits on decades old foundations written in C that can't just be replaced, so that's not a particularly strong argument.

It's noteworthy that Google is financing the effort to bring Rust to the Linux kernel, that Microsoft is also investing in the language and that there are newer, production usage focused operating systems written in Rust. (eg Hubris [1])

[1] https://github.com/oxidecomputer/hubris


Redox is probably a better example than Hubris:

https://www.redox-os.org/

Hubris is intended to run on microcontrollers in a very low-level context (e.g. no display), so it is very unlikely to become a desktop / user-facing OS.

(I work at Oxide, mostly writing Hubris code)


I agree. It is quite clear that it is impossible to write large code bases safely with manual memory management. Even very small programs often have massive problems. I think many programmers are simply in denial about this.


I see Rust as a counterexample, serving as a formalization of provably safe patterns of manual memory management. I do wish it made it easier to write human-checked code the compiler cannot verify; unsafe code is so painful, with so many unnecessary pitfalls, that many people either write wrong-but-ergonomic unsafe code (https://github.com/mcoblenz/Bronze/, https://github.com/emu-rs/snes-apu) or accept runtime overhead (Rc and RefCell).


[flagged]


While Drew is the designer of Hare, a lot of us worked on Hare, and we tried really hard to create something useful and valuable. I think it would be a shame if you disregard it because of something that Drew said at whatever point in time. If you have the time, please try Hare and let us know what you think!

Aside from that, the question of memory safety is more complex than you make it out to be, and Drew, myself and others have discussed it in a lot of detail in this thread, Drew mentioned there are many memory safety features, potential plans for an optional borrow checker and so on — please draw your conclusions based on the full information.


I am sympathetic, but I agree with others that any new systems language in 2022 must have memory and thread safety with minimal escape hatches as its utmost priority and a core component of the language design.

Otherwise, what's the point? Yet another language that is a bit more convenient than the alternatives but doesn't do much to help with all the vulnerabilities and bugs in our software? We already have quite a few languages like that: Zig, Nim and D to name a few. (for both Nim and D GC is optional)

Rust is by no means the ultima ratio. It's complex and places a lot of immediate burden on the developer. I'm sure there are better solutions out there that don't sacrifice the safety guarantees. Particularly because Rust not only has the borrow checker but also a general focus on very type-system-heavy designs.

But it has proven to be a significant step up from the likes of C and C++, and the additional burden pays off immensely in maintainability and correctness. I can count the memory/thread unsafety issues I encountered in 5 years of Rust on one hand, and in each case the culprit was a C library. (either because of a bad wrapper, or inherent incorrectness in the library)

Memory and thread safety can't be retrofitted into a language without producing a mess, as can be seen by the discussions and efforts in D, C++ and to some extent Swift.

They need to be central to the core language and standard library.


> Rust is by no means the ultima ratio. It's complex and places a lot of immediate burden on the developer. I'm sure there are better solutions out there that don't sacrifice the safety guarantess.

There are undoubtedly better solutions than Rust, but they tend to allow for more developer-managed complexity rather than less! For example, one might imagine a cleaned-up variety of the GhostCell pattern, to decouple ownership- and borrow-checking from the handling of references. The principled extreme would be a language that just implements full-blown separation logic for you and lets you build up the most common patterns (including "ownership" itself) starting from that remarkably "simple" base.


I don't think that's really the case. I can't think of any new safe GC-free languages that are more "manual" than Rust, but there are at least a couple that use reference counting with compile time optimisations to make it not really slow. Koka looks the most interesting to me at the moment.


Making the Rust ownership model central to the core language and standard library has meant that after 11 years of Rust, your program still can't have two objects of the same type owned by different allocators. As a result, I am interested in other approaches to these problems.


Local allocators are being standardized. This has nothing to do with the Rust ownership model, C++ took a long time to standardize an allocator abstraction too.



> Otherwise, what's the point?

Hare seems to solve spatial memory safety and null safety, which I think is a big deal.


Better than C is certainly a step up, but in my opinion also solving temporal safety must be a goal for a new language in this space.


This is a very unsympathetic take. If you expand to the full context, you'll note that I weigh memory safety against other trade-offs, and come away with a different answer than Rust does.

Hare does have safety features. Checked array and slice access, mandatory error handling, switch/match exhaustivity, nullable pointer types, mandatory initializers, no undefined behavior, and so on. It is much safer than C. It just doesn't have a borrow checker.


> This summarizes everything one needs to know

Your comment comes off as very dishonest. How does this quote summarize "everything I need to know"? The quote you show appears to be out of context: I followed your link, and it appears to be the conclusion following a number of points explaining why, for the author, Rust's safety features are apparently not enough to counterbalance its flaws.

I don't fully agree with the points there (I enjoy Rust), but what does criticizing Rust have to do with "not understanding memory safety"?

Can you elaborate on why somebody has to appreciate Rust? Because I don't see it. Rust is not a religion and we shouldn't treat it as such.


I don't think it's fair to boil the language down to one person, or a person's views down to one statement.


Is appreciating Rust some kind of high ideal in the world of programming?


Yes, Rust is quite posh nowadays.

Just like Scala and F# a decade ago.

Or like ruby 15 years ago, or like Java in the 90s.


How is that statement an indictment against him? That’s his opinion. Does it make you a bad person to eschew Rust?


> Programmers can not be trusted with manual memory management. We have decades of proof, billions and billions of dollars of bug fixes and mitigation investments, real world damages, etc.

We have sanitizers if you are a bad programmer; use them if you don't trust yourself


It’s just ridiculous to call someone a bad programmer over memory bugs.

By your own words you are definitely a bad programmer if you have ever written more than 1000 lines of code in a low-level programming language, because there is simply no way you haven't made an error. You just don't necessarily know about it, which in my opinion makes you a worse developer, especially with this ancient and destructive mindset.


There is nothing wrong with being a bad developer, we are all bad developers if we don't understand what we are doing

We have the tools to eliminate memory bugs already

Forcing people to use a language with a built-in sanitizer/babysitter that runs every time you compile your code and as a result makes you wait 10+ minutes between each line of change is dumb

Expecting your code to be bug free because the code written by someone else told you so is also dumb

https://github.com/rust-lang/rust/issues/38899

who to trust in that case?

--

educate and trust developers, use tools to audit your code when needed


Sanitizers can only show you some memory problems that happen during a given runtime with given data.

And your 10+ minutes baseless assumption is just demonstrably false. In the default debug mode it is literally instant to human perception on my current, not-small project. Longer compiles only happen in release mode or when you introduce new dependencies.

Shall I link you to Valgrind bugs as well?


> And your 10+ minutes baseless assumption is just demonstrably false

Ok, not 10+ minutes, more like 6 minutes on a beefy machine:

https://www.youtube.com/watch?v=nR2WDBMjkh8&t=976s

> Shall I link you to Valgrind bugs as well?

See, you fail to understand the point


As was already stated, sanitizers catch a subset of issues and rely on test coverage. Sanitizers have been in use by projects for years, with lots of money spent on fuzzing and test coverage, and there are still numerous issues.
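
To make the coverage point concrete, here is a minimal, purely hypothetical C sketch (not anyone's actual code from this thread): a sanitizer such as AddressSanitizer only reports the bug on runs where the offending branch is actually exercised with the right input, so it proves nothing about the paths and data your tests never reach.

    #include <string.h>

    /* The overflow lives behind a branch. ASan flags it only when the
       program is run with an argument longer than 15 bytes; every
       shorter test input leaves the bug invisible to the sanitizer. */
    int main(int argc, char **argv) {
        char buf[16];
        if (argc > 1)
            strcpy(buf, argv[1]); /* possible stack buffer overflow */
        return 0;
    }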


Guess it is not the language for me. I am not going to switch to a Linux desktop to be able to use a general-purpose programming language.


Same boat.


The first thing I look for in something that claims to be a systems language is whether it provides anything that helps me out.

First and foremost, does it provide any primitive that can be used for automation? C++ brought destructors, Rust did a Drop trait. What do you have?

Second, does it provide any way to operate on types? C++ got templates, Rust offers generics. What do you have?

Third, does it provide useful compile-time logic?

If it lacks those, what does it bring to the table to make up for those gaping holes? This is not the '70s, or even the '80s. We have learned things since then. Have you?


> We have learned things since then. Have you?

We have learned to read the documentation before we judge things.


I have learned to check on some basic facts before committing a huge block of time to another pointless exercise. The world seems mostly to have learned to serve up way more pointless exercises.

One choice is to just ignore everything, which almost always produces the right answer. But that will miss the thing that would have been worth looking into. Another choice is to try ways to filter out chaff. What is not a possible choice is to dig into everything that might conceivably be interesting.

One thing we can always be sure of: if it is meant to appeal mainly to C coders, it will fall flat, because everybody who is still using C has seen a thousand languages go by and passed on all of them.


I filter it a bit differently. There are enough people who will check out new language X; I don't have to. If it has merit, I'll keep hearing about it. Sure, I'll miss out on using X the first year or two out of the gate, but I don't actually need that. Yes, if X is all it's cracked up to be, I could have benefited from using it in that year or two. On the other hand, I didn't waste time learning a bunch of new languages trying to find the one that would be more useful than what I already have.


That is the "ignore everything new" choice. You are pointing out its lack of downside. On many days I would agree. (But you didn't. You spent enough time on it for this.)

If in fact the language would have turned out to be interesting, helping out early might make the difference between its fizzling out or getting traction. And, helping out might be a chance to learn a lot, or to ensure it will scratch your itch.

In this case, for me, none of that seems likely. As usual.


> But you didn't. You spent enough time on it for this.

Fair, to a point. But writing an HN comment is a bit lower investment than learning a new language well enough to evaluate it fairly.

> If in fact the language would have turned out to be interesting, helping out early might make the difference between its fizzling out or getting traction.

You may have that kind of pull; I don't. Nobody is going to care whether I like a language or not.

> And, helping out might be a chance to learn a lot, or to ensure it will scratch your itch.

That's somewhat more likely than me helping it gain traction. And, in fact, the earlier the input, the more influence it has at bending the language in the direction you want, because there's fewer voices at that stage. But there are too many potential languages for me to do that with very many of them, so we're back at the problem of choosing which ones to invest in...

(I once did take your approach, with the C++ STL, when it was first announced on the newsgroups. It wasn't part of the compiler yet - it was a separate download. I found a bug in the initialization of the random number generator used to random-shuffle vectors. I, some random nobody on the net, emailed Stepanov and Lee, and got three fixes in the next two hours. I was amazed at the response. And the fix made it usable for what I was trying to do with it.

I think that's the only time I've invested in something brand new, though...)


> You may have that kind of pull; I don't.

I meant: one could do some of the work needed to make it ready for use. If not done, the language fizzles. Fizzling is the normal fate of any language, absent the miracle.

Hare looks a tiny bit better than C. That was D's problem: it was a tiny bit better than C++; now C++ is much, much better than what D had targeted. If you make Hare enough better than C to merit attention, it will be different enough for the C stalwarts to reject, but not powerful enough for C++ and Rust refugees to wash up onto.


This is an extremely selfish answer - by refusing to spend a few hours writing a front-page summary of your language, you're valuing the time of your m developers (where m is small) over that of the n developers who might want to evaluate the language (where n is large).

A language made by people who consciously make that tradeoff is almost certainly not worth even trying to learn. At least, if you're looking for a language that will actually act like a force-multiplying lever and save you time. Like, you know, programming languages are meant to do.


This is a gift to the world, so they aren't selfish. You are demanding, and you have no right to demand anything of random strangers on the Internet. Nobody is required to market their work as you demand, or even try very hard to seek new users. Often, slow growth is better.

Maybe someone not on the team will write a decent review of the language, and we will find out more. Hopefully that person will read the documentation and actually try out the language.


> You are demanding, and you have no right to demand anything of random strangers on the Internet.

In which case, the developers of this language are demanding that I spend my valuable time trying to read through their documentation to figure out if this language is good for me, in which case I reply: my time is also a gift and you have no right to demand that of me - I'm going to go and look at another language, and suggest that friends and fellow developers do the same.


Unless the front page points out at least something that is different or better about the new language compared to the alternatives, most people will not read the documentation.


Most systems don't merit reading the documentation; if it doesn't look markedly better, I'm moving on.


From my brief look at the documentation, the answers look like this: they have a defer statement much like Go's, they don't seem to have any form of generics, they don't seem to have any form of comptime logic.

They really need a section on the landing page to compare Hare to Zig. (And to C, for that matter.)


It seems that Hare has some restricted comptime logic. See section 6.8 of the spec [1]

[1] https://harelang.org/specification.pdf


> they don't seem to have any form of comptime logic

Or macros, not even textual C-style ones. Non-starter for me.


Macros were one of the biggest mistakes that C allowed, along with undefined behavior.

Not allowing macros means a more standardized reference implementation, which translates to more readable code overall.


Simple textual substitution macros are indeed pain.

Nicer, AST-level macros can be better, but can also be a pain (see Lisp, C++ for examples of either).


C replacements always seem to have noisier syntax, and not nearly enough motivation to switch from C. And if I could have been tempted with templates and a more intelligent type system I would have switched to a restricted C++ subset years ago.

And yes macros are dirty, but yet they're often far simpler than the alternatives, based on how languages like C++, Zig et al have attempted to reduce them.


This. I love my macros, even if many of them end up being replaced by code, and some of them only patch up shortcomings in the language.

There will always be shortcomings in any language, and there will always be situations that need to be fixed with a quick hack, even though that is not a long term solution.

And in almost any project I have a few lines of macro magic that almost completely fixes situations where I would otherwise have to resort to terribly complicated C++ templates and slow compile times.


This looks quite nice, actually. The negativity in the comments seems a bit heavy. I really like the idea of a language that is as (or almost as) simple as C but with many of the footguns removed.


Standard HN. Henry Ford and his faster horse would be spinning in their graves.

Another day another group of people criticising a language they’ve never tried.

If you paint a house grab a paint brush. If you need to hang a picture a hammer and a nail.

Writing a program? Pick the best language for the job.


How does this compare to Zig? They seem to share the same problem space.


Hare is much simpler than Zig and has a much different design; things like Zig's comptime are absent in Hare. Hare also has, in my opinion, a more fleshed-out standard library than Zig. However, they don't compete in some respects: Zig targets nonfree platforms like Windows and macOS, and being based on LLVM gives Zig a greater range of platforms/architectures OOTB.

Andrew might be able to expand on this.


Comptime is absolutely the biggest difference; in Zig you'll find high-quality, performant generic hash map implementations in the standard library. In Hare there are no generics and you are encouraged to write your own hash maps as needed: https://harelang.org/blog/2021-03-26-high-level-data-structu...


Gotcha. One thing that I really liked about Zig is its effortless interop with C. How does Hare compare in that regard?


Hare uses a superset of the C ABI and interop with C should be relatively straightforward. You can see an example with SDL2 here:

https://git.sr.ht/~sircmpwn/hare-examples/tree/master/item/s...

We have some planned improvements which will make this easier still. We'd like to improve the build driver's ability to link with C libraries via pkg-config, and automatically rig it up when you depend on C interop modules like hare-sdl2. Code generation to automatically provide bindings going either direction is also something we'd like to work on.

Zig goes, imho, a little bit too far. Hare has no intention of providing a built-in C toolchain, and does not treat C as more special than any other language for interop. It's a question of scope.


It's trivial for Hare programs to interop with C. All the programmer has to do is specify the libraries they want to link to.

Going the other way and calling Hare code from C is not that smooth yet, but there are plans to improve that in the future.


First off, congrats and thank you for putting this out there. It's a massive undertaking and I think the niche you are looking at is a valid one. At a superficial first glance, this looks very welcoming, and I'm definitely going to tinker with it. My expected use case will be embedded ARM, so I'll have to wait for support for that. And as many have expressed, my primary concern is safety and security, so tooling around proof assistants or formal verification is of interest to me. All of that takes time and community effort, I know. Good luck and thank you for giving the world years of your team's work.


I don't see any explanation of why this language exists.

Does it do something better than any other language?


My two cents: low-level programming languages (e.g. with manual memory management) are essential for tasks like writing operating systems, video games, real-time audio applications and other such things. There are few low-level programming languages available currently. The most used is C, and while it has its wonderful parts, C can also cause a lot of headaches and bugs. Whether C++ is a good replacement is a can of worms I won't get into.

Hare attempts to improve upon C while still remaining a very simple language, with much better error handling, support for arrays and slices, a rich but still minimal standard library, more concise and powerful function return values using tagged unions, improved memory safety, and other things you can find here:

* https://harelang.org/blog/2021-02-09-hare-advances-on-c/

* https://harelang.org/tutorials/introduction/

Disclaimer: I worked on Hare and this is just my personal opinion. I hope everyone can judge whether or not they like Hare for themselves by playing with the language!


Not trying to take anything away, but just off the top of my head: Zig, Rust, Odin, Beef lang, and then don't forget about those that even precede C, like Fortran or Pascal.


I think any discussion of small, performant, fairly memory-safe languages could include Forth or Factor. Speaking of the Pascal family, we could also stand for someone to put time into a more modern Oberon or a standard subset of Ada.


As someone who thinks that Forth is an underappreciated language that is very good at some things, it's definitely not "memory-safe". Small and performant, absolutely! (good for bootstrapping a memory-safe language, also yes)


I don't think Factor is actively maintained.


It's difficult to compare Hare to every other language project at once, but the rationale is ultimately the same: we think we can fill different niches than the others.

If I were to speak generally about Hare compared to other efforts, I would focus on its simplicity and stability goals. It's the only new language in this space that's arguably simpler than C, in my opinion, and the goal is to provide a small, stable foundation that can be depended on for a long time, much like C is. Unlike many other languages in this space, its trade-offs also tend to position it to do anything C can already do - Rust has famous issues with linked lists, ownership when linking to things like libwayland, and they have their hands full getting into Linux; Go's runtime and GC rules it out for many applications; and so on.


Could you share any of your thoughts on comparing Hare to Zig? Zig seems to have the most similar goals to Hare but I think Zig is already quite complicated.


Hare is much simpler than Zig. The Hare compiler is 1/10th the size of the Zig compiler. The standard libraries, which I reckon are pretty comparable in terms of features, are again separated by an order of magnitude in size. Zig is also (presently) based on LLVM, which heaps on another huge pile of complexity, whereas Hare is based on qbe: 13,000 lines of C89. Bootstrapping Hare is also significantly easier and much faster than Zig.

Hare's design is a lot different from Zig's as well. Hare lacks comptime and generics, and does not target non-free platforms like Windows and macOS.

However, the target audience and supported use cases for the two languages are similar. It mostly comes down to a matter of preference for most people.


> The standard libraries, which I reckon are pretty comparable in terms of features, are again separated by an order of magnitude in size.

I would be extremely surprised if this were the case, especially given that the Zig standard library is full of generics.


The example that hashes itself looks rather similar to Nim with the principal differences being that Hare is a little more verbose because it looks like you /have/ to fully qualify references to imported symbols and that Hare uses braces while Nim uses Python style indentation to delimit blocks.


Why do people keep choosing :: over simply .? How does adding more visual noise help anyone?


To distinguish between an instance and a namespace?

The same can be said for why we need to put parentheses after functions that take no parameters: so people still know it's a function.


But why not use a single character? With the double colon, something like foo::bar(xyz) looks like bar is “closer” to xyz, when in fact it's closer to foo.


A single ':' is used for other things, like casting types.

    let x: int = 0;
It's so that the compiler can distinguish between those two, and likely some other areas too.


A different character could be used, such as # (like in VimScript) or \ (like in… PHP?)


True, but :: is probably more readable than # or \. Design a language and you'll understand that there are always trade-offs.


I personally find :: very unreadable, for the very reason I mentioned.


Why do you want to do that? Accessing a member of a namespace is conceptually the same thing as accessing a member of an instance, and in many languages like Python or Zig it's literally the same thing too.


> To distinguish between an instance and a namespace?

I’m with OP here. C# proves that . reduces the noise substantially.


Only somewhat related, but when reading docs/comments that mention some method `class.method`, it's annoying when that's ambiguous and could mean either a static method or an instance method.

In the Ruby world, I think they have the convention of writing `class#method` in docs/comments when mentioning an instance method, and `class.method` when mentioning a static method.


Fun trivia fact: I tried to bring that convention over to Rust, but enough people didn't like it that in the end, we don't do that.


Interesting mention about the docs. I see what you mean. I have been doing C# for over a decade, and I guess that after a while you just become blind to this kind of stuff. I guess it never "really" mattered to me when coding, but I can see it as an annoyance on the documentation side. Perhaps Microsoft can do a better job of splitting static methods from instance methods in the docs.

i.e https://docs.microsoft.com/en-us/dotnet/api/system.console.w...


I think the question should be less about ergonomics and more about does the distinction matter.

I don't know enough about C# to know if it matters. From everything I know, it's a really well designed language, so I'm guessing they made the right call.

It doesn't mean it's the right call everywhere.


Are hare's tagged unions and pattern matching comparable to ADTs found in functional languages? I really miss them in C and would like a "better C" that offers them. Rust does, but of course it's a much bigger language.


Hare doesn't care much about mathematical purity, and the features it does have were chosen on the basis of what is useful in systems programming.

There are sum types (tagged unions), but product types (tuples) were a late addition and we still don't have support for tuple unpacking or pattern matching on them.

Tuple unpacking is something that's been put off for very long but looks like it is going to finally happen soon. Pattern matching on types other than tagged unions hasn't been completely ruled out yet, but is also not guaranteed to happen.


They are a subset, I suppose. I'm not an expert at functional programming so I couldn't say. The tutorial which introduces tagged unions is here:

https://harelang.org/tutorials/introduction/#tagged-unions-i...

They are also covered in section 6.5.18 of the Hare specification:

https://harelang.org/specification.pdf

Would be curious to hear your thoughts on how they compare given your (presumed) background in functional programming.


The biggest missing feature here would be a way to select on more than just the type, for example allowing something like

   type foo = (A int | B int)
Which from my cursory reading does not seem to be possible?


I'm not sure what you mean by "select by type". I can say that you can define new types:

   type a = int;
   type b = int;
   type c = (a | b);
Which seems to be what you're angling for here? I also suspect that you're approaching this from a Rust- or Zig-like background where enums and tagged unions are closely related; this is not so in Hare.


I do know those languages, but this is more the ADT way from ML.

The example you linked did not really make it clear from my point of view, because one of the issues is that the union will collapse if the same type appears multiple times. But your way seems to make it possible.


Tagged unions of free-floating types (polymorphic variants, C++ std::variant) and enum holding cases namespaced within themselves (Rust/Haskell) are two alternative designs. I personally prefer having both in a language. But if only one is available in a language, I prefer free-floating types, since it's more flexible and allows using the same type in multiple tagged unions, and you can somewhat emulate enums using tagged unions and namespaces (like I've done in C++ at https://gitlab.com/exotracker/exotracker-cpp/-/blob/eb8458b2...). If you take that approach, to prevent collapsing in the generic case when two type parameters are the same, you'd have to define newtypes for each type stored in a union.


Right - but defining a type alias makes a new type, which does not collapse.


Thank you, from the other replies it seems like they would be enough for my needs.

> Would be curious to hear your thoughts on how they compare given your (presumed) background in functional programming.

I'm just a student with an interest in programming languages and very little knowledge and experience, so I don't think I'll be able to provide any significant insight ;)


Why is opening a file in the "os" namespace but closing a file in the "io" namespace?


The I/O namespace provides an I/O abstraction that only knows about I/O operations, like read and write, rather than I/O objects.

https://docs.harelang.org/io

There's also an "fs" module which provides a filesystem abstraction:

https://docs.harelang.org/fs

The "os" module links these with the host operating system. It provides an implementation of the fs abstraction for the current working directory in the host filesystem, and also provides convenience wrappers like os::open, which calls fs::open with the host filesystem singleton.

https://docs.harelang.org/os#open


No metaprogramming? No thanks.

Metaprogramming is necessary in order to compensate for any missing language features; it doesn't add much complexity, and it's not here.


Just use m4 or gyb! It's the UNIX way.


> almost all programs written in C can also be written in Hare

Isn't that true of all languages?


I would struggle to write a bootloader in JavaScript or a database in Vimscript.


It's probably not true of crab language, at the very least.


Why?


Because the compiler won't let you, except under duress.


Let’s add that likely a significant chunk of those programs are full of memory bugs and/or data races. But with unsafe they are all possible to write. Though the reverse is not true, you can’t generally write SIMD-aware code in C without using inline assembly or some compiler extension.
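
For illustration, a rough sketch of what the "compiler extension" route usually looks like: explicit SIMD in C means vendor intrinsic headers (here x86 SSE via immintrin.h, which is not part of ISO C), rather than anything the standard language provides.

    #include <immintrin.h> /* x86 vendor intrinsics, not standard C */

    /* Add four packed floats at once; the intrinsics are lowered to
       SSE instructions. Portable ISO C offers no explicit equivalent. */
    void add4(float *dst, const float *a, const float *b) {
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(dst, _mm_add_ps(va, vb));
    }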


What exactly are you thinking of? There are very few things I can think of that rust cannot do when using unsafe (And one of the biggest is interacting with safe code ergonomically because of &mut/& guarantees like the noalias/restrict attributes)


I can't find any information about concurrency. Do you plan to support async/await?


No. We prefer a more traditional style with event-driven I/O, e.g. via unix::poll. This should be more familiar to C programmers than to those coming from higher-level languages with async/await constructs.

https://docs.harelang.org/unix/poll

You can see a small event-driven server example here:

https://git.sr.ht/~sircmpwn/himitsu/tree/master/item/cmd/him...

https://git.sr.ht/~sircmpwn/himitsu/tree/master/item/cmd/him...


FWIW, frequently I've wished that languages had a version of https://www.chiark.greenend.org.uk/~sgtatham/coroutines.html built-in for implementing iterators, like Python generators, or somewhat similar to async/await. Porting the macros to C++ suffers because switches can't jump past declaring local variables with required constructors, and my hacks around this issue were ugly and error-prone. I haven't learned C++ coroutines yet, but they seem confusing.
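
For readers who haven't seen the linked trick, a simplified C sketch of the idea (names abbreviated, details assumed from the article): the "coroutine" records a resume point in static state, and a switch jumps back to it on the next call, which is what makes hand-written iterators so much less painful.

    /* Simplified switch-based coroutine macros in the style of the
       linked article. Each call to next_value() resumes just after
       the last crReturn; persistent state lives in static storage. */
    #define crBegin      static int cr_state = 0; switch (cr_state) { case 0:
    #define crReturn(x)  do { cr_state = __LINE__; return (x); \
                              case __LINE__:; } while (0)
    #define crFinish     }

    int next_value(void) {
        static int i;
        crBegin;
        for (i = 0; i < 10; i++)
            crReturn(i);    /* yields 0, 1, ..., 9 across successive calls */
        crFinish;
        return -1;          /* exhausted */
    }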


We do not have first-class iterators, but we do have an iterator pattern which is common throughout the Hare standard library. As an example, here's how you can enumerate the nodes in a directory:

  let iter = os::iter("/")!;
  for (true) {
    const entry = match (fs::next(&iter)) {
    case let ent: fs::dirent =>
        yield ent;
    case void =>
        break;
    };
    fmt::println(entry.name)!;
  };


I understand not having first-class iterators; Rust traits encourage using option/iterator combinator methods rather than imperative code, which I find unclear in nearly all cases (though iterator reduce() is better at finding the min/max of a sequence, since you don't have to initialize the counter to a sentinel value). I was discussing the use of coroutines for building iterators, for example implementing the next function for a recursive filesystem glob or B-tree map iterator or database table join, which I find miserable and bug-prone to write by hand.


I see that Hare exposes itself via the hare module in stdlib. Is there any plan for metaprogramming/macro (not preprocessor) in Hare?


No, there are no plans for first-class metaprogramming. The purpose of exposing the parser, type checker, module system, etc, in the standard library is to allow for users to more easily build custom Hare tooling. The closest thing you could get to metaprogramming with this is some assistance with code generation.


That's unfortunate; some metaprogramming could be fun, and without generics and such it could even be quite simple and efficient.


Totally agree that metaprogramming is fun. But I don't think it's a recipe for simplicity and robust engineering, and that's the priority with Hare.


> Is there any plan for metaprogramming/macro (not preprocessor) in Hare?

We don't plan to add anything like that. Hare strives to be simple and straightforward. Metaprogramming would not fit it well.

Closest thing in this direction that we tried was reflection, but that didn't turn out to play along with the rest of the language either, so it was removed.


Can you explain how metaprogramming is not "simple and straightforward"?


Initial reaction is positive, but as with Blow's Jai, it seems like a programming language for the authors, and there's nothing particularly wrong with that; the ideological stance re FOSS probably excludes it from any general-purpose conversation, which is disappointing. Though nothing stops interested parties from forking to support proprietary platforms, although the outcome may be disappointing for the authors if such a fork picked up a zig head of steam.


I am very pleased to see this out in the open!

Looks like a very simple, yet readable little language. Reminds me of C's glory days.

I know that we already have a couple of "better C" languages out there, but Hare seems like it truly grasps the simplicity of its older cousin.

I love the focus on open-source only, by the way. It will keep the spirit of the language. I just hope that Drew will stay on this project for a while longer. A language with this kind of vision must persevere.

Good luck to Drew and the Hare team!


A big part of any new language taking off is a rich ecosystem. If you're looking for a lib to color your terminal output, consider mine? [0]

0. https://github.com/tristanisham/color


It looks like self-hosting is still WIP. Do I have that right?

  * https://git.sr.ht/~sircmpwn/hare/tree/master/item/cmd/harec
  * https://git.sr.ht/~sircmpwn/hare/tree/master/item/hare
  * https://harelang.org/blog/2021-03-14-a-self-hosting-toolchain/


Yes

There is one more significant (but not user-visible) change planned for the language before we continue working on that.


What is that?


Code dealing with tagged union matching is going to be completely rewritten and support added for match exhaustiveness checks and matching on pointers to tagged unions without copying the tagged union and its values.

The ability to do the latter is expected to result in a significant performance improvement and stack usage decrease.


Self-hosting is a very last-millennium form of wankage. Back then, a compiler was a pretty significant program, and compiler construction tools weren't mature.

Nowadays, self-hosting your compiler doesn't demonstrate much of anything. It takes a lot more than a compiler to prove anything meaningful about your language. Time spent on self-hosting is mostly just time wasted. It mainly suggests you were not really serious.

Make a front end for LLVM, and get on with things.


You are missing one subtle but important flaw in your dismissal: not self-hosting means that your compiler will only be developed by people willing to write C/C++ code, which has practical implications as well as taking away some of the punch from your effort to improve on the status quo.


Self-hosting will only distract you from any "effort to improve on the status quo". And, do you really want people working on your compiler who can't even cope with C++? (BTW: There is no such language as C/C++. Clang and Gcc are both coded in C++.)

Your new language desperately needs libraries that bring it up to a level of practical usefulness. A compiler front end is about the least-useful code you could write in it. Taking a long detour for that says something, but it is the wrong thing.


> do you really want people working on your compiler who can't even cope with C++?

That's the whole point of creating a new programming language.


If you are serious, you want the people working on your compiler to have been doing paid work, and to have demonstrated some capacity for abstract thought. Your language might not be meant for those people; Go and Java, famously, weren't, but demonstrated a language for "the rest" had a ready audience.


> Self-hosting is a very last-millennium form of wankage

You're reading way more into my comment than I wrote. I asked a question.


I'm not reading anything into it at all.


Tagged unions look simple and useful, but not very compact. As someone who has dabbled in scripting language interpreters, it would be nice to see built-in support for something like NaN-boxing if you're doing a union between pointer types, smaller integer types, and doubles. But I suppose you could do it yourself like in C.
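
In case it helps anyone reading along, a very rough C sketch of the idea (the names and tag layout here are made up for illustration): a quiet NaN leaves roughly 50 spare payload bits, so non-double values can be smuggled into an otherwise ordinary 64-bit double representation.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative NaN-boxing sketch: doubles are stored as-is, and
       other values are packed into quiet-NaN bit patterns. A real
       interpreter would also reserve tags for pointers, bools, etc. */
    typedef uint64_t value;

    #define QNAN    UINT64_C(0x7ffc000000000000)
    #define TAG_INT UINT64_C(0x0001000000000000)

    static value box_double(double d) {
        value v;
        memcpy(&v, &d, sizeof v);
        return v;
    }

    static value box_int(int32_t i) {
        return QNAN | TAG_INT | (uint32_t)i;
    }

    static int is_int(value v) {
        return (v & (QNAN | TAG_INT)) == (QNAN | TAG_INT);
    }

    static int32_t unbox_int(value v) {
        return (int32_t)(uint32_t)v;
    }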


The issue with representing them more compactly is that we'd have to make the rules much more complex and we'd also have to depart from the C ABI, so while this is something we might look into in the future, it's not very likely to happen.


Why have both switch and match statements?


Good question. The answer is that Hare does not have real pattern matching as you may understand it. The switch statement switches on values, and the match statement switches on types. Merging these faces two problems: one is that the grammar does not support types and values appearing in the same place, and the other is that it would imply either dependent types or a hack similar to them, which we're not really interested in. But I can see that on the surface, the distinction appears a bit arbitrary.


Lack of macros in recent systems languages like Zig/Hare/etc seems to be a design flaw, a handicap marketed as safety feature or bargain for clarity. If a language can't replace C macros, it cannot replace C.


C doesn't exactly have macros. That's a text substitution done by the preprocessor in its own limited, special-purpose language. True hygienic macros of something like Common Lisp are a different beast.

If all you want are textual substitutions, you can use the C preprocessor in front of any language.


C macros are as abstract as or even more abstract than hygienic macros https://github.com/BlueFlo0d/CSP https://github.com/FrozenVoid/C-headers/blob/main/argmanip.h


> you can use the C preprocessor in front of any language.

You will be paddling upriver if that language doesn't have a mostly C-compatible token structure, or has white space sensitivities that C preprocessing doesn't preserve and such.


The C Preprocessor has sensitivities that aren't matched by C in all cases. And in the case of Hare, the tokenisation should be close enough to C that the CPP would only break in the same ways it already does.


Common Lisp does have macros, but not hygienic macros.


Thanks for the correction.


What necessary use case of macros isn’t addressed by comptime in zig?


Zig comptime is the same as constexpr/consteval in C++, which operate on variables and valid code blocks.

C macros operate on arbitrary tokens, which get converted to code or constants. For example variadic functions can be implemented to act on token arglists that act as abstract tuples: https://github.com/FrozenVoid/C-headers/blob/main/argmanip.h
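
A tiny, made-up taste of that kind of token-level manipulation (nowhere near what the linked header does): counting arguments and mapping a macro over each of them.

    /* Count up to three arguments and apply a macro to each one;
       purely illustrative, the linked header generalizes the idea. */
    #define COUNT(...)              COUNT_(__VA_ARGS__, 3, 2, 1, 0)
    #define COUNT_(a, b, c, n, ...) n

    #define CAT_(a, b) a##b
    #define CAT(a, b)  CAT_(a, b)   /* extra level so COUNT expands first */

    #define MAP_1(F, a)      F(a)
    #define MAP_2(F, a, ...) F(a) MAP_1(F, __VA_ARGS__)
    #define MAP_3(F, a, ...) F(a) MAP_2(F, __VA_ARGS__)
    #define MAP(F, ...)      CAT(MAP_, COUNT(__VA_ARGS__))(F, __VA_ARGS__)

    #define DECLARE_INT(name) int name;
    MAP(DECLARE_INT, x, y, z)       /* expands to: int x; int y; int z; */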


I see. The parts of this header file that cannot be trivially ported to Zig are those that involve conditional evaluation at runtime. You can use comptime to include this sort of sub-language in the language (and this is how e.g. std.fmt works), but I'm not sure if one can achieve the level of tight coupling with the outer language that you have.

It's an explicit goal of the language that you can tell what the local control flow is by reading the code, but of course one is free to run the C preprocessor as part of the build if one disagrees.

I don't think C++ constexpr/consteval can be used to write things like generic json serializers and deserializers, which Zig does in the standard library using comptime, but I'm not sure.


Well, to the point of the submissions - Hare doesn't have anything comparable to C macros, zig comptime, or Lisp macros (and probably never will[1]), so it doesn't matter what flavor of metaprogramming you want - you're not going to get it.

[1] https://news.ycombinator.com/item?id=31152272


I can't really judge, I've barely tried Zig but... Maybe X-macros? Maybe compiler-specific attributes? Maybe reducing some syntactic clutter?
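
For anyone unfamiliar with the first of those, a small generic X-macro sketch (hypothetical names): one list drives both an enum and a parallel string table, so the two cannot drift apart.

    /* One definition list, two expansions: the enum and the name
       table are generated from the same source and stay in sync. */
    #define COLOR_LIST \
        X(RED)         \
        X(GREEN)       \
        X(BLUE)

    #define X(name) COLOR_##name,
    enum color { COLOR_LIST };
    #undef X

    #define X(name) #name,
    static const char *const color_names[] = { COLOR_LIST };
    #undef X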


Could you expand on this? I'm really curious why you think so. I have written and worked with C codebases that do quite well without using macros at all.


It is a general capability to modify C code at will, example: https://www.chiark.greenend.org.uk/~sgtatham/coroutines.html


It could be portability stuff, like compiler-dependent attributes.

It could be fixing shortcomings of the languages without a huge complicated system that needs to be built into the language (and if it isn't, you can't just switch languages).

It could be dirty hacks that abstract a problem at the syntactic level (this is the kind of macro that is most likely to be replaced).


You might want to learn more about Zig's comptime.


I don't think many Zig programmers miss C macros. Comptime is extremely powerful.


C macros are powerful enough to create entire functional languages at compile time, which is far beyond any constexpr functions. https://github.com/rofl0r/order-pp


If it's at compilation time, you have the full Zig language, so you certainly could implement your very own functional programming language https://github.com/igmanthony/zig_comptime_lisp in regular Zig rather than in an external preprocessor.


I do think comptime is worth a look for you, even if my comment elsewhere in this thread admits that it can't quite do everything you do with C macros.


Will wlroots move to hare? Or will bindings be written and maintained?


I have no plans to do either. I can't see wlroots being rewritten, but I could see someone writing bindings for it. Hare does not make C obsolete, there's no reason to rewrite old code imo.


Gotcha. Looking forward to seeing it develop and see what you build with it.


Question: is it easier to implement a language that has manual memory management or a GC, or does it not really make a difference these days?


Is there no RSS feed for the blog? I tried finding a link but failed to do so. Hopefully I am just blind.


There is one. The first link when you open the blog.

https://harelang.org/blog/index.xml


Wouldn't it be faster to write things in Tortoise?


Probably, but in Tortoise stack overflow is a real issue.


The Tortoise stack is infinite


It’s hard being a full-stack Tortoise developer.


But at least once you're there, it's Tortoise all the way down.



