Will Hare replace C? Or Rust? Or Zig? Or anything else? (harelang.org)
195 points by wicket on May 3, 2022 | 443 comments



I think if you want to compete with Rust, C, or Zig, you want to have a rich standard library. I still hold that Go's success is attributed to the fact that you could build a web application out of the box, minus database drivers. The templating and web server are built into Go itself.

I don't have to waste time evaluating web frameworks; I can just start coding a website with net/http right away. Oh, it's production-ready!? Amazing!
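To make that concrete, here's roughly what "out of the box" looks like (a minimal sketch; the route and port are arbitrary):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // One handler, one listener; no framework evaluation required.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from the standard library")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }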

I do wish Go had more things built-in like email and other protocols.

If I were maintaining or creating a systems language, I'd invest in the standard library being powerful out of the box. I really secretly wish Rust / D had a web server baked in sometimes.

All things said, this is the first I've heard of Hare, and it looks nice to me from what little I've looked at.


> I still hold that Go's success is attributed to the fact that you could build a web application out of the box, minus database drivers.

I attribute Go's success purely to it being pushed by Google.

If we use Rust as an example, it's super easy to include high quality libraries (just add a line in Cargo.toml and you're good to go).

Another example is Ruby and how it was made popular thanks to Rails, or how Elixir is making a splash in web dev thanks to Phoenix.


> If we use Rust as an example, it's super easy to include high quality libraries (just add a line in Cargo.toml and you're good to go).

Assuming a web server—Which library? Which version? How many commits does it have? When was the last one? How long will it be supported? Is it async? How many deps will it pull in? Will it be superseded by a fork? Does that fork have a different API? Will I have to bump the Rust version to match it in the future? Ok, I've gone to another team and their app uses a completely different library, what's the answer to all those questions again? Etc.

With Go, the answer is and always has been:

  include "net/http"
I love Rust, but its DIY libraries can be quite a barrier to overcome when starting a project.


I'm not quite ready to announce it yet, but I'm working on a site that will provide a curated guide to the Rust ecosystem. There are packages that are well supported and de facto standards, you just need to know which ones they are (in this case the answer is "axum").

In the future the answer may be go to <SITE-URL>, look up the HTTP server category and either read through a few one-line descriptions or just use the recommended package.


That is great, and as someone just starting to dabble with Rust I'd love a resource like that.

But: no matter how well-curated your site is, and no matter how quality those libraries are, and how well maintained, "batteries-included, in the stdlib, backed by a compatibility promise that the dev team is willing to uphold" carries some serious weight.


It does, but it's not without its downsides either. stdlib modules might end up being abandoned or deprecated because the design can't be changed and it doesn't work well anymore or there are unfixable security issues. Python's stdlib has plenty of such modules for example.

Considering that the Rust dev team is mostly volunteers, and many of the most important non-std-lib libraries in the Rust ecosystem are maintained by those same people, I feel like there's less of a distinction than is made out to be.


How do I get notified when this is available? Sounds great :)


If you email hn@my-hn-username.com then I will be happy to notify you. Otherwise I'll probably submit it here and on /r/rust. Whether it makes the front page, who knows.


As a sign that things might not be that easy, I'd say that the answer is rather "actix-web". Axum has tokio's momentum, but a much shorter track record than actix.


So "web servers" is a particularly competitive category and there are number of options that are all solid choices. For most categories I have 1 or 2 options with 1 recommended. For web servers I have 6: axum, actix-web, warp, rocket, tide, and poem. With a couple of sentences for each explaining why you might or might not want to choose that option.

For me Axum should be the default recommendation because it's the least quirky option. It's also a very thin layer on top of hyper, and built by the tokio team, both of which have a very long track record.

Actix-web is a fine choice, but it's a bit of an odd duck in things like not being based on hyper, which makes it a little harder to integrate with the rest of the ecosystem.


Thanks, I appreciate the detailed response. I do believe, though, that "least quirky", while definitely important, is not the deciding factor here.

Axum is very young (it was announced <1y ago); it has good momentum, is based on hyper, and is built by the tokio team. On the other hand, it has few actual projects built with it, and I couldn't find a guide for it like actix or even Rocket have (it has some documentation on docs.rs, but it is pretty minimal). Crucial questions like how to handle configuration, database integration, and autoreload are left to the examples at best.

I have no doubt that these things will come if the framework matures. I'm just questioning if it should be the default choice right now, as opposed to in a year.

(Btw, I would absolutely recommend against Rocket right now, even though API wise it clicked the best with me, until it gets more maintainers and development resumes)


That's definitely a fair point on the documentation for actix-web being better (and rocket, but I agree with you on Rocket's maintenance status being problematic). Once I've got my site into a launchable state, my intention is to open up contributions on the GitHub repo so that we can capture the wisdom of the community from all its diverse perspectives. I definitely don't have all the answers myself.


A curated list doesn't have to be "The Best(tm)" in every choice. It just has to recommend one of the top 3 for every choice and never recommend something that sucks.


The issue with Go is that "net/http" is rife enough with footguns that, for example, Cloudflare has an entire blog post on how to handle timeouts. The defaults will cause production issues.

https://blog.cloudflare.com/the-complete-guide-to-golang-net...
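For reference, the kind of tuning that post recommends (a sketch; the timeout values here are illustrative, not prescriptive) is to construct the http.Server yourself rather than relying on the zero-value defaults:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        srv := &http.Server{
            Addr: ":8080",
            // The zero values mean "never time out", which is the footgun:
            // a stalled client can hold a connection (and goroutine) forever.
            ReadTimeout:  5 * time.Second,
            WriteTimeout: 10 * time.Second,
            IdleTimeout:  120 * time.Second,
        }
        log.Fatal(srv.ListenAndServe())
    }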


I’m mostly an infrastructure guy. I mostly fix C or identify bugs, and write integrations in Go or Python.

But I spend probably 50% of my time dealing with production problems due to this in Java business applications.

Nearly every client library in the Spring ecosystem has the worst of all possible defaults, and all internet code examples are “look at how easy X is” without anything around making it real.

Devs chuck code out without HTTP connection pooling or pipelining requests, let alone reasonable timeouts. Hikari has an awful default DB pooling config, and on and on. Inevitably it gets reported as “latency” when a quick look at [APM tool] shows all the time is spent waiting on a connection, or a stack trace shows which library chucked the timeout.

And often devs just bump connection pools higher when they see a bottleneck and there’ll be 1000 idle connections wasting DB resources and not helping the problem (inefficient query, missing index, etc).

I complain because the defaults here are masochistic. But there does need to be a cultural change where using a client or server library assumes a tuning exercise. And yeah, good testing would be great too.

There are no 1-size-fits-all values. You have to look at the use case and the SLOs of the app and its up- and downstream dependencies. You don’t want optimizing a config in isolation to break a larger system either.

Software development is hard. It requires understanding. No language or library can solve that for this stuff. I just wish all of these libraries’ docs called that out up front instead of projecting plug-and-play. It’s all there, but buried in the godoc, where you have to know what you’re looking for. Others are worse.

This is especially the case with http libraries, since it’s usually building a client or server object/whatever correctly and not just a config tweak you can slap in at 2am.

And back to the original point, when there are 12 implementations that can mean having to relearn this activity for each one. That said, if I’m using Go, that usually means building something with net/http and handing it to the library to use instead of the default. Other languages without that base layer baked in don’t get that advantage.
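For example, that pattern looks something like this (a sketch; the helper name and the numbers are mine and would come out of the tuning exercise described above): build one tuned client up front and hand it to libraries, rather than letting them fall back to http.DefaultClient, which has no overall timeout.

    package main

    import (
        "log"
        "net"
        "net/http"
        "time"
    )

    // newTunedClient is a hypothetical helper; the values are illustrative.
    func newTunedClient() *http.Client {
        return &http.Client{
            Timeout: 10 * time.Second, // end-to-end deadline per request
            Transport: &http.Transport{
                DialContext: (&net.Dialer{
                    Timeout: 3 * time.Second, // connect timeout
                }).DialContext,
                MaxIdleConns:        100,
                MaxIdleConnsPerHost: 10, // stdlib default is 2
                IdleConnTimeout:     90 * time.Second,
            },
        }
    }

    func main() {
        client := newTunedClient()
        resp, err := client.Get("https://example.com/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }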

Back to Java/Spring: usually Catalina vs netty/Reactor Core are 2 totally different worlds and often dev teams don’t know which they picked or that they switched when they generated a new project and pulled their code in from Spring Boot 1.5 to 2.x.


When you write Go sticking to the standard lib like that, updating becomes as smooth as compiling with the new version and deploying. Really quite sweet.

You -CAN- go the route of using a well supported, well maintained library but I have seen some code rot on larger projects in the go code ecosystem for stuff that tried to include more batteries.


I'm on the opposite side here: the slim standard library is something I enjoy about Rust. I can pick and choose from a multitude of libraries with different trade-offs, strong suits and so on, without being locked into one single implementation.

I don't know what the guarantees for Go's standard library are, but Rust's are very strict: you'd better be damn sure the design is near perfect and thought through once something goes into `std`. There are already deprecations and design mistakes in Rust's standard library that will be there forever. Similar to how Python ended up with both `urllib` and `urllib2`, and others like it, I'm of the opinion that a big standard library is where good libraries go to die once the stability guarantees come into play; though being able to do so much when scripting with Python is certainly a boon.


On top of all those questions, how safe against supply chain attacks is the library?


> I attribute Go's success purely by it being pushed by Google.

At the beginning Google's support really made a difference, but the continued traction of Go is really the result of it being a great tool to build network servers where you want a high level of performance.


> [...] being a great tool to build network servers where you want a high level of performance.

This was the case from day one. Google strongly shaped the language's direction, and that has a continuing effect on Go's success. Put differently, if we somehow end up with a future without servers, Go will have a very hard time.


Had the Kubernetes team not rewritten it from Java into Go, it would surely have been a different outcome, especially taking into account the original Mesos ecosystem.


If you actually needed performance, you couldn't use a garbage-collected language.

Golang isn't really faster than any other language on the same playing field (compiled and with GC).


I think "high level of performance" encompasses a wider range of performance levels than you're thinking.

Go will be more than fine for the vast majority of servers people need to build. Sure, there are some things where Go might not fit the bill, but that doesn't really matter; Go is still super relevant for writing servers, just as Java (another GC'd language that some people call "slow") is too.


I completely agree, but the "high speed" claims by Go enthusiasts are really tiresome, because the language is not particularly faster than any other that's compiled and has GC.

There are very few languages which are categorically slower than Golang, and these are generally languages which explicitly accepted slow execution for ergonomic reasons, such as Ruby and Python.


> If you actually needed performance you couldn't use a garbage collected language.

This is incorrect; HFT platforms have been written in Java for example (https://news.ycombinator.com/item?id=24895395). But it's about either tweaking GC settings or mode, or avoiding using the heap entirely if you can.

Go is still slow if you (over)use the heap, but at least it gives you the choice and plenty of tooling to analyze it.
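For instance (a sketch of that tooling; the benchmarks are mine), `go test -bench . -benchmem` reports heap allocations per operation, which is usually where that analysis starts:

    // alloc_test.go — run with: go test -bench . -benchmem
    package alloc_test

    import (
        "bytes"
        "testing"
    )

    // Allocates a fresh buffer each iteration; -benchmem shows the
    // per-operation allocation count.
    func BenchmarkFreshBuffer(b *testing.B) {
        for i := 0; i < b.N; i++ {
            var buf bytes.Buffer
            buf.WriteString("hello, world")
            _ = buf.Len()
        }
    }

    // Reuses one buffer across iterations; allocations drop accordingly.
    func BenchmarkReusedBuffer(b *testing.B) {
        var buf bytes.Buffer
        for i := 0; i < b.N; i++ {
            buf.Reset()
            buf.WriteString("hello, world")
            _ = buf.Len()
        }
    }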


HFTs have the luxury of a 9am-4:30pm ET trading window and of memory being relatively cheap. But things like self-driving cars don’t have that luxury.

I really like the arena libraries you can use in Rust where you can have logic to free untouchable memory at a specific time.



Isn't it like... literally just that one HFT company that did this? I've seen it pointed at for years, but afaik the vast majority use C++.


No. There's one that made noise about it, but I personally know more than one other from working on their code. Not going to name anyone because I'm not sure they want their tech decisions publicized.

Also some performance-sensitive chunks of wall street itself run on java (depending on how you define "wall street itself" and "performance sensitive"; what I worked on was more "high throughput" than "no pauses").

TBF, I write a lot of java, so my perspective is probably biased, but I can assure you, it's out there.


Many companies did this in the past. Nowadays Java, Go, etc GCed languages are still used but not for the hot path anymore. They are a couple million times slower than what we need.


The usual cargo cult against automatic memory management systems.


Compared to Python, Ruby and Javascript, Golang is high performance. Compared to C, C++ or assembly language, you sacrifice ergonomics for a bit more performance.


https://www.techempower.com/benchmarks/

JS actually beats Golang on most metrics if you're aiming for synthetic "web" performance.


GC is actually a hard requirement for some problem domains such as anything making heavy use of general graphs. (One well-known example of that is GOFAI applications, which are commonly implemented in LISP.)


> GC is actually a hard requirement for some problem domains such as anything making heavy use of general graphs.

This is manifestly false. I used to write general graph analysis kernels on supercomputers. We used (old-style) C++ and never had an issue with memory management. Your assertion is assuming a naive implementation that would have offered poor performance regardless.


Yes, yes, but not everyone is running on a supercomputer. Sometimes you need to be able to reuse memory by freeing up unneeded objects.


You should be doing that with or without a GC… unless you’re running on a missile


There is no implication that memory is not being reused. You seem to be making assumptions about how graphs are represented in large-scale systems that are not actually correct in practice.


> I attribute Go's success purely by it being pushed by Google.

I would say that having Ken Thompson and Rob Pike's names attached was far more significant than Google. More than that, though, it was successful because it solved problems people had at the time. The solution space is much larger today, but at the time Go was quite unique in filling its niche.


It probably helped, but it is not at all the reason why it got that traction

For comparison, see Dart, which was pushed even harder by Google (I still remember when we were being inundated by "Dart is soon gonna replace JavaScript everywhere" posts/videos), and now it has not even remotely the userbase of Go; it's just a niche language for its specific framework, Flutter.


TypeScript managed to be more successful than Dart in the whole "compile-to-JS" space.


Politics: the team behind Dart dropped it on the floor, and most of the original team left Google.

It was only rescued by the AdWords team, and then the Flutter team picked it up.

The Android team is also not in love with it, and Jetpack Compose is surely a reaction to it.


> it's super easy to include high quality libraries

Which adds 100+ dependencies, many of them 0.x and constantly changing. I remember updates where I clicked through the docs of three different crates to decipher cryptic compiler errors. And don't get me started on getting started with Rust - Go is much easier to learn.


Dart is also a Google-conceived language, but it's not exactly what I'd call popular, despite my hearing only positive things about the language itself.

Unlike Swift or Kotlin that are heavily entwined with the main mobile platforms, nobody needs to use Golang for any particular reason. As far as I can tell its developer experience is pretty good so people have opted for it because it fits their needs.


This is a good example of why a big name doesn't guarantee adoption. Look at failed Microsoft development stacks too, and, hell, JavaFX had potential, but I have seen zero effort from Oracle in terms of UI libraries. It's a shame; if Java could produce a serious UI solution I would have crawled back long ago. I can only hope MAUI runs on Linux eventually.


> If we use Rust as an example, it's super easy to include high quality libraries (just add a line in Cargo.toml and you're good to go).

Is this a joke? How the hell would I know which out of dozens of libraries is complete, safe, and well documented without significant research?


Agree about Go and Google. With Google at your back, the question changes to how could you possibly fail? Not to say it's impossible, but is to say it's quite hard to do.


True. That's why I have never ever seen any failed product that is launched by Google.


I don't agree. Go being created by the Bell Labs legends, the same people who fathered C, Unix, UTF-8, Plan 9, etc., was what caught attention. Google's involvement was downplayed, and, in fact, at the time Google people were all like "We don't use Go or even like it."


I don't totally disagree with your point. Ken Thompson is a legend by himself. However, neither he nor the other creators made their own language without backing. We will likely never know if, by themselves, the Go creators would have had a hit. Even when it comes to a legend like Ken Thompson, he was still working at Bell Labs (AT&T), and had that huge company behind his work.

Go was created at Google, and prominent among the language's goals, is that it serves their purposes.


Connected to that, it's easier for Google (in tandem with YouTube) to destroy or inhibit competing programming languages as well. Quite easy for them to depress views, spread or emphasize misinformation, etc... The power of Google to influence how people see things, is not something to be underestimated, even for what might seem minor or beneath them to do.


Recently I've been writing a good amount of rust, looking for libs to help out, and have been pleasantly surprised at the depth of the Rust standard library at this point!

I started trying to use something like tokio for async stuff, but found that Rust's standard lib threading primitives were already nice and numerous enough to get a lot of work done.

The rust programming language book has a nice little project of building a multithreaded HTTP web server from scratch. Of course you don't necessarily want to do this for everything, but the fact that a lot of the primitives are present and with good abstractions is commendable!

Go's stdlib stuff for web servers is obviously much easier to get rolling though.

https://doc.rust-lang.org/stable/book/ch20-00-final-project-...


That's amazing! I have the Rust Book but never finished it all the way through, I'm definitely looking forward to this, I had no idea it takes full advantage of other packages.


It does not (well, historically it does not, I haven't checked it out in a while) use other packages. You build one yourself with the standard library. It's expressly about learning via a bigger project, not something you'd use in production. That'd be a waste, just use axum, and focus on writing your application, not on building a (very minimal and incomplete) http/1.1 implementation.


That's fine, it's the kind of project I've been meaning to do for myself, so wherever it leaves off at... chances are high I'll continue for my own personal learning.


Go has other things. Fast compile times. I can ship a binary, in place of Python, which requires a Python environment with all the modules I need, or a container, what have you.


I can't speak for Rust or Zig, but for C I don't think this is true. If anything it's the opposite -- the C community is constantly writing replacements and enhancements to the standard library. C lacks good support for modules, but has excellent support for libraries installed on the OS level.

The ability to have all the bells and whistles out of the box may be a lot of the appeal of Go, and is certainly a lot of the appeal of languages like Python. But it's not core to the ethos of C.


Go's standard library is quite far beyond C and C++ in terms of features. Neither C nor C++ as far as I know have a JSON parser, CSV parser, language parser (Go for Go, but C for C, C++ for C++), web server, crypto libraries, and so on.

Though C++ and Java both probably have more container/algorithm libraries builtin than Go does.


> Neither C nor C++ as far as I know have a JSON parser, CSV parser, language parser (Go for Go, but C for C, C++ for C++), web server, crypto libraries, and so on.

people already complain constantly that the C and C++ standard libraries are bloated though


Who has complained that the C standard library is bloated?

In C++ the standard library has several entire features that are regrettable and so just take up space (e.g. regex, yes regular expressions are useful, no the standard library implementation isn't what you want), as well as places where they kept thinking of new better ideas and now there's a trail of old busted stuff you shouldn't use in new projects.

So of course people are going to say that's bloated, it is.


> I think if you want to compete with Rust, C or Zig you want to have a rich standard library.

C's standard library is rather spartan and includes some security issues. What C does have is a) interoperability-friendliness, b) a vast ecosystem.

People should learn lessons from C++'s success and understand how important it is to be able to easily consume C libraries.


I think that Hare has a pretty rich standard library, though balanced with a constrained scope which prevents it from growing without bound. Note, though, that web programming is not really a domain that Hare is designed for, so we're not too concerned about being able to pick it up and write a website with it.

You can check out the stdlib here:

https://docs.harelang.org

Let me know if you think this strikes the right balance.


Personally, I find it hard to appreciate the stdlib when every identifier is optimized to use the least amount of characters. There's a module called `shlex` (what does that mean?), there are two types called `pwent` and `grent`, a type (can't find it again) has a size field called `sz`. Various types have single letter fields. There's a `chown` function under `os` and another one under `fs`. The "interface" for streams is defined by the very generically named type `vtable`. There's also a dedicated module just for creating a temp directory? There are dedicated modules for `linux` and `unix`, but then various os level types are defined in a catch-all module called `rt`.

Also, no data structures? I don't see maps, arrays, sets. No support for custom allocators (where's the `mem` module)?

The roadmap mentions TLS, SMTP, SQL and HTTP (no JSON?) but you're not too concerned with people wanting to use it for web development?


Many of these names come from Unix, and we chose not to make an opinionated renaming of something that most C programmers are already familiar with. Regarding rt - read the docs, it's a low-level module which is not meant to be used by most users.

Arrays and slices are built into the language, sets and maps are not. Much like C, though slices are new to Hare.

I like the concise names. It's a matter of taste.


shlex means it's like the Python shlex module.

pwent and grent have been called that for 46 years, though usually only in the names of functions: https://www.freebsd.org/cgi/man.cgi?query=getgrent&sektion=3

Similarly, chown. Or is your complaint that there are two different chown calls?

Trying to rename these things would make the library a lot harder to understand. Can you imagine if someone decided to call the USA "Samland" because they thought the three-letter identifier was too short? Would that reduce confusion?

I kind of agree about vtable, though.

Arrays are part of the base language, not a module. Not sure about sets and maps.


If the short identifiers sit within a namespace, it's not a big issue. This is not C. It's arguably preferable to having them be too verbose.


I really appreciate that Hare already has high-quality documentation (in terms of tutorials, language reference, standard library, etc.), which you usually don't get to see in other work-in-progress languages.

(BTW, https://harelang.org/documentation/ is the link for more general documentation about the language.)


Thanks!


Problem with that approach is best demonstrated with Python.

Standard library usually comes with strong backwards compatibility guarantees. So it becomes ossified over time. Leading to - standard lib is where libraries go to die.

Without a stability guarantee, you risk introducing breaking changes between versions, which is worse, and inferior to just downloading the lib with the most stars.


There are three approaches to standard libraries:

1. Include as much as possible. This has been Python's approach and is showing its limitations: you end up with lots of historical baggage you can't maintain. It took an eon to remove some dead batteries from the Python standard library [1].

2. Exclude as much as possible. This is the preferred approach for most newer languages, which can't readily determine at which direction the language would be heading. These languages tend to weigh more on third-party libraries with their pros and cons.

3. Pick use cases and design the standard library only for those use cases. It would mean that you have near-total control over the language's evolution, so you can include whatever you want without worrying about its long-term consequences (because you know a new addition will be necessary for your use case). This can often be seen in some smaller languages, but Go is in my opinion the only mainstream language using this approach, and that's only possible due to Google's involvement.

[1] https://peps.python.org/pep-0594/


With the benefit of being available everywhere CPython exists; third-party packages are hit and miss, and may require C or C++ compilers as well.


"I think if you want to compete with Rust, C or Zig you want to have a rich standard library"

I don't think Drew wants to compete with any of these languages. That's the point of this article.

Let's give this one some time before ruling it out right at the gate.


Then why is C# or F# not more popular? Microsoft ships a huge array of first party supported packages that include low level protocol libraries.

I have yet to work at a Silicon Valley startup that used C# or F#, though, despite it being a pretty solid "one language to rule them all" kind of choice, like Kotlin/Java and (to maybe a lesser extent?) JavaScript.

I respect the tools and the language Microsoft has put out there; I'm not saying it's objectively better or worse.

Especially F#, which is, to interject my opinion, an incredibly productive language on the same level as Go, and you get access to the rich .NET ecosystem out of the box.


To give a slight clarification (though I ask the exact same question every month in the Who's Hiring thread), I think "not more popular in the startup bubble" is more accurate.

As far as I'm aware Java and C# are still first/second by job listings in both the US and UK but in significantly less sexy enterprise settings. This might have changed since python became more widespread though.


I think .NET is too recently cross-platform and MS isn't yet doing enough to promote them.

I think C# and F# will get significantly more popular over the next 5 years.

And if this doesn't happen it will have been pure mismanagement of opportunity by MS.


Try east coast companies instead.


> I think if you want to compete with Rust, C or Zig you want to have a rich standard library. I still hold that Go's success is attributed to the fact that you could build a web application out of the box, minus database drivers. The templating and web server is purely built in to Go itself.

Rust doesn't seem to have that rich of a standard library; a bunch of the official documentation asks you to add crates for lots of things. I find myself reaching for external crates for tons of things I'd expect to be in the language as well.

I also don't think Go's success can be attributed to one single thing, but probably the biggest factor is that it is backed by one of the biggest tech companies in the world, which has adopted it wholesale and would do so no matter what, since they built and control it.


> I still hold that Go's success is attributed to the fact that you could build a web application out of the box, minus database drivers.

Maybe, partly, sure...

I attribute Go's success to the simplicity of the language. All successful languages tend to have one thing in common - they are extremely easy to read.


It also has some really unique meta aspects to it which are the appeal for me.

Forget about needing to decide on a web server library because there isn't one in the standard library. With C++ you have to decide which parts of the language itself you're going to use. Trying to learn it is incredibly intimidating, because you look up how to do something simple and there are 17 unique answers, with a million opinions on which is best and which will lead you into a nightmare of unmaintainable code.

Go was built on the principle of being incredibly picky regarding what features are added to it. It's best assumed that any new feature that's proposed to be added will never be added, and until it goes through a heavy pitching process and gets hard-earned approval, it won't be there because the language has worked well enough without it up to now.

Having the formatting forced into the source code is genius too. Whether or not I agree with the prettiness of every formatting decision, I'm just happy it's there, because it's fewer decisions I have to make, and no matter whose code I'm reading, I know it'll be formatted in the same way as everyone else's.


I think there must be more to it than that. Rather, I'm sure it's necessary but I'm not sure it's sufficient. C# has all of that built-in, and actually even more, but I don't see that it's enough by itself to become broadly popular for web development.


Someone said Go isn't really a programming language so much as a DSL for writing web services. I think they meant it as an insult to Go, but I actually think it's Go's big strength. That plus having native concurrency explains a lot of Go's success. Being backed by Google doesn't hurt either.


D's BetterC is a modern language that is very C-like and familiar, but with modern programming language sensibilities. We're currently working on improving its ability to directly import C code in order to provide frictionless access to existing C code. It's much like C++'s #include, except it uses modules instead of a preprocessor.

This way you won't need to give up on any of your existing, working, debugged C code.


> that Rust is probably the better choice for high-stakes use-cases such as life-critical software

I would say SPARK2014 or straight-up Ada for life-critical software, given the maturity and target markets. Rust is heading there, and there is some communication between AdaCore and Ferrous Systems (a Rust group)[1] taking place that gets me excited, but I will stick with SPARK2014 for now.

[1] https://blog.adacore.com/adacore-and-ferrous-systems-joining...


> Hare aims to be successful within its niche for the programmers that find its ideas compelling

Honest question, what are the compelling ideas? I really don't mean that with any snark. I haven't read through everything on the site, but so far I see:

- Uses the `let var: type = value` syntax

- Infers `type` from the value when it can

- Has arrays and slices of arrays

- Uses `defer` for cleanup

- Doesn't have a garbage collector

- Imports using `use` (vs `#include`)

- Module scoping with `::`

- Has `match` (not sure about patterns/destructuring)

- It looks like `yield` is a return from sub-expressions

edit: some more...

- tagged union types

- non-nullable types (opt in)

- utf-8 strings


To me, Hare's greatest strengths are in its simplicity, error handling, the standard library, and its documentation. Hare also has a culture of careful and deliberate engineering which values correctness, completeness, and predictability.

I asked the #hare IRC channel to add their own thoughts, will edit this comment with any answers:

> Simplicity.

> I think a strength is the plan for long term stability.

> It's simple and the compiler is tiny.

> First thing that comes to mind is error handling.

> The syntax seems simple to me, like c and go and unlike c++ and rust.

> I think a major strength is you can transfer your C expertise.

> You don't have to worry about your post-1.0 code breaking or becoming obsolete from new language idioms.

> Clear communication from the hare dev team about what the project goals are.


I am really curious why “println” lives in “fmt” module. I get that it does string formatting, but it also prints, and I would expect “fmt” to only do formatting. This sours my first impression of the language. It makes me expect lots of other counterintuitive decisions in the language.


If you have a formatting operation that outputs some number of chunks of text to a stream, using some dynamically dispatched output operation which does something similar to writing to a buffered output file, you can use it to produce a string fairly easily, as long as it fits in memory: you implement an output operation that appends to a string buffer. This is close to optimally efficient except when appending characters one or two at a time, in which case the extra dynamic dispatch overhead may be significant compared to whatever else you're doing to choose those characters.

If you have a formatting operation that produces an in-memory string, you cannot use it if the amount of text being produced is too large, or if you do not know if it is too large. But if it is not too large, you can then output the string to a buffered output file, although this is less efficient because it involves an extra copy and potentially dirties a lot more memory.

So these two ways of handling formatting are easily interconvertible when memory permits, but in some circumstances only the first one is applicable, and it is always more efficient.

Therefore, when reliability or efficiency is important, generating a formatted string should be implemented as a layer on top of sending formatted output to a stream, not vice versa.
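In Go terms (a sketch of the principle, not how Go's fmt is actually implemented internally): strings.Builder is just an in-memory io.Writer, so the string-producing form falls out of the stream-oriented form essentially for free.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Stream-oriented formatting: works for arbitrarily large output.
        fmt.Fprintf(os.Stdout, "pi is about %.3f\n", 3.14159)

        // String-producing formatting layered on top of the stream form:
        // the "output operation" just appends to an in-memory buffer.
        var sb strings.Builder
        fmt.Fprintf(&sb, "pi is about %.3f\n", 3.14159)
        fmt.Print(sb.String())
    }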

There are cases where reliability and efficiency are not important, because your code is being used by end users and not as part of a library, and an error message is also an acceptable result of trying to run your program, and in those cases you should probably write your program in a language like Lua, Python, or JS.

It is probably true that if you are used to languages like those, a lot of design decisions that are necessary for reliability and efficiency will be counterintuitive to you.


So, the case is that the format string fits into memory, and the data being formatted fits in memory, but the output formatted text does not?

This seems like an extremely rare case, which is itself a subset of the very rare case that you are formatting something that can be arbitrarily large. Oh, and the programmer must somehow have overlooked this and not written the formatting to emit output in reasonably-sized chunks.

Meanwhile, having back-and-forth between the formatter and I/O means the formatter must correctly handle all the I/O problems and corner cases: disk full, disconnect, transient failures... Not to mention that for some of them you might want the caller to decide the outcome, which is easy when you split formatting and I/O, but hard when the formatting performs the I/O.

I think this is a case where mixing both may sound superior, but it actually buys next to nothing while both complicating matters and probably hiding or ignoring real issues.


> So, the case is that the formatting string fits into memory, the data being formatted fits in memory but the output formatted text does not?

It takes a minimum of twice as much memory to store the original data and its formatted stream. Some systems don't have much memory, and some systems work with large amounts of data. Some systems work with more data than fits in RAM, or more data than even fits on disk, or even infinite streams of data, from sensors or generated data. All of these use-cases must be accommodated by Hare.

That said, you can easily format into a dynamically allocated string in Hare if you so desire:

    let s = fmt::asprintf("Hello, {}!", user);
    defer free(s);
This has the obvious downside of requiring an additional memory allocation, which can also fail just as easily as I/O can, and must be freed after use. However, this function does not require I/O, so all of the I/O failure cases are ruled out. There are trade-offs between these two approaches. Hare's design is meant to let the user evaluate these trade-offs for their use-case and make an explicit, planned decision regarding their needs and potential failure modes.


It might be worth putting a short note on this into the library documentation so people understand the library's design objectives. Among other things it will help them find other things in the library.

Incidentally this is one of the things that the limited form of laziness found in Python generators could help with, if Hare had it; you can think of an input stream as a lazy string. Unlike full laziness, generators have very predictable memory usage.


Note that this explains why string formatting is inherently tied with the I/O layer, but not necessarily why `println` (which is a very specific I/O operation) should be in the `fmt` module. That one is more subjective and probably only for the convenience.


Do you mean why you can write (untested)

    fmt::println(x, y, z);
instead of (also untested)

    fmt::fprintf(os::stdout, "{}{}{}\n", x, y, z);
? I mean I agree that presumably anything you could do in the first form could be done in the second form, less conveniently as you say, but if you omit println from fmt then all the Golang programmers learning Hare will be surprised about the missing stair they expect as the Hare counterpart to fmt.Println. And I think omitting the terminating newline in your format string is a common enough bug that it's probably worthwhile to include a separate function that adds it implicitly, especially in the case of fmt::print, which doesn't have a format string to add it to.

Even without Golang experience, if you're going to have an fprintf and a println, I'd look for the println in the module that has the fprintf in it.


Well, I just meant `println` should probably be in `io` rather than `fmt`---it is a great convenience function to have.

Also, not all streams are equal: stdio is heavier than file streams, which in turn are heavier than memory-backed streams (for example, stdio will probably need a built-in lock while string streams needn't). If you only need, say, memory-backed streams, you want to never see any code related to stdio, which might not be possible in some designs.


"io" cannot depend on "fmt", since "fmt" depends on "io" (technically, dependency cycles in Hare can be solved, but it's messy and strongly discouraged if avoidable). We did previously have a simple io::println which accepted strings but did not do formatting, but it's not super useful and fmt::* does it much better (and it was unbuffered, because bufio depends on io), so we ended up removing it. I thought about keeping it but making it use vectored writes to get around the lack of buffering, but in the end it's not really worth it.


What do you mean by "vectored writes"? I thought you meant "invoking a function pointer" but io::stream already does that.

Why is io::handle a separate type from io::stream? Is it just an efficiency hack to avoid an indirection through a function pointer for the common case where what you're writing to actually is a file? Can't io::handle eliminate the entanglement between buffered output and users of streams like the old io::println? I feel like there's something I'm not understanding here. (Is fd: int really the right way to handle a pointer to a structure representing how data is being transparently compressed into a zipfile, or a terminal emulator state that is being updated by writing bytes to it?)


By vectored writes I mean the writev(2) syscall, which can write from multiple buffers in a single syscall (offsetting the performance loss from unbuffered I/O, in theory).

io::handle is separate from io::stream because it needs to store either a stream or an io::file. io::file is necessary because it's required for many operating system constructs - a TCP socket is an io::file and cannot be an io::stream, for instance. Only io::files can be passed into syscalls like poll(2). However, stream is separately useful for building userspace I/O primitives, such as io::tee, cryptographic hashes and streams, and so on. So io::handle allows you to have an I/O object which is either a file descriptor (io::file) or a userspace stream (io::stream), but your code doesn't have to care about which it is.


Ohh, I see. Yeah, on rare occasions writev() is worth the hassle, but it doesn't really get around the performance loss from unbuffered I/O; if you're calling println(f.size, " ", f.date, " ", f.filename) in a loop that overwrites f 100 times, writev() reduces that from 600 syscalls to 100, but bufio reduces it to 1.
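In Go terms, the difference looks like this (a sketch; the loop stands in for the f-overwriting loop above, and the values are made up):

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        // Unbuffered: each Fprintln would be roughly one write syscall.
        // Buffered: output accumulates and is flushed in large chunks.
        w := bufio.NewWriter(os.Stdout)
        defer w.Flush() // one final flush instead of one write per line
        for i := 0; i < 100; i++ {
            fmt.Fprintln(w, 1024, "2022-05-03", "some-file.txt")
        }
    }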

I do understand why io::file needs to be separate from io::stream. (The io documentation introduction gives an explanation of what you're explaining above; I'd additionally offer the examples of a gzip stream, a UTF-8-decoded stream on an ISO-8859-1 text file, and maybe a stream that feeds into in-process terminal emulator logic.) I was asking why io::handle does. If you're writing code that takes an io::handle, generally the code cannot rely on the fact that io::file can be used for select() or ioctl() or getpeername(); if it needed to do that, you would have written it to take an io::file, not an io::handle. So, if your code is only going to invoke io::stream-like operations on the io::handle, it would be simpler if its argument were an io::stream instead of an io::handle.

But what if you want to give it an io::file? Well, it's easy enough to wrap an io::file in an io::stream that just invokes the appropriate rt operations, and in most languages the only per-call cost of doing that is that your function call is indirect (mov 12(%ebx), %ecx; call %ecx) rather than direct (call Zn#]io31337write). In fact, most code would be more* efficient that way, because right now if someone gives you an io::handle and you want to read from it, you're usually going to call io::read on it, adding an extra level of function call around the indirect call, because that's simpler than duplicating io::read's conditional call to rt::read in your own code. But dynamically most read and write calls will probably be on a bufio::bufstream (or some other io::stream), so io::read is just going to call .reader().

Does io::handle maybe exist only as an optimization for sendfile()?

Maybe it's too late for such changes, given the amount of existing Hare code, and this qualifies as bikeshedding, and if so, I apologize.


> Does io::handle maybe exist only as an optimization for sendfile()?

io::file exists for where it's needed to interface with the host system. io::stream is needed for userspace streams. io::handle is needed so that code which just wants to do I/O but doesn't care which of these two paradigms is in use can be useful for both.


Why can't that code just use io::stream directly then? Is there some obstacle I'm not understanding to wrapping an io::file in an io::stream?


I meant "indirect (mov 12(%ebx), %ecx; call *%ecx)". Oops.


Yeah, it would be a reasonable design for fmt to contain no direct dependency on io, just on the stream interface. And that would enable you to put print and println (as opposed to fprintf) in io rather than fmt without creating a circular dependency between modules. And it might be a convenient way to remove unneeded dependencies on embedded targets (though I don't think they're targeting those). But it would be a significant departure from the Golang interface.

I don't remember if Golang "stdio" has built-in locks. Hare doesn't seem to have threads or locks, so that may be a non-issue.


FWIW, Go's fmt package does the same: https://pkg.go.dev/fmt
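A tiny illustration of the equivalence (from memory; fmt.Println is essentially a one-line wrapper over Fprintln):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        fmt.Println("hello")             // convenience form
        fmt.Fprintln(os.Stdout, "hello") // the stream form it wraps
    }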


link without tracking: https://godocs.io/fmt


I'm certain there is room for a better C, but simple is in the eye of the programmer. I would argue that Scheme is obviously simpler, but most people don't want to write system software in that.

I suspect "simple enough" is a threshold/constraint, but not really the goal.


It seems quite similar to Zig, including the error handling flexibility and memory management. But Zig also has a good concurrency story via its “colourblind” async/await support, although concurrency seems to not be a Hare design goal.

Is there something about Zig that rubs you the wrong way? Honestly, Hare almost seems like a Zig subset.

Edit: I respect your technical skills and sheer volume of output a lot so not trolling here, just trying to gain clarity after reading the language introduction.


Yes, I tried to work with Zig long before Hare was even an idea. I had hoped that Zig would fill the hole in the ecosystem that Hare is designed to fill, but ultimately I felt that I needed to write Hare. I think that Zig is far too complex and essentially unbounded in scope. A language to replace C needs to be as conservative as C, if not even more considering the coming changes from C2X.


Thanks. I agree that Zig is more complex than advertised. By the way, good job on Hare’s documentation, pretty impressive for an early-stage language.


Thanks!


How long has it been since you've checked it out? I'm no systems programmer, and the main draw of the language for me was being able to jump into stdlib symbols and actually understand the code.


I revisit Zig often.


I wonder what your plans are regarding the build system/package manager? Specifically, do you plan to roll your own language-specific tooling (with some support for building C dependencies)?


We're leaving package management in the capable hands of distribution packagers.

https://harelang.org/distributions/


"Hare programs are statically linked. We know you’re not a fan of this. We’re sorry."

Absolutely hilarious (and thanks for making this decision!)


Do you believe that Hare will make it into the next Debian Stable release?

That should be somewhere in summer of 2023, if we go by previous release dates.


It's hard to say, I'm not sure.


That layout is so smooth.

Smart choice for packaging. Keep it simple should be a rule, not the exception.


Yeah, it's a bit weird for the blog to call out all these compelling ideas but then have no mention (either there or on the home page) of what those ideas actually are.


The only languages Hare can seriously compete with are the other, infinitesimally less niche Zig and maybe Nim.

C coders are defined by having seen a thousand languages go by and passed on all of them. C coders like C to the exclusion of all else, or they would have abandoned it long ago. Hare will not be picking up any substantial number of C coders.

Hare will not be picking up any C++ or Rust coders. It is a huge step down, offering literally none of what makes either language compelling for its users.

Likewise, Lisp and its offshoots. And Haskell, Erlang, MLs, APLs, Smalltalks, and Adas, all themselves niche.

Hare will not be picking up any Forth coders.

Zig is much more mature, and will maintain its lead. Nim is more mature, but will continue trailing Zig. Hare might chase after Zig alongside Nim.

The normal fate of any new language, absent The Miracle, is to fizzle. It is the certain fate of any language that brings nothing compelling to the table. Hare is exactly such a language.

A few people may continue using a fizzled language, indefinitely, like people maintaining their DVD collection. But there will be no reason for others to pay it any attention. The overwhelming bulk of the value in any language is network effects, and a fizzled language has none.


I agree with what you say, except for "C coders like C to the exclusion of all else".

I doubt very much that there are many C coders who like the C language.

I had liked C very much around 1990, when I could use for the first time the Microsoft C compiler and the Borland Turbo C compiler.

Previously I did not have access to programming languages better than Cobol, Fortran, Basic or Pascal. In comparison with any of those, programming in C was much more enjoyable.

However, later, after better languages became available, I had no reason to continue to like C.

Despite that, I continue to program frequently in C, because there are still cases when it remains the best choice, for various reasons having nothing to do with the quality of the language, i.e. mainly for embedded computers or kernel drivers.

In any case, the C language will always remain on the list of languages important in the history of programming languages, independently of how many programmers happened to use C or to like it, because of a couple of valuable innovations: the "continue" statement and the distinct operators for the McCarthy "and" and "or" and for the bit string "and" and "or" (though it would have been better if single characters would have been used for McCarthy operators and double characters for the bit string operators).

(A few other innovations that are sometimes attributed to C had actually been introduced in the language B, the predecessor of C, or in the language BCPL, the predecessor of B, or in the language CPL, the predecessor of BCPL.)


> (though it would have been better if single characters would have been used for McCarthy operators and double characters for the bit string operators)

(It probably is better on net, but) not as much as one might think, actually. It's a clear win in terms of Huffman coding, but there's also a general principle that small infix operators have higher precedence, and large ones lower precedence, like how multiplication in math notation uses "·", or more often nothing at all, while addition uses "+". (It also shows up in natural languages; consider "Tom and Dick and Harry as well as Alice and Bob".) It's (probably, usually) worth subverting for assignment, because "=" is used a lot, but I'm not sure if it's a good tradeoff for boolean operators, eg:

  if(x == 3 & y == 5) foo(); // looks a bit too much like it's saying
  if(x == (3&y) == 5) foo(); // ie (x==(3&y)) & ((3&y)==5), rather than
  if((x==3) & (y==5)) foo();
Though, like "=", the initial misleading hueristic might wear off with sufficient exposure, and you can usually fudge it by adding spaces around the low-precedence operator.


Your examples would have looked much better, and without suggestions of wrong precedence, had C not also replaced the Algol operators ":=" and "=" with "=" and "==".

In my opinion this is one of the greatest mistakes of C, one which unfortunately has been inherited by too many other languages. Even Dennis Ritchie admitted that this might not have been a good idea.


> if C would not have also replaced the Algol operators ":=" and "=" with "=" and "==".

That wouldn't help "!=" and "<=", and "=" is used far too often (in typical code, hence the "(probably, usually)" above) to be multi-character.


That is due to the limitation of the source code to ASCII, which I consider obsolete.

"!=", "<=" and the like were single-character operators in Algol and in many other early programming languages. They have been replaced with 2-character operators only because IBM and most other important US computer manufacturers were not willing to support character sets with enough characters for covering the needs of mathematics and of other languages than English.

The target for the US character sets was only the text that may appear in commercial letters written in English. For any other uses, like programming languages, characters have been accepted only when they could occupy one of the few vacant places in the character map.

The dominance of the US-made computer hardware forced this ugly limitation upon the programming languages. So the programming languages had to use various commercial characters, e.g. !, @, #, $, %, instead of more appropriate traditional mathematical symbols.

Now, with Unicode and UTF-8, I consider the use of ASCII for source code as stupid.

In Unicode, also the ":=" of Algol exists as a single character.

Regardless of how operators are encoded, as single- or multiple-characters, a good text editor for source code allows one to change the correspondence between pressed keys and the inserted text.

Because assignment is used more frequently, even when using ASCII a text editor can be configured to insert ":=" when you press "=", and, for example, to insert "=" for ctrl-= and "!=" for alt-=, in order to minimize key presses.


Many early programming languages were even more limited than ASCII itself, for the sake of portability. In Algol itself the operators and keywords were defined as abstract representations, with multiple possible ways of serializing those to computer-encoded text.


Agree.


It seems like it would be more accurate to say that each C role has skipped other languages, and thus will likely skip Hare as well. The feelings of the people in that role might not have anything to do with it.


It is hard to find anyplace where you may have C but not C++. Do people still use PICs? I think 32-bit ARM chips may be had for 3 cents nowadays. For a really memory-constrained target, you might choose to turn off exception support.


They do, but personally I would be pushing for Pascal in such cases.

https://www.mikroe.com/mikropascal


Pascal lacks destructors and templates. No dice.


True, although in the context of PIC programming hardly matters.

For me no dice is being unsafe by default.

Thankfully, in the context of C++ I could always fix what, from my point of view, are bad defaults regarding bounds checking.


Pascal is unsafe by default: it allows null pointers, and stale pointers.


True, but it has a much safer story than C in everything else.

C++ was attractive to me in 1993 because it offered the safety and modern language features I was used to from Turbo Pascal, and then some.


They exist, even to the detriment of modern C; C89 plus the language extensions of their favourite embedded compiler is all they care about.


Zig is kinda awesome though; I'm not even sure what the C community thinks needs improving in Zig, I often only hear praise for it.

Mind you, I don't know; I like Rust, but I'm not a C-like programmer and don't want to be. Nevertheless, my opinion of Zig seems quite high. It seems Zig is the benchmark in this space.


One of the more interesting use cases I have seen for niche compiled languages is malware development. When the binary is a sort of martian to the AV scanning engines, it flies under those radars.

There are even old languages dug up for this purpose. Most Pascal I have seen lately is malware samples.


Oh, the woes with that. Try maintaining a legacy VB6 app, the COBOL of the 90s. If you never worked in this field, you won't believe how many VB6 apps are still out in the wild, sometimes as the glue keeping everything together for a multi-million-dollar company.

Write any trivial program in VB6 and upload it to VirusTotal: red alerts left and right. Even Windows Defender, of all things, goes off for anything a little more involved, sometimes only a couple of days after you did the initial scan. Signing the binary helps, but it is no silver bullet, and sometimes companies still can't be arsed. There's an endless thread in the last remaining VB6 forum where people try to figure out what increases the chances of detection, and it mostly reads like cargo culting.


Given how much malware was written exploiting the fact that you could embed it in something like a spreadsheet with full access to the Windows API, it was doomed to suffer that fate eventually. The entire security model was non-existent, and it opened up such a can of worms. The funny thing is how unlikely it is that someone will get hit with working malware written in VB6 today, because of AV engines, and how easy it is to slip under the radar just doing it in Go or whatever.


Given that the languages all produce native instructions which then do the same job, is there really a huge difference for malware scanners? They surely wouldn't look at debug symbols and hope malware authors were kind enough to include them.


Weird ABIs can certainly screw up automated malware analysis, but (AFAICT) Hare doesn't have a particularly weird ABI (I believe they're aiming for low-overhead interoperability with C, which precludes completely alien ABIs like Go's).

That being said: automated malware analysis is of dubious efficacy and flexibility to begin with, so who knows.


> Hare aims to be successful within its niche for the programmers that find its ideas compelling, and nothing further. If you are using and enjoying C, C++, Rust, Zig, or any other language, and don’t find Hare’s ideas all that interesting, then I encourage you to keep using these languages.

I think they DON'T want to compete with any language nor attract C/C++/Rust/Zig/Nim/Forth/<name_your_language_here> coders.


So why does it exist?


Why not? They are not forcing you to use it. If you don't like it, how hard can it be for you to ignore it?


Why coders code?


I'd like to know, too!

From my point of view, all languages which don't add anything new and improved are a drag on our collective computing experience.

I'd be happy if humanity had like three or four programming languages: Idris or some other better-Haskell (for guarantees), Lisp (for simplicity), Rust (for performance), and probably one or two other languages which are unique enough, Prolog or something, K?


> From my point of view, all languages which don't add anything new and improved are a drag on our collective computing experience.

I couldn't disagree more. The more people writing languages and implementing standard libraries the better.

Are these minor languages going to be used in production? An even smaller minority of them, sure.

For the majority of them it's a chance for developers to relearn data structures and algorithms and compilers. Real, hands-on practice that is hard to beat with any other method. I only got decent at algorithms and data structures by implementing languages and standard libraries.

I'd rather every developer built their own language.


I agree with everything you wrote! And yet... the endless myriad of programming languages which don't bring anything particularly new is still annoying me. Perhaps I'm just getting old :)


I think it exists largely because Drew DeVault likes making his own things from scratch and also because other people like his opinionated software development. People wouldn't throw him money for stuff like sourcehut otherwise.


> [A]ll languages which don't add anything new and improved are a drag on our collective computing experience.

If people weren't experimenting with weird esoteric languages, how would we have gotten any of the languages you list here? If I'm recalling correctly, K is descended from APL, a language you needed a special keyboard even to use. Rust synthesizes ideas from lots of small academic languages no one will ever ship software on.


I'm not against esoteric languages at all! (as long as they bring something conceptually new/different :)

I guess my main gripe is the duplication of effort: same thing with a slightly different spin, the time wasted learning different libraries doing fundamentally the same thing, etc.


I think you are correct for a large portion of this, however, I will remain hopeful for a different viewpoint winning in the end. I tend toward evaluating a language's usefulness exactly in respect to the individual using it.

I understand the network effects and 'community' benefits of large/non-fizzled languages, but I want to believe that a language that makes an individual programmer happier, more productive, better able to meet engineering goals, or whatever metric they seek to increase through choosing one language over another, is the language they should be using.

I remain hopeful for a heterogeneous approach to language usage in the future, where multiple engineers working on those applications can utilize a language which they determine is best for them. I think this could result in multiple languages which all compile to a well-defined target language, via well-specified semantics for the source-language transformation. I know this is almost the idea behind the JVM or CLR, and is similar to many languages transpiling to C. I would like to see an environment cultivated to support this from the ground up and grow from there.


Organizations would deeply resent being saddled with code in a dozen different languages for one application. When some bit of functionality straddles two parts in two different languages, they need somebody skilled in both languages to work on that. Even for a free-software project, the chief maintainer really needs to understand all the languages used in it, and the number of people skilled enough to contribute to a part will often be severely limited, if one of the languages is obscure or if it needs work in two or more.

Thus, there are very powerful organizational forces pushing for a single implementation language. A practical exception to this is that there are plenty of programs coded in a systems language providing a scripting language (e.g. gdb in C++ with Python scripting, Emacs in C with Elisp, Vim in C with Vimscript) made tolerable by the scripting language being extra-easy, very widely known, or known to all users of the program, so familiar to any contributor.

But the need to learn another language, or two or three, just to contribute to a project will turn away people not able to spare that much attention. Businesses would need to pay for the time all the developers need to learn all the languages. And, languages will be at multiple levels of maturity and stability. What do you do when 10% of your system is coded in a now unmaintained language? When you want to port to a new target execution environment, does each language not ported there get to hold you hostage?

This is why languages with a wide range are attractive. C++ and Rust aim for this wide range, providing for both bit-twiddling optimization and system architecture organization. Supporting a wide range is a big burden for a language maintainer, so there will never be very many like that.


People don't use C just because it is the most fun to program in, but because it offers a unique set of trade-offs that no other language (even most C replacements) hits.

It is based on a stable standard.

It is extremely portable, and not in a "targets the major platforms" way (other languages do that much better), but in a "runs on nearly every OS and piece of hardware in existence" way. You can feasibly port your C software to your weird hobby operating system.

It is simple enough that you could write your own compiler or at least something to bootstrap a proper one.

It is the Latin of programming languages. It is the common, cultural core we have. Nearly all bigger languages have some way to interact with C. If you want to write a library that can be easily used from any language, C is the way to go.

It is (relatively) fast to compile and creates binaries of minimal size that run at maximal performance. (Yes, you do make trade-offs in some of these aspects when using Rust or C++. Those trade-offs are worth it in many cases, but let's not pretend they don't exist. C serves as a performance gold standard for a reason.)

Yes, it is also a horrible language in many ways and should not be used for most projects, but the advantages can make all the difference for certain use cases. For projects like SQLite or Lua it was for sure the right choice to use C.


SQLite and Lua could have been coded in C++ with no downside. They could be coded in Rust with some loss of portability, but nowadays anywhere you can compile C, you can compile C++. There is no difficulty whatsoever in exposing a C API from a library coded in C++.


The point is not whether they could, they absolutely could but whether C++ or Rust would have been a better fit considering the values that these projects have.

Just because you don't care about the downsides does not mean there are none. First of all, which subset of C++ should be used? It is a huge language that only very few people can claim to master, so lots of people wouldn't be able to understand or change the code anymore. That is quite a significant downside.

Not to mention all the other downsides mentioned in my other post, like being slower to compile, bigger binaries, and so on. Yes, those downsides might not matter to you and should not matter for many projects, but they do exist.

C programmers often care more about building the best possible software, not so much about having the best possible development experience while building it.

That said, Rust has much better safety guarantees, so it DOES offer some significant upside that C can't. For more complex software it is definitely a very good choice.


Now you appear to be parroting things you read somewhere. The statements bear only a tenuous connection to reality.


What exactly do you take offense with?

I do have experience using both C and Rust. Arguably, my C++ is very limited, but things like C++ being a relatively large language are well-established facts and quite uncontroversial.


There is no disagreement on what "subset of C++" is best: anywhere you have a choice, use the newest method supported on your compiler, because nothing is added to the language without compelling reasons.

It is true that some people resist using anything introduced since they first learned, with a few mired in C. But there is never any need to "agree" on that.

"Bigger binary" is scurrilous propaganda. Compilers nowadays use the same code generator for C and C++: say the same thing, get the same code. Furthermore, optimizers have to guess less at you are trying to do, in C++, so can do a better job. C++ code for the same task is often smaller.

"C programmers often care more about building the best possible software": this is just crude slander.

Finally: "Rust has much better safety guarantees": C++ offers much the same guarantees, where you choose to exercise them.


> C coders like C to the exclusion of all else, or they would have abandoned it long ago. Hare will not be picking up any substantial number of C coders.

I agree. And for that matter, I see a lot of the same for Zig too. Hardcore C coders have learned to live with it, or have incurable Stockholm syndrome and stay despite the situation.

I believe it's really a matter of programmers who are newer (younger), casual, or part-time at it (like various people in IT or security) that are looking for more convenient alternatives. In that case, things are more wide open. They could pick Go, Odin, Vlang, Hare... just as easily as they could pick up Zig. There are many "better C" alternatives vying for position. It could shift so easily.

As for Nim, I think they have a certain level of guaranteed "lock-in" for being so Python-like and seen as an alternative to it. How far that will take them, remains to be seen. Same for Rust, they have "safety" to play on, and the backing of Mozilla that added wind to their sails.

> The normal fate of any new language, absent The Miracle, is to fizzle. It is the certain fate of any language that brings nothing compelling to the table.

True.


Hare is C without macros and with added namespaces. Oh, and tagged unions.

With just that, I'd say it's a win in my book. Not to mention actual, useful strings.


That's beside the point the parent tried to make.


Parent was very critical of this very new language.

I just pointed out a couple of its good sides, which make it worthwhile for me to eventually learn.


I'm not sure it's true that C programmers usually like C? I'm writing C++ code because that's what the Arduino ecosystem uses. It's hardly my favorite language.

It seems like a lot of people choose languages due to ecosystem constraints?


Anywhere the choice of language is constrained like that, by definition the niche language will not be a viable alternative.


> C coders like C to the exclusion of all else, or they would have abandoned it long ago.

Those irrational people seem to have code that doesn't require rewrites or refactoring every few years. Switching to an entirely new ecosystem as a primary field? I leave it to the beta-testers; perhaps in 10 years Zig/Rust/Nim will be standardized languages with multiple compilers and big ecosystems supporting them, just like C, and polished enough to be as easy as C to write.


You're not wrong, but I don't think they want to compete with those other languages.

> Part of our work in developing Hare is laying the groundwork for a collaborative, productive, healthy community that people want to work in, ...

Maybe that's enough? It's okay to be a niche, even toy, language. The community is the "feature" that makes it compelling, eh? Like how Gemini doesn't compete with WWW, or indeed how Sr.ht is not exactly a competitor to GitHub, but still has its value.


He certainly does want to compete. He just doesn't want people making fair comparisons.


> He certainly does want to compete.

In what sense? I mean, he's not entering contests with it? (Forgive me for being dense.)

> He just doesn't want people making fair comparisons.

I dunno, I feel like he's been pretty up front about where Hare is different from (not to say inferior to) other languages. (I should mention that I'm generally pro-ddevault even though I don't agree with everything he's into. E.g. I'm not on the Graph DB bandwagon (yet?).)

FWIW, I see Hare as a fun toy (a toy that can do real work though, to be sure). As long as it and the ecosystem around it can attract people to work and play with it, it doesn't need to compete. (Except in the general sense of competing for time and attention with everything else.)


Very much. C coders think that C++ is not mainstream or mature enough :)


C coders, as a rule, don't think very much about languages, or they would not still be coding C. You see proof of that in languages meant as if to be a "better C" like Zig, Nim, V, and Hare, that omit almost all advances in language design from the past half century, howsoever useful.


It's hard to mature when you never stop growing.


> We designed Hare to be similar to C, and useful everywhere C is useful

Except under proprietary operating systems. It's really not that difficult to compare Hare with C. C works everywhere, Hare doesn't. It's disingenuous to compare languages that aren't even close to the same portability level. C, Rust and Zig can be used to implement any imaginable piece of software, from desktop to server on almost any architecture and operating system. Hare is a niche language for Linux services, so I don't think anyone had any doubts Hare would not replace C.


Yes, except for proprietary operating systems. However, as vast as the gulf is between Hare and Zig/Rust in terms of portability, far vaster is the gulf between both of those and C. It's disingenuous to compare any of these languages with C in terms of portability, as C is by a wide margin the most portable language of all time.

Unlike LLVM, adding a new backend for Hare is a relatively straightforward effort: riscv64 was done by one person in a few months and is only 1,476 lines of code.

And again: the answer to the question posed by the blog post's title is "no".


Is C necessarily more portable than Rust? I think not. Of course, you can target any arch under the sun with C _right now_. Rust does not have this yet, but you _could_ do so, no problem, if you wanted to.


The same argument can be made of Hare, but the reality is that closing the portability gap between C and any other language is a monumental undertaking which will take decades to complete.


> I am even more frustrated with the moral crusaders from languages like Rust, one of whom went as far as to suggest that I should personally be criminally prosecuted if some downstream Hare software has a use-after-free bug.

That is likely just trolling, but it is also especially obnoxious considering that Rust prevents use-after-free bugs about as much as Java does (i.e. a lot, but definitely not all).


That comment is definitely out of line and trolling, but the author's attitude towards safety and security is still incredibly bad. Two wrongs don't make a right. I'm dismayed to see more new languages copying the safety and security features of C (i.e. nothing).


Hare has significantly more safety and security features than C. Bounds-checked slices, no uninitialized data, mandatory error handling, nullable pointer types, and others still. What it lacks that Rust users object to is a borrow checker.


You don't need a borrow checker -- there are many ways to avoid use-after-free bugs. There's no borrow checker in Java, or Haskell, or Python, to name three languages I work in sometimes.

However, I really do think for a new "systems language" nowadays, you do want to look at how major security holes occur in practice, and have a good story on how users should avoid them.


> Java, or Haskell, or Python

These are all GC'd languages that run bytecode on an abstract virtual machine. They avoid use-after-free by just not-freeing, if necessary at the cost of leaking unbounded amounts of memory.

This doesn't invalidate the point that borrowing isn't the only way to solve this problem, but there are definitely classes of these bugs for which Rust's borrow checker is the only known production-ready solution that still has manual, deterministic memory management.


Your point about GC stands, but I will note that Haskell is a compiled language.


And Go and native-image (GraalVM Java compiler) and Poly/ML (on x86) and MLton and OCaml (on x86) and Chicken Scheme and SBCL (and ECL).


We do take security pretty seriously with Hare. To quote our crypto module's introduction as an example:

> Cryptography is a difficult, high-risk domain of programming. The life and well-being of your users may depend on your ability to implement cryptographic applications with due care. Please carefully read all of the documentation, double-check your work, and seek second opinions and independent review of your code. Our documentation and API design aims to prevent easy mistakes from being made, but it is no substitute for a good background in applied cryptography.

We have many safety features built into the language and the standard library is designed to be difficult to use incorrectly. I will address these concerns directly in a subsequent blog post covering the safety and security features of Hare.

The main problem is that some programmers view anything less than what Rust provides as morally unjustified.


When you say you take security pretty seriously, and mention the Hare crypto module, are you talking about the crypto module which silently falls back to storing secure data on the heap when Linux keyctl is not present on the platform?

https://lwn.net/Articles/893327/


Yes. As I reiterated many times in that thread, having your data stored in the heap does not introduce any immediate vulnerabilities, and this behavior is thoroughly documented in the standard library.

It is possible for two people who both take security seriously to come away with different take-aways. Once some CVEs are found in Hare you might have some fuel for your argument, but until then it's just speculation.


Because rust inherently treats humans as fallible to a large extent. Which all of us are.

That introduction is simply an appeal to "I can write safe, correct C" with more words. Which obviously is not true.


> Because rust inherently treats humans as fallible to a large extent. Which all of us are.

No it doesn't. It has a bypassable compile-time verified lifetime system, not an infallible programming guarantee. It leaves it entirely up to the developers to write correct and safe applications and libraries, and merely provides (powerful) tools to help.

(I realize this context was in avoiding common security bugs which are usually less likely in memory-safe languages, but it's important to not overstate the benefits.)


I think Rust kinda has a marketing problem: the myth that "Writing in Safe Rust automatically makes your code memory-safe". It doesn't (well, it does most of the time but it isn't guaranteed), but it rather defers the responsibility to other low-level system programmers writing Unsafe code behind the scenes. And oh boy they have a fuckton of responsibility... Stacked Borrows along with various sanitizers can help when writing unsafe code, but it isn't perfect. I highly recommend anyone trying out Rust for the safety guarantees to take a look at the Rustonomicon (https://doc.rust-lang.org/nomicon/), which debunks a lot of the misconceptions around safe/unsafe Rust.

In an ideal la-la land world, there exists an abstract interpreter that can consume safe Rust code and does not enforce any contracts upon the programmer (and hence will never have any undefined behavior). However, real world hardware definitely has contracts which developers have to obey (manually! because of the constraints of actual semiconductor physics! no compiler hand-holding here!). And on top of that all major OSes (Windows, MacOS, Linux) are written in C (so you need unsafe FFI to interact with the OS).


There seems to be a perception among Rust programmers that C's status as a defacto standard came to be in spite of C's contradictions, rather than as a result of them. Skimming the Rustonomicon just now left me with the impression that at least one Rust person gets it. Though it still seemed as if the author felt the need to choose their words very carefully, lest they "make the memory model people angry".


The memory model people are intense but really quite friendly :) Here's a great recent post: https://www.ralfj.de/blog/2022/04/11/provenance-exposed.html


Exactly. If I'm writing safe Rust and encounter memory safety issues, their origin is with my dependencies, and my responsibility is limited to having chosen such dependencies.

In practice, this makes vulnerabilities in e.g. argument parsers (like the recent "Baron Samedit" vulnerability in sudo) incredibly unlikely.


> the myth that "Writing in Safe Rust automatically makes your code memory-safe"

Of course you're right that that isn't true as stated, but I think it's interesting to try to situate this point along a continuum of other similar points:

1. C with Valgrind and sanitizers isn't always memory safe, because those tools are limited by test coverage.

2. Python isn't always memory safe, because many libraries including the standard library call into C code.

3. Pure Python that doesn't call into any C code isn't always memory safe, because the interpreter might have bugs.

4. Provably correct Ada with a provably correct compiler isn't always memory safe, because the proof checker, the compile-time hardware, or the runtime hardware might have bugs.

I think we all agree that there are important differences between 1 and 4, beyond the simple fact that the defects get less common as you go down the list. Here are some things that stand out to me:

- In cases #2 and below, the application code isn't "at fault" for any memory unsafety that comes up, and whatever code is at fault can be fixed to restore memory safety without changing the application.

- In case #1, there's no clear boundary in any sense between "safe code" (which we know isn't at fault for memory unsafety) and "unsafe code" (which might be at fault). There may be a distinction between code that's well covered by tests and code that isn't, for example, but it's often not easy to tell which is which. In case #2 and below, the boundary is pretty clear.

- In case #1, the amount of "unsafe code" in an application probably grows linearly with the size of the application, or maybe we just consider the whole application unsafe. But in cases #3 and #4, unsafe code is confined to low-level dependencies that get a lot of "battle testing" compared to how much code is in them. Case #2 is kind of a gray area, and we need to look at what dependencies the application is using.

So where should we situate Rust in that continuum? Is being able to write unsafe Rust code more or less risky than being able to call into C? It's certainly a lot more convenient to write an `unsafe` block than to cross the FFI barrier, and maybe that convenience is dangerous. On the other hand (contrary to some common misconceptions), unsafe Rust still benefits a lot from the borrow checker and other safety features, and it might end up having a lower rate of defects for that reason. Maybe it's too early to tell?

But anyway yes, I totally agree that the Rust community has a hard time getting the messaging right about how safe code and unsafe code work. But even though this discussion is really important to Rust, I'm not sure it's a "Rust problem" per se. I think it's actually quite difficult to talk clearly and correctly and precisely about memory safety in general.


http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

"use-after-free", "data race", "memory corruption", "out-of-bounds write", "read from uninitialized memory", "execute arbitrary code"...


This list is actually quite interesting for a few reasons.

One is that a large fraction of these security issues are not real, potentially exploitable vulnerabilities, but merely the fact that it is possible to abuse an API to subvert Rust's safety guarantees. These are things that would, by the standards of other programming languages, not even be worth reporting and would be considered user error.

The other is that a surprisingly large fraction of these are not from regular unsafe Rust code, but from misunderstanding the guarantees a C library makes when creating bindings for it. This is to be expected, as fully understanding those as a library is pretty difficult.

A total of one memory safety issue reported for an entire ecosystem this year so far also seems pretty good.

All in all, I think these are both pretty promising signs that the safety guarantees Rust provides are working as intended.


Wait, that's all? On the entire Rust ecosystem those are the only ones found?

Everybody already knows Rust has unsafe blocks and C FFI. It is not invulnerable to those problems, Rust just makes it very clear where those problems may appear, and if you are smart, you will place most of your code outside of those regions.

Looks like Rust is much safer in practice than I expected.


Note that unsafe does not actually contain the problem. The problem propagates to the unsuspecting caller of code that claims to be safe.

Ending up compromised by a problem in tokio, Pin semantics, actix or all the necessary ffi bindings is no different than, say, a C program being compromised by a vulnerability in OpenSSL or libcurl.

A very significant number of memory issues in C stemmed from issues in such single high-profile dependencies, so one should not underestimate the threat of a bit of unsafe code in the corner of a library.


Not being perfect does not translate to not being better. Being safe by default and having compiler-enforced safety as a top design choice is great.

Rust is better.

It's very much human nature to trace the line in the sand juuuuuust right behind one's heels though, depicting everyone behind as bad and everyone ahead as zealots.


I did not say Rust was not better. I said the statement was false, based on a misunderstanding of both the benefits of Rust and the problems of C, many of which Rust is not immune to.

Rust is definitely better, hands down, but insisting on thinking that code which interacts with unsafe blocks can be "safe by default" is a dangerously wrong mindset, which also makes unsafe blocks proliferate without the necessary caution because the problem seems "contained". Anyone remember the actix unsafe saga?

But even though programs with unsafe blocks (read: all Rust programs) are by definition not memory safe - calling a language memory safe on current platforms can to some extent even be considered a misnomer - the assistance provided by Rust by default certainly helps make such programs much safer.


Yes, unsafe marks the places where your code must be correct no matter what. If it's wrong, the problems can appear anywhere.

That's very different from environments where all code must be correct no matter what. But the impact of bugs isn't what changes.


Well... if you consider the proportion of "lines of code (or projects) ever written in the history of a language" over "security issues found", then Rust will probably be losing.


A lot of those bugs are in C code...


You don't know what you're talking about re: security lol


I can assure you that Hare takes security more seriously than assuming the programmer is smart enough to do it right. I don't appreciate shallow takes which make unsympathetic judgements on the language which are not based in any understanding of how Hare actually works, and I've heard nothing but such takes for a week.


How does Hare prevent use-after-free, then? I'd welcome an explanation.


Hare does not prevent use-after-free, but it does prevent many other kinds of bugs which are common in C. I will go into greater detail in another blog post.


Rust has compile time checks that avoid one category of bugs out of many. It’s “safe” in a very narrow sense.


And that category of bugs (memory safety) is only avoided if the unsafe code itself doesn't have undefined behavior (a responsibility left to the programmer rather than the compiler). If unsafe code is compromised then it's still game over (hence the recent development of various tools and methods, like Stacked Borrows in Miri, that check for potential errors outside the borrow checker, as well as various guidelines for developers to write safer unsafe code).

Safe Rust cannot ever cause undefined behavior, but Unsafe Rust can. The ultimate merit of Rust is that when you suspect any undefined behavior you only need to check the unsafe part, which is a much smaller percentage of your codebase (as opposed to C/C++ where you need to check the entirety of your code)


It is not enough to check your unsafe code for UB, you also need to make sure it does not violate the invariants Rust relies on to prove the safe code safe.


...which I consider as one of the bullet points in the list when checking UB in unsafe code.


????

Whatever that means lol

You mean the borrow checker? People are working on formally proving that, and have already done so for large subsets of the language.


They mean that in unsafe code, you have to adhere to some rules to prevent safe code from becoming unsafe.

In other words, incorrect code in "unsafe Rust" can cause safety issues that only appear when you use it in a certain way from "safe Rust".
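
A minimal sketch of that failure mode (the function is made up for illustration; it compiles fine, and the UB is triggered from 100% safe code):

  // This presents a safe signature, but the missing bounds check
  // breaks an invariant that all the safe code around it relies on.
  fn get_wrongly(v: &[u8], index: usize) -> u8 {
      unsafe { *v.as_ptr().add(index) } // WRONG: no bounds check
  }

  fn main() {
      let v: Vec<u8> = vec![1, 2, 3];
      let _fine = get_wrongly(&v, 1); // happens to work
      let _ub = get_wrongly(&v, 10); // UB, from entirely safe calling code
  }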


A very narrow sense representing 70% of all security vulnerabilities at Microsoft and Google (self-reported). I'd say it's a class of vulnerabilities worth eliminating, especially when the "cost" is getting a competent and standard package manager and a general focus on correctness that ultimately increases developer productivity and ergonomics (compared with C++, IME).


I didn't mention Rust (beyond mentioning you discussed it). You seem very Rust obsessed.


Honestly, I don't blame him, after the onslaught he's been defending against in every other thread since Hare's launch. It's really embarrassing to see that side of the Rust community acting this way.


You should be implementing a borrow checker or something like it. It's irresponsible not to do that. I'm serious about this. We know how to totally stop most use-after-free bugs during static analysis now, this is a tool that can be implemented in any language, so people should just do it. If you ask me the status quo moved a long time ago. This has nothing to do with Rust.

Also I was wrong before and you were out of line. Matthew wasn't trolling, he never said you should be held criminally liable. You just made that criminal part up for no reason. Anyone should be held socially liable and shamed if their project has bad security and they refuse to fix it after they knew about it. I think you would even agree with that.


I think “incredibly bad” is overstating things quite a bit. Safety isn’t an all-or-nothing game. If it were, Rust would be useless because it’s not Ada or another formally verifiable language.


That seems to be a reference to https://lwn.net/Articles/893346/ - I'll let people come to their own conclusion about whether Drew's framing matches that, but I will say that I've never written a line of Rust in my life and certainly can't be described as representing the Rust community in any way. My position here is based on spending my days dealing with the consequences of bugs that we know how to get rid of - I just don't see the excuse for building something that isn't at least as good as solutions that already exist, especially when the consequences are potentially so significant.


> I just don't see the excuse for building something that isn't at least as good as solutions that already exist,

Why does anyone need an excuse to build anything? He’s doing this in his free time. He doesn’t need the internet’s permission.


Hare is presented with the expectation that it would be included in Linux distros. With that framing, criticism on matters that are relevant to distro-included software is indeed relevant.


Where was it claimed or implied that criticism of this project isn’t relevant? My comment points out the flaws in the presumption that independently written software needs an excuse to exist.


If you've never written a line of Rust in your life, then your position could be more informed. I've written a lot of Rust, and see how much one sacrifices to get the memory safety guarantees without losing speed or memory safety. You can't make basic observers, RAII, dependency injection, back-references, or a lot of other useful patterns, without sacrificing speed or safety.


I'm experienced in security and rust and I agree with mjg. I've probably written 100s of thousands of lines of Rust, and I've been in security for well over a decade.

None of what you said is even true, all of those patterns are trivial, except "back-references" which are very slightly non-trivial.

And none of it is relevant. We don't need another memory unsafe language. It's fine if it's a toy, but this obviously isn't. It causes real harm.


Everything I said is true.

They all rely on having a mutable member reference to affect the outside world, which the borrow checker rejects. You can try to make it happen with a generic lifetime parameter for your structs, but you end up making invisible the thing you're pointing at, making it useless.

One can use unsafe to have shared mutability, or sacrifice speed with Rc/RefCell (increments/decrements) or Cell (copying, especially bad when copying Vecs which causes heap allocations).

Basic observers aren't possible. The closest thing we can get is some sort of modified observer-like substance that returns a command, which requires a lot of wiring and incidental complexity. [0] [1]

Dependency injection (the pattern, not the framework) isn't viable for the same reasons as observers: you can't have a mutable reference field without causing a lot of headaches elsewhere.

RAII isn't possible without sacrificing speed or safety. Most usages we see are backed by unsafe (FFI) or RefCell. We can't have multiple objects whose drop() affects the outside world, because we can't have multiple extant &mut references, and we can't just pass them in via parameters (because drop takes none).

Back references aren't possible because of the circular problem (having a mutable reference to your owner means nobody else can read it).

I'm not advocating for Hare, I'm just addressing the "I just don't see the excuse" remark, which can be ignorant of the costs of the borrow checker. The borrow checker is a great step forward, but not always a good tradeoff. An architect needs to be aware of these sacrifices before going all-in on a paradigm.

[0] https://stackoverflow.com/questions/37572734/how-can-i-imple...

[1] https://www.reddit.com/r/rust/comments/pwqju6/is_there_an_un...


You are fundamentally misunderstanding the relationship between Rust's unique and shared mutabilities and the usual mutability in, say, C. The Rust equivalent of C mutability is Cell. Conversely, the C equivalent of Rust's unique references is a `restrict` pointer.

Cell is meant to be applied at the "leaves" of a type where individual assignments take place. When used this way, there is no additional copying relative to the straightforward C version. (Bringing up Vec and heap allocations here is also complete nonsense; Cell has nothing to do with Clone.)

Once you wrap your head around that, all your shared mutability examples translate over to Rust trivially. All the "sacrifices" in speed you are imagining are relative to `restrict`/`&mut T`, not the usual baseline of a simple mutable object.
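
A minimal sketch of Cell at the leaves (the names are illustrative): both aliases mutate through plain shared references, with no RefCell bookkeeping and no copying beyond the u64 itself.

  use std::cell::Cell;

  // Cell sits at the "leaf" that actually mutates; the struct
  // itself is handed around by ordinary shared reference.
  struct Counter {
      hits: Cell<u64>,
  }

  fn main() {
      let c = Counter { hits: Cell::new(0) };
      let a = &c;
      let b = &c; // freely aliased shared references
      a.hits.set(a.hits.get() + 1);
      b.hits.set(b.hits.get() + 1);
      assert_eq!(c.hits.get(), 2);
  }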


RAII is built into Rust. DI is trivial and I use it all the time, idk what you're trying to say there. Graphs require Rc, backreferences are trivial with Rc::weak.

idk what to tell you
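
For the back-reference case specifically, a minimal sketch (the standard library spells it Rc::downgrade / Weak::upgrade; the Parent/Child types are made up for illustration):

  use std::cell::RefCell;
  use std::rc::{Rc, Weak};

  struct Parent {
      children: RefCell<Vec<Rc<Child>>>,
  }

  struct Child {
      // Weak back-reference: doesn't keep the parent alive,
      // so there's no ownership cycle and no leak.
      parent: Weak<Parent>,
  }

  fn main() {
      let parent = Rc::new(Parent { children: RefCell::new(Vec::new()) });
      let child = Rc::new(Child { parent: Rc::downgrade(&parent) });
      parent.children.borrow_mut().push(Rc::clone(&child));

      // Following the back-reference: upgrade() returns None
      // if the parent has already been dropped.
      if let Some(p) = child.parent.upgrade() {
          assert_eq!(p.children.borrow().len(), 1);
      }
  }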


You can't use RAII in Rust? What on earth could this possibly mean? RAII is an extremely pervasive pattern in Rust and is fundamental to many of the safe APIs in the standard library.


To clarify, I was talking about making RAII, not using RAII. And it surprised me too, when I learned that the borrow checker rejects it.

To see it in action: Have a Database object, and try to have multiple Transaction objects that might commit something to it, in their drop().

It's unfortunately not possible, because they can't all have a &mut Database as struct fields.

We can sacrifice speed (by using Cell's copying or Rc's counting) or safety (by using unsafe). Most RAII we see uses unsafe FFI under the hood, which is why it was so surprising to me.


Rust is actually right, you cannot have multiple mutable references to a Database object without things going down the drain. (This is related to the fact that, like other comments said, &mut is an exclusive reference).

However, achieving something like what you want is still more than possible in Rust. You can do this with the pattern of 'interior mutability', which in its simplest form is just a Mutex. This allows upgrading a shared reference to an exclusive reference, so that you can safely mutate an object while upholding the expectations that a mutable reference is exclusive, and a non-mutable reference does not change from under your feet.

Of course, for a database, you will probably want a more advanced implementation of interior mutability, so that you can commit multiple transactions at the same time. (Or not, it seems to work quite well for SQLite.)
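
A minimal sketch of that pattern (a toy Database, nothing like a real driver): multiple transactions coexist holding shared references, and each one commits in its drop().

  use std::sync::Mutex;

  struct Database {
      log: Mutex<Vec<String>>, // interior mutability behind &Database
  }

  struct Transaction<'a> {
      db: &'a Database, // shared reference, so many can coexist
      pending: String,
  }

  impl Drop for Transaction<'_> {
      fn drop(&mut self) {
          // Commit on drop: the Mutex briefly grants exclusive access.
          self.db.log.lock().unwrap().push(self.pending.clone());
      }
  }

  fn main() {
      let db = Database { log: Mutex::new(Vec::new()) };
      let t1 = Transaction { db: &db, pending: "INSERT a".into() };
      let t2 = Transaction { db: &db, pending: "INSERT b".into() };
      drop(t1); // commits
      drop(t2); // commits
      assert_eq!(db.log.lock().unwrap().len(), 2);
  }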


RAII is a general pattern for tying resource management to the lifetime of objects such that resource allocation is tied to value construction and resource deallocation is tied to value destruction. The smart pointers for allocation in the Rust standard library (Box, Rc, and Arc) are examples of the RAII pattern, since memory allocation happens at creation time (Box::new()) and memory deallocation happens when the Box goes out of scope (in drop()). Another example of RAII in the standard library is File: opening a file means creating a value of type File, and dropping that value means closing the file. Yet another example are the smart-pointer guards used for accessing RefCell and Mutex: RefCell::borrow() returns a Ref, and Mutex::lock() returns a MutexGuard; the underlying value can only be accessed while the guard exists, and access is relinquished when the guard is dropped. Given all this, it's absurd to say that Rust doesn't support RAII — RAII is fundamental to the design of many of Rust's safe APIs.

The very specific API design that you've described is not possible in Rust, but it is strange to equate this with the entirety of RAII. In any case, there are many alternative APIs (some with no sacrifice in speed or safety!) that are perfectly possible in Rust.
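
A minimal sketch using only the standard library types mentioned above, with all cleanup driven by scope exit rather than explicit calls:

  use std::fs::File;
  use std::io::Write;
  use std::sync::Mutex;

  fn main() -> std::io::Result<()> {
      let counter = Mutex::new(0u32);
      {
          let mut guard = counter.lock().unwrap(); // acquire
          *guard += 1;
      } // guard dropped here: lock released, impossible to forget

      let mut f = File::create("raii-demo.txt")?; // resource acquired
      f.write_all(b"cleanup is tied to scope\n")?;
      Ok(())
  } // f dropped here: file closed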


I think the real mistake was to have "exclusive references" be called "mutable references" in the language. I've taken up the habit of reading "mut" as "mutually exclusive" for references. Of course you can't have each Statement keep an exclusive reference to a db object. They're exclusive!

You need shared references for your DB, implying you need interior mutability. This is how Statements are implemented in real-world rust database drivers such as rusqlite (any operation on a db is done through a shared reference). The fact that a very real package is doing it proves that the pattern you're talking about is, in fact, possible.


I felt that statement, along with some of the language used in the blog, is at odds with the author's goal to...

> Part of our work in developing Hare is laying the groundwork for a collaborative, productive, healthy community that people want to work in

The goal to have a healthy community is laudable but it comes from the top and hitting out at others doesn't set a good foundation. I'd recommend rising above by responding to critique without emotion.


There was a prominent member of the software community arguing in the LWN thread that authors of compilers which don't statically reject use-after-free (presumably including C and Hare) should be held liable for the consequences of use-after-free errors. And I suspect this person doesn't think the rustc developers should be sued for bugs written in unsafe Rust. I would argue that this person, not Drew DeVault, is hitting out at others.


You're right, I do understand that. My issue is with Drew DeVault's blog linked at the top, which I quoted. My advice is that if someone is hitting out at you then reply with statements, evidence and not emotional hyperbolic language. As the chief of any project, you set the tone of the project and blog entries like this are counter-productive if one wishes to run a healthy community.

That's solely my point.


Tony Hoare stated that language creators should be responsible for the bugs that users of those languages create. Is Tony Hoare a Rust evangelist, or a "moral crusader"?


Such a policy would create a chilling effect on the creation of practical languages which don't refuse to compile unproven code (every language in wide use).


Exactly!


Assuming the "criminally prosecuted" piece is a reference to that thread [1], based on what other comments have also pointed out, did anyone in there actually say that, or imply it? I read that thread and didn't see anything close to that, there is clearly harsh language in there, such as:

> Look if you can't understand that this is a thing that will happen in the real world and that people will potentially suffer as a result you shouldn't be writing a crypto library.

Which is still far from suggesting someone should be prosecuted.

[1]: https://lwn.net/Articles/893327/


Search for "liable" in the parent thread, https://lwn.net/Articles/893285/. That's the closest thing I could find.


Ha, perfect, exactly what I was looking for, thanks for pointing it out, didn't realize I was looking at a subsection of the thread, my bad.

But it confirms what I was thinking, going from "liable" to "criminally prosecuted" is a pretty big stretch imo.


What's the difference? (for a legal layperson like me)


I am not qualified to comment on the legal difference between the terms either, and I should also add that English is my second language, so take this with a grain of salt.

Reading the original comments about being "liable", it felt like another way of saying "there are consequences to your decisions, and as the author you bear some responsibility for what you put out there", which imo is pretty far from how the author of this blog post described it, hence my calling it "a pretty big stretch".


For one thing, "liable" applies to both civil and criminal law, so equating it to "criminally prosecuted" is definitely wrong.


Deleted... thanks for the clarification.


I think GP was implying the person who said it was unethical was trolling, not the author of Hare


Odin, Zig, Jai, Hare, etc - all of these new languages have been mostly inspired by Go, C, and Rust.

So let's summarize:

1. Simplicity and readability - C, Go

2. Tiny language - C, Go

3. Modularity - Go, Rust

4. Defer statement - Go

5. Metaprogramming (generics, compile time, macros) - lots of inspiration here, with some really fresh ideas in Zig and Jai; Go interfaces and Rust traits look nice

6. Strong type system - Go, Rust

7. Manual memory management, pointers - C

8. No OOP in the C++/Java sense

9. No references, just pointers

10. Syntax - Go, Rust

11. Zero cost abstraction and as much as possible minimal runtime - C, Rust

Mostly they look like Rust with "defer" but without the borrow checker, move semantics, references, RAII, and lifetime annotations.

Maybe this is what we really need? :)


Putting Rust and Go together under strong type system is a very odd choice.

Rust is inspired by ML-family languages and heavily leans on its type system, while Go is very simplistic in comparison. (This has improved somewhat with the recent addition of generics.)


Yup. For example, in both Rust and Go you can cheerfully open a filename, based on a string you found in some JSON. But the reason why you can do that is very different, and reveals important foundational differences.

In Go the answer is that strings are just some bytes, and filenames are just some bytes and so this naturally just works.

In Rust the answer is that strings are AsRef<Path> and you can open a Path, so when you call open the compiler gives it a Path even though that isn't what you actually had (you can't mutate the Path via this reference, so we know open doesn't change it).

The difference becomes more stark if we go the other way, starting from a list of files in the current directory and writing a JSON file.

In Go if you get a list of all the filenames in the current directory, it's a list of strings, and the fact that those aren't actually text is your problem, you will need to explicitly take care of this or you can't emit valid JSON.

In Rust, you get Paths, and you're going to need to explicitly ask for the strings to make JSON, at which point you have to decide what you want to do if the Path isn't just text, you're obliged to decide, even if it's just panic (ie abort the program).
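
A minimal sketch of both directions (file names are illustrative):

  use std::fs;

  fn main() -> std::io::Result<()> {
      // Going in: a &str works anywhere AsRef<Path> is expected,
      // so opening a file named by JSON-derived text is one call.
      let name: &str = "from_json.txt"; // hypothetical file name
      let _ = fs::read(name);

      // Coming out: directory entries are PathBufs, and turning one
      // into a String for JSON forces an explicit decision, because
      // not every filename is valid UTF-8.
      for entry in fs::read_dir(".")? {
          let path = entry?.path();
          match path.to_str() {
              Some(s) => println!("valid UTF-8: {}", s),
              None => println!("not text: {}", path.to_string_lossy()),
          }
      }
      Ok(())
  }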


> In Go if you get a list of all the filenames in the current directory, it's a list of strings, and the fact that those aren't actually text is your problem, you will need to explicitly take care of this or you can't emit valid JSON.

You'll still emit valid json. The encoding/json doc says:

> String values encode as JSON strings coerced to valid UTF-8, replacing invalid bytes with the Unicode replacement rune.


The original filenames will be lost though, if they weren't utf-8.


But to be fair if we're sending them as UTF-8 in a JSON file we can't do anything about that.

I've had situations where I am obliged to write output to XML, and clients are like, "You can't lose this weird data in our text." That's not me, that's XML: in XML 1.0 most of the ASCII control characters are banned (not in a "you must escape this character" way; they are banned if you write them as text, escaped or not, and some parsers won't read the resulting file because it isn't valid XML). You can either admit you want binary data, maybe wrapped as Base64 inside the XML, or you can accept that in XML those characters are toast. I would turn all the banned characters into U+FFFD in XML.

JSON doesn't have that defect, but it still can't magically make non-text into text, and some filenames are not text. So this feels fair enough. Chances are, if your JSON format is so capable that you can write "here's a non-text filename" and have that work, almost all the people parsing it will ignore that case, and you didn't really improve interoperability at all.
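
A minimal sketch of that sanitization step, in Rust for illustration (the function name is made up); it maps everything outside the XML 1.0 Char production to U+FFFD:

  // Characters XML 1.0 allows: tab, LF, CR, and the ranges below.
  // Everything else (notably most C0 controls) becomes U+FFFD.
  fn sanitize_for_xml10(input: &str) -> String {
      input.chars()
          .map(|c| match c {
              '\u{9}' | '\u{A}' | '\u{D}' => c,
              '\u{20}'..='\u{D7FF}'
              | '\u{E000}'..='\u{FFFD}'
              | '\u{10000}'..='\u{10FFFF}' => c,
              _ => '\u{FFFD}', // not a valid XML 1.0 character
          })
          .collect()
  }

  fn main() {
      assert_eq!(sanitize_for_xml10("ok\u{7}bell"), "ok\u{FFFD}bell");
  }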


Aha, useful. Good catch.


I agree - Go's type system would perhaps be better described as strict. There are rarely cases in which conversion is implicit (not particularly unique to Go, but useful). Between types and aliases of those types? No. Between signed and unsigned? Of course not. Between string and []uint8? No. What about less precise to more precise? Nope. This can be a pain, but overall it avoids some classes of bugs present in languages like C (without getting strict about your compiler flags, anyway) and allows you to use types to encode your problem in a way that prevents dumb mistakes.


Especially with how Go handles default values. Suddenly a value wasn't present during deserialization and now that's a nil pointer. Or you have a meaningful zero, so you can't tell the difference between missing and user choice without a deeply awful extra check.


A couple scattered thoughts:

- Defer is nice for cleanup, but I think destructors are the gold standard there (see the sketch after this list). They're even simpler to reason about, you can't forget to invoke them, and combined with move semantics they automate away a really wide range of cleanup scenarios. Another interesting difference is that adding destructor-based cleanup to an existing type that didn't previously have a destructor is often a backwards-compatible change. Lastly, GC'd languages like Go usually need to include some sort of finalizer mechanism in addition to the defer syntax, and finalizers are surprisingly complicated.

- I don't think C should get full points for zero cost abstractions. There's a lot of pressure to use void*'s or intrusive structures in place of true generic containers like std::vector/Vec, and that comes with runtime overhead in practice.
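
A minimal sketch of the destructor version in Rust (the Tempfile type is made up for illustration): cleanup runs on every exit path, with nothing to remember to defer.

  struct Tempfile {
      path: std::path::PathBuf,
  }

  impl Drop for Tempfile {
      fn drop(&mut self) {
          // Best-effort cleanup, tied to the value's lifetime;
          // runs on early returns and panics too.
          let _ = std::fs::remove_file(&self.path);
      }
  }

  fn main() -> std::io::Result<()> {
      let tmp = Tempfile { path: "scratch.tmp".into() };
      std::fs::write(&tmp.path, b"work in progress")?;
      // ... do work; even if we bail out here, the file is removed
      Ok(())
  } // tmp dropped: file deleted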


If you are going to mention Odin, Jai, and Go then you should mention Vlang too (https://vlang.io/) because it's in that category.


You could say it is a Cambrian explosion of non-JS langs, if you will.


Zig should go under small language and modular, imo


12. Explicitness!!!


So Hare's target niche is people who want to use Hare? In the first blog post and the ensuing conversation here, that wasn't clearly stated (at least as far as I can tell). It seems a little strange that a programming language's niche would be "idk, why not"... but then again, why can't it be?


Hare's target niche is stated on its home page:

> Hare is a systems programming language designed to be simple, stable, and robust. Hare uses a static type system, manual memory management, and a minimal runtime. It is well-suited to writing operating systems, system tools, compilers, networking software, and other low-level, high performance tasks.

I would be happy to clarify further if you're still unsure of what kind of programs are well-served by being written in Hare.


Yeah that's not really a niche as much as it is a description of what Hare is. Maybe what would be more helpful are some examples of use-cases you and the rest of the Hare team have in mind when developing this language.


I feel like we're really picking nits at this point. We're using it to develop these things. There are ongoing projects to make a kernel in Hare, POSIX utilities, a password manager, graphics libraries, and so on. There's a list of ongoing projects here which may make this clearer:

https://sr.ht/~vladh/hare-project-library/

I hope that helps.


That's still a list of projects possible to write in Hare, not necessarily good to write in Hare. At this stage I believe (and if true, hope you will clearly state) that Hare exists in the same realm as Sourcehut, that is, the alternative world you and like-minded people prefer. Many programming languages exist mostly to fulfill their creators' egos and yet completely deny that fact, which would be unwise in my opinion.


What I’m missing are comparisons. A few examples illustrating what makes Hare preferable when compared to C/Rust/Zig.


Every language would do us (programmers) a great favor by having a page titled "Pros & Cons" or even "When to use X / When not to use X", which could include comparisons to other languages as well. Maybe it's hard to sell your language if you have to include the cons and the when-not-to-use-X cases too, but it'll help gain trust for sure.

In the case of Hare, it doesn't seem to be good if you want to be able to have anything running on Windows/macOS, as one stated goal is "Hare does not, and will not, support any proprietary operating systems" according to https://harelang.org/platforms/, which makes it very unattractive for me at least, as I move across three platforms daily.


How do they expect to gain traction if they refuse to support two of the three platforms software is developed for?


In Hare's defense, not every programming language needs to target every runtime/OS/environment.

JavaScript only targets the browser. Shaders usually target one graphics runtime. Maybe there is a space for a language that just targets a few OSes, or even just one? Remember that both C# and Swift initially targeted just one platform, so maybe it does make sense to now have one OSS language for OSS nerds.


While that was true for years, Node.js has made JavaScript viable on the server for quite a while. But I agree with your overall point. I think JS and C# are good examples of languages that were seen as valuable enough to start using in other environments. I love C#.


Well, naturally, with people who use the other one: Arduino. I mean FreeBSD. Or maybe FreeRTOS? Wait, Android. Or NetBSD. Or Wasm. Or JS. Or the JVM. Or the .NET CIL. Or ReactOS. Or SerenityOS. Or FreeDOS. Or Illumos. Or OpenSolaris, heh.

Or just regular old GNU/Linux like 70% of the Web. But either Android installs or FreeRTOS installs easily outnumber those.


I really want to avoid measuring Hare up against other languages. I think it stands on its own merits, and would prefer that people evaluate it in earnest rather than compare bullet points with another language. Nevertheless, there does exist a (dated) comparison with C:

https://harelang.org/blog/2021-02-09-hare-advances-on-c/

And if you have any specific questions that you want to frame in the context of another language, I will do my best to answer them.


If you want people to evaluate it on its own merits, then make those merits stand out. As it stands, your intro paragraph ("Hare is a systems programming language...") is so generic that it doesn't tell me why I might want to use it instead of C or Zig or assembly. Where does it excel? What makes it stand out? Bullet points help me quickly decide whether I want to spend time evaluating a language.


> I think it stands on its own merits, and would prefer that people evaluate it in earnest rather than compare bullet points with another language.

I’m definitely not looking for bullet point comparisons.

Rather, I’m looking for an example of implementing some concrete functionality in both Hare and — say — C, which highlights the advantages of Hare over C in implementing this specific functionality.


Why do you need comparisons? Why can't you just experiment with it and use it if you want to?


Time has value. If the prospect isn't clear, the tradeoff of time to investigate is a waste. People are risk averse and want to understand the benefits the language was designed for relative to competing languages.


To make our lives easier. There is only so much time in a day and we would rather spend it well.


The sibling comment already says most of it, but the non-presence of a succinct comparison table is IMO a signal in itself that the language/project is a toy and not ready to be considered for production use.


I'm not sure I've seen such a thing for most languages, but maybe I'm used to just reading what a lang is about and mentally slotting it for certain things by myself. I bucket things by static/dynamic types, dominant paradigm, syntax style, GC/no GC, and stated goals, which include what it was designed for. Most of those are easy to discover.


Oh absolutely, and that makes sense if the language is being pitched to individuals. But as soon as you want to convince your boss to let you use it for a project, that executive summary level info is essential. They (rightfully) do not have the time to infer it or "play around", they need a clear "it's X but better because Y, Z."


No languages provide any such comparison that I'm aware of. Rust does not, Zig does not, C and C++ do not, JavaScript does not. Are all of these languages toys?


Ha, good call. IMO C, C++, and JavaScript are all kind of in that special case basket where there were significant domains in which you had to use them (JS for the browser, C for unistd.h, C++ for the Win32 API), so they didn't really have to compete with anything for long enough to become entrenched.

Zig, though, does very clearly lay out its pitch right on the homepage, and although it's maybe not a table of checkmarks, it's a series of pretty clear shots-across-the-bow at other languages, in particular C and C++.

Rust similarly has a "Why Rust" block above the fold on its homepage; it's not quite as terse as the Zig one, but it's clearly that same executive-level pitch.

Hare's homepage has: "Hare is a systems programming language designed to be simple, stable, and robust. Hare uses a static type system, manual memory management, and a minimal runtime. It is well-suited to writing operating systems, system tools, compilers, networking software, and other low-level, high performance tasks."

Maybe there's a case to be made here that these bald assertions are no different than what Zig and Rust claim about themselves. But I also think it's reasonable to have different expectations around this for a brand new project vs ones with years of track record and existing mindshare.


There are dozens of languages out there I haven't programmed in. If I were going to learn a new one, I'd have to base that decision on something beyond just vibes.


> I am even more frustrated with the moral crusaders from languages like Rust, one of whom went as far as to suggest that I should personally be criminally prosecuted if some downstream Hare software has a use-after-free bug.

It seems to me that these crusaders (and I've seen a few) think that because you shouldn't build a big bridge out of wood, you shouldn't build anything out of wood. Is the thinking more sophisticated than that? Honest question (and I code in Rust).


I also write Rust professionally, and I agree with another commenter that this is likely trolling.

However, there is definitely a subset of the community that is so bought into the idea that memory safety is an Absolute Good that any new developments, projects, or languages that don’t make it a priority are a priori bad.

I am a huge fan of rust, but we should let the language stand on its own merits and not assume that it is the only valid choice. Just because I and/or my company prioritize the features rust gives us doesn’t mean it’s the perfect solution for every problem.

That said, I do feel like this view that rust is the only valid language is a minority one in the community. It just is a bit loud at times.


I wonder how many of the True Believers know, let's say, five diverse computer languages fluently. I know it's non-zero, but I bet that would cut a lot of them out.

(A "true believer" for my point here isn't someone for whom Rust is their favorite language or their generic first choice; it's someone who gets angry if someone else doesn't choose Rust for some task, and especially gets publicly angry.)

I bet a lot of the True Believers learned some language that isn't very good, like C, or C++-as-taught-by-schools (which is a very bad language, much worse than C++ as a whole!), or Javascript at its worst, and then encountered Rust. Hey, I get it, that would be a pretty big leap! But you've got a path dependency in your opinions there.

I'd encourage any such person to broaden their horizons a bit. It's OK. Rust really is a pretty good language and you probably won't change your opinion of it much. Plenty of people I know and respect who do know many languages still have Rust as their favorite and general default language. But it is not the only good language in the world, and other languages do offer things Rust does not. Consider trying out the Erlang environment (either via Erlang or Elixir), or Haskell, or Lisp.


C++-as-taught-by-schools, that's a good way to describe it. Now, for a complete beginner, as a first language it's actually not so bad (this was my intro to programming) as long as you realize it's bad. The course structure is basically variables -> loops, conditionals -> arrays -> functions -> references & pointers -> dynamic memory allocation -> i/o files -> structs & classes -> encapsulation, polymorphism & inheritance. Maybe they'll introduce the STL vector, and maybe the smart pointer. After that, you're probably ready for algorithms and data structures: search and sort, stacks and queues and hashmaps and trees and graphs, all taught basically as 'C with classes', the justification being, you need to learn how these things are built under the hood (as if you're going to write C++ libraries for production, which you most likely aren't, but you should be aware, is their argument). Well, it's an education anyway.

Then (now you're about a year in), someone will tell you, you poor thing, you've been abused by being taught that way! Use modern C++ and the STL and never use a raw pointer again. Drop that OOP stuff, learn about lambdas and functional C++ instead. You then read flame war threads about the proper way to do things, which you finally learn to ignore as they're just people with inflated egos throwing things at each other online.

Finally you understand: people write firmware in C because it's about as low-level as you can get without going to assembly; people write big projects and games in C++ because of all the libraries and the STL and good performance speed-wise; and then there are people who've abandoned C and C++ for Rust because of the memory safety issues and perhaps the convenient build system. And you never ever again write the kind of code you wrote for your assignments in your C++-as-taught-by-schools courses.

Finally, you take a few Python courses and marvel at how much easier it is to code in Python, but you feel a bit better off than those who learned to program in Python, because you at least know what pointers are. Then someone tells you, hey, learn some Java too, it's easy to get a boring corporate job if you know Java. Anyway, that's what schools are teaching right now in their core intro CS curriculum, more or less.


> Then someone tells you, hey, learn some Java too, it's easy to get a boring corporate job if you know Java. Anyway, that's what schools are teaching right now in their core intro CS curriculum, more or less.

Your description is accurate but a little off, because most students, at least in America, learn Java first as part of their AP Computer Science course, and a lot of colleges use Java as an intro language for this very reason.


> Consider trying out the Erlang environment (either via Erlang or Elixir), or Haskell, or Lisp.

Isn't the whole point that Rust is a replacement for C/C++, specifically the whole "close to hardware" and (basically) zero-cost abstractions? If you can afford things like a GC or a VM there are way better languages, that's for sure.


But wood is so unsafe, it can burn, it can rot, you need to put the nails at the correct spot and most woodworkers are dumb enough to not do that correctly.

It should be criminal to build anything out of wood. Heard of forest fires huh? Guess what, they are made of wood!

---

Sorry, I couldn't resist.


Funnily enough, it might be safer to build big buildings out of wood than steel, as wood takes longer to collapse in a fire than, for example, steel (https://www.nationalgeographic.com/science/article/skyscrape...)


I hear witches are made out of wood...


It is, in fact, illegal to build things out of wood. Unless you adhere to a set of best practices called the Building Code.


> most woodworkers are dumb enough to not do that correctly.

Our woodworkers have a track record of exactly that.

It's one thing to laugh at wooden bridges. The other is that we already have a charred landscape full of them.


> Is the thinking more sophisticated than that?

I think the way it usually works is that a less reasonable one makes an outrageous statement, and later someone more reasonable-sounding puts context or nuance around it.

So a single crusader may be dismissed as an outright troll, but if a few work together they can be much more effective at evangelism.


No, it isn't. On the other hand, you are living in a world where web browsers are written in C++. I hope we can agree this is as tragic as wooden bridges.


Browsers have quite complex interactions with operating system runtimes, and this is largely a huge pain in Rust, especially on macOS. Binding crates are outdated and incomplete, and you'd spend lots of resources writing (unsafe, crash-prone) wrappers for a shifting target, which is work you wouldn't have to do at all if you used (Objective-)C++.

Software engineering is more than writing a program, there are economic factors at play too.

That being said, I also wish my browser was written in Rust (and my OS too!).


The announcement mentioned in the article was discussed here¹, along with some interesting user perspective².

¹ https://news.ycombinator.com/item?id=31151591

² https://news.ycombinator.com/item?id=31156298


The barrier to displacing any language is so high that you should consider such an endeavour to be a decades-long effort with a low chance of success. The best you can probably hope for is for something to live in a similar space and being large enough to be viable.

Existing code and existing engineers are a massive barrier-to-entry.

One trap engineers fall into is that we tend to exaggerate the importance of certain problems. Verbosity in Java is a big one. IDEs fill it in for you. It doesn't slow you down. It's a complete non-issue.

Zig is interesting. I don't know a ton about it. My sense is that Zig is to C what TypeScript is to JavaScript. I mean, Zig isn't transpiled into C, but the point is that it seems to be very closely related, so the transition should be fairly easy.

Rust most closely competes with C++ (IMHO) but it does something really interesting that C++ just can't do: it tackles ownership and memory safety at compile-time. Yes, C++ has smart pointers but these incur a runtime cost. C++'s features, history and (dare I say it?) baggage mean C++ can't do the same thing.

I personally consider this to be an increasingly important issue so Rust has a definite niche. But will it displace C++? The odds aren't in its favor. But it will certainly be viable.

C is the funniest one though. Asking "Will X replace C?" is a bit like "Will [search startup] replace Google?" The startup landscape is littered with the corpses of Google-killers. Likewise the language landscape is littered with the corpses of C-killers. So my money is on "no".


I think D's approach is the right one. It's not a replacement for C, it's a supplement, with no barrier to entry. You can do all of the following: compile C code and call it directly from your D program (no bindings needed), compile D code and run it directly from your C program (no bindings needed), write C code using basically the same syntax but with some additional features (betterC), or write a D program and interoperate with C libraries without writing bindings.

I think a lot of C programmers dislike some parts of the language - something that's true of any language - but they like writing C code the way they have for the last X years. If they're willing to give up the preprocessor, they can keep using C. There's no need to replace C. D has support for all the platforms of GCC and LLVM. That's obviously not as many as C, but it's a lot.


> Verbosity in Java is a big one. IDEs fill it in for you. It doesn't slow you down.

It most definitely does! Perhaps not when writing code. But it does take longer to read/scan verbose code. I'd also argue many kinds of refactoring are made slower by verbosity.


> I am even more frustrated with the moral crusaders from languages like Rust, one of whom went as far as to suggest that I should personally be criminally prosecuted if some downstream Hare software has a use-after-free bug. My goal is not to force anyone who doesn’t like Hare to use it, or issue judgements upon projects which choose another language. In return, I will be pleased if members of other language communities refrain from flaming too much on Hare.

This might be referring to comments elsewhere, but I thought there was a pretty thought-provoking debate about safety tradeoffs in the Hare intro thread.[1]

Which I'd summarize as: let's say we now know how to prevent, say, 70 out of 100 security bugs in C codebases, without performance compromise, by statically ruling out things like buffer overflows and use-after-free; and we also have good evidence that it's hard to retrofit detection for the bugs your language ecosystem is bad at detecting. Is it a good idea to make a language that prevents _most_ of those 70 mistakes, but not all that we know how to prevent, in exchange for being simpler, and therefore reducing the other 30 mistakes and getting more software done that helps people? Or would it be better to avoid investing in or relying on new languages with that tradeoff for infrastructure code, and focus on seeing how simple a language can be that prevents all 70?

Which isn't a logic question, but an engineering question: does the mostly-safe language prevent 65 out of 70 memory bugs, or 30 out of 70? Does it let you get twice as much done as the safer language, or 10% more done? Does it result in fewer logic bugs than the more complex language, or the same number?

I don't know, but I'm interested, because I want the next billion lines of code that affect me to do useful stuff and not break. "My goal is not to force anyone who doesn’t like Hare to use it" isn't really an option; I'll be impacted by all the code people write in every language. So: I'm happy to see people make new things that test a new point in the design space! But I'm _also_ happy to see other people say, wait, before I end up with a ton of this code tucked into the lower levels of my machine and the other hundred billion machines wired up to it, what mix of features would convince me that "less safe than we know how to make new languages" is still safe enough for a new language in this case?

[1] https://news.ycombinator.com/item?id=31151937


> moral crusaders from languages like Rust

I think I speak for a majority of the Rust community when I say that we're embarrassed by this sort of behavior and we desperately wish that folks would stop it.


To be honest, I've seen more people complain about the Rust community being full of crusaders than actual calls to RIIR. I've seen like 2-3 calls on GitHub, while any Rust-adjacent thread complains about Rust crusaders.


Or nothing at all. Rust has been fighting an uphill battle for 10 years, which shows how hard it is to replace C or any other language. It might be Rust's borrow checker that prevents adoption, but I doubt it (though it's the reason I use Rust but don't love it).


I think the problem with Rust and most other languages, is illustrated by what Zig (for me) does right:

My main motivation for using Zig is that it's rapidly becoming a better C toolchain than GCC or Clang. So I'm extremely motivated to compile all my C code with Zig when it becomes possible. After that it's only natural to write more of my code in Zig.

So, my hypothesis is, a language that could replace C has to have a better C compiler built-in than existing C compilers.

Another side of this, is that it has to be extremely easy to use C from the new language and vice-versa. Something I also think Zig does mostly right.


This is a pretty good hypothesis given the history of C++.


I don't know about better, but at least almost as good of a C compiler built in seems like a big selling point. So that's, what, Objective C, C++, D, and Zig?

Perl has Inline::C that allows snippets of C code to be placed directly into Perl code like many C and Pascal compilers allow inline assembly. That's for interoperation and an occasional optimization of course, since Perl is in no way a C replacement for much of what's done in C. Do you foresee a replacement language having such inline sections, being a superset language like C++ or Objective C, or being able to just handle separate files/modules in separate languages?


Rust just crossed 10% in Firefox. Rust is steadily replacing C and C++ in Firefox, as it was designed to do.

https://4e6.github.io/firefox-lang-stats/


You mean, in the application maintained by the organization which devised Rust and is its main proponent? Somehow I am not very impressed.


Rust was created to improve the situation where web browsers are written in C++. (I hope we can agree this is bad.) It was not created to impress you.


The combined total of Rust, C, and C++ makes up only about half of the Firefox codebase. So it's more like 20% (of the code that it might theoretically replace).

Because the code written in HTML and Javascript isn't going to be rewritten in Rust or another language, probably ever.


I think one thing that has helped Rust make the first serious advances towards C that we've seen in a long time is also the development tool chain. Getting started with Rust on any major platform, besides embedded development though that's coming along, is so much easier with Rustup and Cargo than the bizarre (to the uninitiated) work you have to do to setup and actually understand your development tool chain in C as well as C++.


Yes (having worked with Scala, Python, and JS/TS in the past), Rust has the most stable, joyful, and complete build environment, with rustup, cargo, clippy, audit, etc.


> work you have to do to setup and actually understand your development tool chain in C as well as C++

Open Visual Studio's (or Qt's) 'File' menu, 'New C/C++ project'. Press F7 to compile.

No command line, no cargo build, no cargo run, no cargo.toml. Just press the green "play" button and your full graphical application shows up.

Embedded with Arduino C++: 'file' menu, then 'new sketch', click the arrow to compile and download to the board.


I work mostly in terminal and won't use clunky IDE programs. I need to be able to

0. Export/upload my source

1. Import/download my remote source

2. Compile my source for different targets

3. Cleanup my source and my compiled source

5. Do all the above for different versions/locations of my source

6. Do all the above for source not my own

7. Do all the above with a simple command and or a simple key->value in a config file

Then I need to do all the above with

8. using a terminal on my personal computer

9. on a remote server using ssh'ed terminal

10. in a script automation file like a docker file or maybe a vim script that setups my dev environment

11. Have 0 - 10 be so easy to do, basically a simple command or two, that I don't have to spend days figuring out tooling FOR EACH TASK and googling mystical error messages for solutions on obscure forums posted 2009.

I stayed with Rust because it has good tooling. Because it's the one thing that hasn't driven me insane at one point or another. I wanted to program in Haskell and C, but I am so sick of the terrible tooling and the terrible conventions of the communities (something Go at least gets right) that I instead program in Rust, a language I enjoy less for its merit as a language.
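For what it's worth, most of that list maps onto the same few commands everywhere (a sketch; the target triple is only an example):

  cargo publish                                   # 0: upload my source as a crate
  cargo fetch                                     # 1/6: download the source of dependencies
  cargo build --target aarch64-unknown-linux-gnu  # 2: other targets (after rustup target add)
  cargo clean                                     # 3: remove everything that was built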


Mind you, I haven't even mentioned the complexities involved in making dependencies optional or platform-specific.

I haven't mentioned conditional compilation.

I haven't mentioned respecting the configuration options of dependencies.

The tooling has to at least do all that using the same syntax for commands and configuration. It has to be predictable enough that I can almost figure it out myself. No black magic.


You have to do a lot of things. I don't envy you at all!

I work mostly in IDEs and won't use the clunky command line if I have an alternative. After all, it's the year 2022. Why work like it's 1991? I don't want to spoil the end of the movie for you, but I have this hunch that graphical user interfaces are the future.

And I don't see what you can't do with VS or Qt. Maybe setting up your toolchain with a script or using terminals (which I don't know what have to do with programming IDEs).

I do most of the above with fricking KEIL uVision for a myriad of microcontrollers!

I compile and edit whole Android OS with VSCode and the integrated SSH terminal! (I hate to use the command line, though!).


The point of not strapping a clunky IDE to my workflow is that a terminal offers the flexibility and mobility that I (and everyone who programs in 2022; maybe not you, though) need to fulfill line items 0-10.

Cargo add pkgname on my computer.

Cargo add pkgname on a ssh session.

Cargo add pkgname in a docker script.

Wow, the process on my dev PC is the same everywhere else! Amazing. What do you do to automate deployment? SSH in and install your whole graphical tool, then write a mouse macro to click the compile button?

You are taking a vacation with a U-Haul truck filled with your furniture. I am taking a vacation with a backpack.


It seems we don’t go on vacation to the same places.

There is no need for ssh’ing or recompiling.

For embedded I just give a bin file with everything in it. Use your favorite flashing tool.

For the rest, I can make an installer right from the ide. You know, the kind of 3 click installers the whole world is used to in 2022. An installer I can give grandma. Huge success from the 90’s.

Or maybe redistribute my program with a mobile app market. I can do that from my ide too!

Or maybe I’ll just handle a zip file with everything in it.

I’m afraid you can’t see it’s YOU who are in a niche, and not the other way around.


App store? A windows installer? So you just make user apps? Then publish them on existing platforms built by programmers who had to think about all the complexities you blissfully pretend don't exist?

Just email me a binary to flash with a usb stick? Here's a link to download a zip file, just drag and drop into file explorer. Totally scalable stuff. But then again you just make calculators for the android app store, so I guess you're fine.


Nope. I do embedded C/C++ and what you can read above. I occasionally do some tooling for desktop or low level APIs. But I know what ides are capable of.

Don't make the mistake of thinking you are better than others, otherwise you'll keep feeding the general feeling of Rust cultism and you'll end up programming in niche languages from the command line, 1991 style.


You are really obsessed with this 1991 stuff. And here I thought the common prejudice was that only boomers were still stuck using IDE programs. I think you believe yourself to be in an ego battle and that's just not the case for me.


I was discussing "work you have to do to setup and actually understand your development tool chain in C as well as C++", and I mentioned IDEs' ease of use. Then you came along saying that the command line is the way for "everyone who programs in 2022"... which is clearly the opposite. If you don't want to see that... I'm sorry, but this conversation doesn't have a point.

And you mentioned "boomers", so I know exactly who I am talking to.

My ego walks away unharmed from this conversation.


Actually, I did not start by mentioning anything about years or what is up-to-date. You were the first one to bring up 1991 and current year as a derision.

I believe you have a capable need for the tools you choose to use. I wish I could have sympathized, but the snark you engaged with set the wrong tone.


Now walk us through how you would use a relatively large library in your new project (e.g., libav or boost). And then show us how to cross-compile both for another OS + arch.


libav:

On Windows: download the shared build from here (1), unzip anywhere. Right-click on the project, select "Project settings" (IIRC), then point to the .libs you want to use (or add them all) and add the include path (ffmpeg/include). Done.

On Linux (Qt): install with apt-get. Point to the libs and add the include path (in the .pro file).

The only complication is that you have to wrap the #includes with extern "C" {} if you are on C++.

I did it for both platforms the other day and it works. Super easy.

Cross-compile: on Qt you do it through packs you select at install stage. Sometimes it requires the runtime to be installed (like for Android) but it's not complicated.

On VS, I don't know today. I remember for Windows CE you had to install an SDK, and compilation/debug was out-of-the-box (F7 to compile, F5 to debug in-target). And I used Platform Builder for years, and I cross-compiled the entire OS from the menu.

On Arduino: select your board from the menu. Also, installing libraries is done through a menu. Pulled from online resources automatically.

But I guess I will not convince you. You'll come up with another challenge.

(1) https://github.com/BtbN/FFmpeg-Builds/releases


You have just (inadvertently) demonstrated the exact problem: if you don't have a pre-built binary your job is significantly harder. But everyone knows that and most popular libraries do have a pre-built binary officially or unofficially. That makes this kind of discussion harder, because someone is talking about the worst case and another (in this case you) is talking about the best case.


Parent gave two "big libs" as an example (was it a "worst case"?); I chose the one I used the other day, and I remember how I used it.

I also remember that I took the "hard" path on Linux and rebuilt ffmpeg, but it was like "git clone", "./configure", "make", "sudo make install". I had to use the command line, but nobody died (being a huge C project and all that).

But we were talking about IDEs vs the command line and "actually understand your development tool chain in C as well as C++". Of course niche, worst-case projects are going to require more complicated steps. The thing is that the command line and clunkiness are being forced onto the other 99% of cases too, for simpler projects and dumber persons like myself.


> Parent gave two "big libs" as an example (was it a "worst case"?); I chose the one I used the other day, and I remember how I used it.

I consider them to be the best case, because everyone wants them, so there has been some improvement down the road. I think Cyph0n picked the wrong example for that reason.

> I also remember that I took the "hard" path on Linux and rebuilt ffmpeg, but it was like "git clone", "./configure", "make", "sudo make install". I had to use the command line, but nobody died (being a huge C project and all that).

That is a happy case for the reason I've said before. In my experience many libraries do not build that cleanly; it is a routine challenge to determine which flags I have to pass to configure or which libraries I have to install (and they might not exist in my distro, so I may have to build them as well). Also, `sudo make install` alters the global environment, which is never a great idea in Linux distros; so you have to either make sure to run `./configure --prefix=$HOME` or the like, or just run `make` and pick the necessary bits out of the build directories yourself. The latter is painful, but the former risks two competing versions of the same library in the global environment.

Honestly though this experience greatly depends on tasks, and you may have barely hit those worse cases in your life. The Windows SDK and Android SDK you've initially cited are two examples where almost everything is available for you and you don't need as many libraries to continue on.


Just to compare, here's how you would add the latest version of libav to a Rust project using Cargo:

  [dependencies]
  # Safe Rust wrapper
  ffmpeg-next = "5.0.3"
or

  [dependencies]
  # Direct FFI bindings
  ffmpeg-sys-next = "5.0.3"
Note that libav will be built from source, but it will be cached for subsequent builds of your project.

And thanks to the build script maintained by the crate (library) owner, this process should work exactly the same across major OSes and architectures transparently.

Edit: Here is the relevant part of the build script that actually configures and builds libav based on Cargo feature flags and configured rustc OS + arch: https://github.com/zmwangx/rust-ffmpeg-sys/blob/499fca3630fb...
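And consuming it from code is then ordinary crate usage. A minimal sketch, assuming the ffmpeg-next API (the file name is made up):

  fn main() -> Result<(), ffmpeg_next::Error> {
      ffmpeg_next::init()?;                                 // initialize the linked FFmpeg libraries
      let ictx = ffmpeg_next::format::input(&"clip.mp4")?;  // open a media file
      println!("duration: {}", ictx.duration());            // in AV_TIME_BASE units
      Ok(())
  }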


Really? It seems more complicated than that...

https://github.com/zmwangx/rust-ffmpeg/wiki/Notes-on-buildin...

And "Install FFmpeg (complete with headers) through any means, e.g. downloading a pre-built "full_build-shared".

Come on...


Ah, should have been more explicit: you also need a working C compiler (+ make, pkg-config, etc.) and the libav development headers.

As for libav itself, if prebuilt libraries are not present, the build script will fetch the source and build it as a static lib.


Maybe it's just because I am more familiar with C/C++, but I disagree.

Cargo is very opinionated, forcing upon you certain directory structures and even a default VCS. If you have a large project, cargo makes you jump through hoops. Building multiple binaries and libraries from one project is a pain, and has to be done exactly as cargo wants.

In terms of being able to "just get started", I think `gcc app.c` is way easier than using cargo. But that's not really important. You're not going to replace a million line C project with Rust just because "getting started" was a few seconds faster/slower.


You can `rustc app.rs`, and do all the C Makefile stuff that way too (example in the kernel: https://github.com/Rust-for-Linux/linux/blob/rust/rust/Makef...). Nothing is forcing you to use Cargo except for convenience.
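A one-off build without Cargo is just (a sketch):

  rustc --edition 2021 -O app.rs -o app   # single file, optimized, no manifest needed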


> Cargo is very opinionated, forcing upon you certain directory structures

No, it expects source code in `src` like most C/C++ projects anyway. Then it's `lib.rs` or `main.rs`. The opinionated directory structure (`pkg.rs` or `pkg/mod.rs`) is rustc not cargo.

> even a default VCS

Because it's the one used by most projects. Anyway, nothing prevents you from using mercurial, svn, ...

> Building multiple binaries and libraries from one project is a pain

Have you heard of Cargo workspaces? Because I've had no troubles with it.
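A single top-level manifest covers the whole tree (a sketch; the member names are made up):

  [workspace]
  members = ["core", "cli", "gui"]

Each member keeps its own Cargo.toml, and a `cargo build` at the root builds them all.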

> I think `gcc app.c` is way easier than using cargo

Agree to disagree. Because really soon, you'll want to manage dependencies.


> it expects source code in `src` like most C/C++ projects anyway

In C/C++ there's no expectation of the code being anywhere.

> Because it's the [VCS] used by most projects.

I don't see how a VCS being popular is a reason to require it. It's not even "essentially all projects", just "most projects".


> I don't see how a VCS being popular is a reason to require it [...]

Neither do the authors of cargo:

`cargo new --vcs none`


> In C/C++ there's no expectation of the code being anywhere.

That's not what I said. I said that most projects do it this way anyway, regardless of any expectations. So who cares?

> I don't see how a VCS being popular is a reason to require it.

It is not required.


And linking? Dependency management?

C & Rust come from different eras of easy-getting-started, IMO:

C - I just want to get shit done, I'm not taking on any dependencies, I'll just write it myself and sure it won't be the best but it'll do what I need and keep it simple;

Rust - I just want to get shit done, surely there's a lib for this and that, I can just glue them together.

So with Rust that's no harder to do than doing anything at all; with C it's quite a bit harder but also a bit easier if you don't.

Agreed that's not going to hold any weight in deliberating switching a million line project, but I do think it matters what hobbyists/spare-timers are using, what people want to work with, etc. Slowly.

(E.g. my day job is mainly python; if we needed something highly performant or embedded or whatever I'd be way more comfortable in Rust than ropey university-C.)


That's Cargo's by-convention auto-discovery. You can manually specify the same information in Cargo.toml if you prefer to customize it.

https://doc.rust-lang.org/cargo/reference/cargo-targets.html...
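For instance (a sketch; the names and paths are made up):

  [lib]
  path = "source/mylib.rs"

  [[bin]]
  name = "mytool"
  path = "source/tools/mytool.rs"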


Does Cargo have the equivalent of Maven’s archetypes? Eg, project templates to generate a particular directory structure, some default files, etc.


Currently, it has two built-in ones for `cargo new` and `cargo init` (--bin and --lib) and there are third-party tools like `cargo generate` which provide for more, but they haven't accepted anything into the main distribution yet.

https://github.com/cargo-generate/cargo-generate
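Usage looks like this (the template URL is a placeholder):

  cargo new myapp        # binary template, the default
  cargo new --lib mylib  # library template
  cargo generate --git https://github.com/user/template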


When you set things up on your local machine, then this tooling is great. However, once you want to package your stuff, or want to maintain support for a stable distro, you're in for a hell of a ride if you decide to go with Rust. It's not impossible, but it can give you nightmares.


What is currently missing from https://github.com/cross-rs/cross ?


I don't think cross is related to what I'm talking about at all? In fact, relying on it likely makes things worse for downstreams.

For packaging, you want your dependencies to be packaged as well, with build system using system-wide versions instead of pulling things from Cargo, with proper dynamic linking. There are several tools that help you manage that for Rust, Debian has some packaging helpers too, but it's pain nevertheless once you have to, say, support multiple crate versions in your code to deal with stable distros. Some of these troubles come more from the ecosystem than the tooling itself.


Ahh, the dynamic linking thing.

To some extent, I see that more as "Distros have ossified around the workflows and tooling that is essentially 'a third-party package manager like Conan for C++, but for C'".

How do you deal with the same monomorphization problems in C++ libraries? See https://blogs.gentoo.org/mgorny/2012/08/20/the-impact-of-cxx...

How do you deal with vendored single-header libraries and bespoke implementations brought about by C's lack of a cross-distro, cross-platform dependency handling story? You generally don't.

https://wiki.alopex.li/LetsBeRealAboutDependencies#gotta-go-...

As that article points out, Rust's approach to package management allows things to be shared between projects which, in practice, don't get shared between C projects.

With Crates.io not allowing existing crate versions to have their contents modified and Cargo.lock providing a SHA256-verified record of the exact dependencies your project used, you've got a record of what build dependency versions went into your package, which can be used by tools like `cargo audit`.
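For example, that check is a single command (assuming the third-party cargo-audit tool is installed):

  cargo audit   # compares Cargo.lock against the RustSec advisory database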

Yes, it means that you need tooling to automate rebuilding the binaries whenever they're statically linked but, as Michał Górny pointed out in 2012, that's already necessary with libraries that use C++ templates and, as was said in "Let's Be Real About Dependencies", "On the flip side, I wonder how many times vlc’s XML parser has been fuzzed?"

While it's slow going, discussion and an experimental prototype do exist for embedding full dependency versioning information inside Rust binaries to further help that approach: https://github.com/rust-lang/rfcs/pull/2801


cc main.c is about as easy as it gets, no?


Now add some dependencies, some required, some optional, for all platforms, or some specific to some platforms.

Then add configuration options for conditional compilation, and a build system to define them properly.

Then use the configuration options of your dependencies in your build system.

Finally, repeat this process for every project, and make sure you use the same syntax/API for your build system so you don't have to relearn everything every time.

cc main.c is as easy as it gets when you rewrite everything from scratch.

cargo is as easy as it gets when you want an ecosystem.


Add a closed source dependency in Rust and see how far cargo takes you.


I assume the closed source dependency is distributed as a .a/.so/.dll file, then you can use bindgen[0].

Still, yes it's a bit more complicated. But can we not pretend that the non-existent toolchain in C/C++ is easier in 99% of use cases?
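The linking side is then a couple of lines in a build script (a minimal sketch; the directory and the library name "closed" are made up):

  // build.rs
  fn main() {
      // tell Cargo where to find the prebuilt library and to link it
      println!("cargo:rustc-link-search=native=vendor/lib");
      println!("cargo:rustc-link-lib=dylib=closed"); // e.g. libclosed.so
  }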

[0] - https://medium.com/dwelo-r-d/using-c-libraries-in-rust-13961...


No, I want a closed source Rust dependency. Otherwise it's going through a C API and I lose all of the benefits of using Rust.

Once you solve this problem, C++ & CMake are of similar complexity for C++ dependencies.

Otherwise, to compare C++ toolchains: if it's distributed as a .a file, just adding /path/to/libclosed.a to the link line (or -L/path/to -lclosed) works for C++; can't get more simple than that. Or one line in the CMake file with basically the same thing.


A .rlib file then, that you can then add with:

  RUSTFLAGS="--extern closedlib=path/to/libclosedlib.rlib"


Yes, but only for tiny tools or toys.

Most software has dependencies that need to be managed, linked in, have their header paths figured out, etc. Most software consists of several files, and you don't want to rebuild all of them every time you change just one file (unless you change compiler flags, etc.). It's a very tiny step from your example until the C or C++ toolchain becomes far from easy.


Does Rust have any other implementations? It seems like that would help adoption as well.


Besides the standard llvm-based compiler, I know of at least two others:

https://github.com/Rust-GCC/gccrs

https://github.com/thepowersgang/mrustc


> It seems like that would help adoption as well.

How do you mean? Where?


I've always felt Rust competes more directly with something like C++. It's not nearly as expansive, but the mental space and use cases feel more C++-ish to me. A language like Zig feels more like a replacement for C.


Rust has only been post-1.0 for 7 years.


I agree, though to me it's not a great difference whether something has been struggling for 7 or 10 years [1]

[1] just for reference: 0.1 was announced in 2012, 10 years ago; 1.0 was announced in 2015, 7 years ago.


According to PyPL's numbers, Rust's growth in popularity has been pretty much exponential since 1.0 with a doubling every ~15 months.


My bet is Nim. Because Python just topped the PL ranks, there are lots of people learning Python as their first/only programming language now. When some of these eventually need to learn a systems language, Nim will have familiar syntax to piggyback on Python's popularity, just like Java and JavaScript piggybacked on C's curly-brace syntax in the past.


The thing about these newer C alternatives is that they have to provide compelling reasons for people to want to use them. Not just being a little more modern than C, but features that put them over their competitors.

In the case of Rust, at least they made safety a thing to hang their hat on. As a competitor, how do you "out-safety" Rust?

Golang carved out a nice niche for itself, and the involvement of one of C's original creators and Google backing it up sure did help out. And Golang has ignited a group of its own alternatives like Odin and Vlang.

Not sure where Hare can find a niche that isn't already occupied or how it's going to make people want to jump ship to them.


> As a competitor, how do you "out-safety" Rust?

Rust is still not totally safe. There's quite a bit of code out there that needs to be panic-safe, run within a statically-sized arena, be guaranteed to always terminate/progress, and so on and so forth. None of these things are ensured by Rust at present (and common language features like arbitrary function calls get in the way of them), while Ada/SPARK and the like at least make some effort to guarantee those.


It's not that hard to out-safety Rust, with its unsafe operations and FFI. The Actix fiasco a while ago was a good example of this. Javascript is the gold-standard here IMO. I can also imagine a systems language that is much safer than Rust, it's not hard if you're familiar with recent developments in the PL realm.


Hare as a project seems to have a very sober, laser focus. I initially didn't get it, but this article made it click.


> Hare as a project seems to have a very sober, laser focus. I initially didn't get it, but this article made it click.

But I don't "get it." Why would I choose Hare over plain old C? What are the motivating features of the language that make it "worth it?"

Remember: It's a pretty big hurdle to write something in a lesser-known language. If I had to convince a team of developers to choose Hare over C/C++/Rust, what would be the argument in Hare's favor?


This [0] article was shared last week about the "why" of Hare. IMO, there are some reasonable improvements over the roughest parts of C - its lack of namespacing and abundant use of sizeof - plus bounds checking (with an escape hatch), forced initialization, and "non-nullable" pointers (i.e. references in C++). However, all of those don't really matter in the grand scheme of things, because Hare won't ever run on Mac or Windows[1].

[0] https://tilde.team/~kiedtl/blog/hare/

[1] https://harelang.org/platforms/


Something like Lua?

It won't run because the QBE compiler doesn't have bindings to those OSes. Only x64.


No, it's a philosophical decision from the project. QBE works on Mac, for example. The link I shared states that it runs on non-x64 platforms, but only on free OS's


It is too early for most, if not all, projects to consider working in Hare over C/C++/Rust. It is a young, incomplete language, still pre-1.0. It may be interesting to those who are open to a more experimental language to work with during its early stages. As it matures, the utility will ideally grow with time.


The phrase "Hare is not interested in taking over the world", especially together with the cute bunny in the top left corner, made me smile. But I'm still a bit suspicious - sure, it looks all cute and fluffy, but there may be some ambition hiding behind those big button eyes...


Hare is not replacing any of those.

Hare doesn't support Windows, so it will never rise to the level of C, Rust, or Zig.


Good explanation by the author.

It just confirmed my initial thoughts about this language: "a better C", for the people who respect its ideology.


Hare looks great! As someone who primarily works in C, the only languages I consider realistic C successors are Zig, D (with the -betterC flag), and now Hare. I do think a C successor needs good interoperability with C. I wonder if the creators of Hare are planning to add a Hare-to-C transpiler? That would certainly help with portability.


As C approaches its 50th anniversary in 2023, people are still trying to improve it. First it was data abstraction: C++, Objective-C, Java. You could sort of hack that in original C with structures and virtuals. Now we're back to improving basic C itself.


Neither C++ nor Java were "attempts to improve C".

It's true that C++ is almost a superset of C, but most C programming idioms would be discouraged in C++.

As for Java - it's a whole different kettle of fish. Intended to run on a virtual machine with opaque high-level abstractions usable by your program; nothing like C.

(I don't know Objective-C well enough to comment.)


I am surprised D isn't mentioned here. It seems to have found the right balance between C and Python / C++ ( https://dlang.org/ ) and is also mature. I've been exploring some "lower level" language alternatives to C, and find that D, Ada and Pascal (FPC / Lazarus) hold more appeal to me than many of these new languages.


The Pascal/Object Pascal story is quite interesting. It goes to show how important corporate backing and luck are to the popularity of programming languages.


Indeed. Check out its successor Oberon by the same creator. It's modern, simple and powerful. The whole language definition is just 17 pages - https://people.inf.ethz.ch/wirth/Oberon/Oberon07.Report.pdf ... and yet, no PR behind it, so it is nowhere near the popularity of even the fading Pascal.


Probably not but I'm happy to try it out if it stays humble and does its job well.


I lost the humble part at variable re-binding and automagical str type.

And does the job well at implementation-defined int size and a char = u8 (there is a rune type, so why bother?)


This is basically exactly what the article says.


Thanks big chief, I read the article as well.


If developers treat dating like they do programming languages, no wonder there's a demographic crisis.

Clang-tidy[0] contains 67032 LOC by my calculations. Rust, Zig, and the rest comprise millions of LOC. Imagine if 1/10 of that had been contributed to static/dynamic analyzers for C/C++ instead.

[0] https://github.com/llvm/llvm-project/tree/5da7c040030c4af72d...


Hare is not millions of lines of code; in fact, it is approximately the same size as clang tidy. 17,146 lines in the compiler, 59,271 lines in the standard library. These line counts are just wc -l, which is less favorable than e.g. sloccount. Another 11,036 lines for the qbe backend, while LLVM is tens of millions.


I'm summing up the whole repos, everything that was ever written for those languages. Everything that could have gone elsewhere.

This path is not constructive. Maybe instead of trying to be original, we can learn from mathematics where theories remain valid for thousands of years.


I haven't seen anybody mention tooling. I've played around with Rust for purely academic reasons and love how cargo just makes everything work. Feature flags, a built-in test harness, cross-platform compilation, the only fight I've had with Rust has been the language itself. I haven't used it but I like how feature-rich the Zig compiler is too. I really hope strong tooling remains a focus for these newer languages.


Somehow the language & mascot give off a feel of Plan 9.


This is a language I might enjoy using. Clean, simple, pleasant to look at. Does it have a package manager? Couldn't find anything.


Hare does not have a package manager, by design. We feel that package management is best left in the hands of distributions. We don't want the npm/pypi/crates/etc disease to infect Hare - your dependencies should be chosen carefully and conservatively.


> We don't want the npm/pypi/crates/etc disease to infect Hare

That's a pretty harsh and ignorant statement.

Those repositories made developer experience much more enjoyable than the mess that is the C/C++ ecosystem.


I'm not sure how it's ignorant. I sort of prefer the C/C++ way of using/packaging libs. I wouldn't want it with higher level languages like C# or python, but a central repository isn't great in my opinion. Rust has people squatting on package names for no reason. Python packaging is so miserable there are a few different competing 3rd party package managers. NPM has had numerous security issues that have caused way more of a mess than anything C++ related. The C# package manager seems okay, but as far as I know will not package libraries as source code (I'm not sure about that one).

All the package managers I've used have pros and cons, including using none at all.


Why tie your application to a distribution? You're at the mercy of package maintainers, or you'll end up using Docker to do package management in a safe way. There's no way to ensure that installing your required dependencies yesterday and tomorrow on a fresh install of Debian won't differ in a critical way. You could do that with NixOS, but I doubt that's the OS/package manager of choice for the target audience of Hare. Relying on OS package managers is partly why Docker became popular, far more than most of the designed features of Docker and the respective kernel features.

The fundamental issue with NPM's security is the same issue you encounter with any package manager: you have to trust the maintainers. It's harder to trust a larger group of people than a smaller one. I'd argue the better solution with NPM is to choose libraries more deliberately and monitor your dependency graph, rather than to stop using NPM altogether.


Shitty package managers are shitty, yet Maven just works. It seems Golang has figured out a workable approach after 10 years as well. It's a solved problem. If other package managers refuse to learn, that's on them.


This is a harsh statement, but not an ignorant one, borne of years of experience with these systems.


It is an ignorant one, because without those the industry would be far from where we are now: the cost of project development would be higher, and security fixes would have a very hard time propagating, leaving holes pretty much everywhere.

> borne of years of experience with these systems.

Which does not dismiss everybody else's experience.


And neither does your experience dismiss mine. I have also seen years of vulnerabilities going unnoticed in pinned dependencies four orders transitively removed from anything the developer has ever heard of, of malware being published without review in PyPI and npm, of bitcoin miners and private key sniffers, of bloated and unreliable code from reckless companies who would prefer to save on FTE salaries by leveraging any code they find lying on the street, all while I've seen the package management system I prefer - the one used by Hare - suffer none of these issues.


I assure you, as a Python developer, that it is not an ignorant statement.

Pypi is a mess. Python packaging is a mess. I would rather download and include Python source code by hand than learn all of those 3rd-party packaging "solutions".

The only saving grace Python has is its vast standard library. So you don't have to reinvent wheels all the time.

EDIT:

Oh, and the C bindings, too.


The Maven model works pretty well. Namespaces avoid the squatting issues seen in Cargo, and some ground rules avoid NPM disasters like packages disappearing.

I understand not wanting to deal with the hassle of running a repo, but what about tooling around decentralized git repos? I can’t imagine distributions picking up Hare packages in larger numbers.


Yeah, Maven gets little love around here but it’s been around forever and Just Works.


> We feel that package management is best left in the hands of distributions.

So now instead of packaging a library once, you need to do it for every possible distribution?


No, you don't do anything. Each distro packages it for you. It's not the vendor's responsibility to package their software themselves. I wrote about this in detail here:

https://drewdevault.com/2021/09/27/Let-distros-do-their-job....

This plays into Hare's philosophy on packaging.


So if I want to use a library, I have to wait for my distro to determine that it's important enough for them to package it?


Or ask your distro to package it, or contribute it to your distro yourself, or put it in ~/ somewhere and add it to your HAREPATH, or...


Why not just download that library's source and build it yourself, then?

Hare makes this process painless.


I don't understand why new languages ever use the :: notation for scoping. It's hard to type (holding shift for two keystrokes) and visually noisy.

Obviously, C++ set the precedent and people are familiar with it. But if you're starting a completely new compiler, why make the same old mistakes with syntax?


It is to visually distinguish between module scope and struct scope.

A lot of languages made a mistake of using the dot for both scopes. You get used to it.
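Rust draws the same visual distinction, for example (a minimal sketch):

  let s = String::from("hi"); // `::` walks a module or type namespace
  let n = s.len();            // `.` reaches into a value (fields and methods)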


> It's hard to type (holding shift for two keystrokes)

Not if you've swapped : and ;, like I have!


Good thing you have to type heaps of semi-colons and double-colons to use Hare!


Just read the original announcement. It hardly contains any useful information regarding the language itself. How does it deal with memory management? What kind of type system does it use? What makes it better than C? etc.


A lot of this is covered in the introduction: https://harelang.org/tutorials/introduction/


Do these modern C-replacement languages like Hare and Zig have exactly the same security problems related to memory allocation, or do they have additional safeguards against them that C does not?


Both Hare and Zig fix "C's Biggest Mistake": https://www.digitalmars.com/articles/C-biggest-mistake.html.
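That mistake being C arrays decaying to bare pointers that don't know their own length; the fix in all of these languages is a slice type carrying pointer and length together. A sketch of the idea (in Rust syntax, since the thread has used Rust for examples; Hare and Zig slices work along the same lines):

  fn sum(xs: &[i32]) -> i32 {
      // a slice is a (pointer, length) pair, so the bounds are always known
      xs.iter().sum()
  }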


> Both Hare and Zig fix "C's Biggest Mistake": https://www.digitalmars.com/articles/C-biggest-mistake.html.

I think any new language (including Rust, Go, Nim, etc) that wants to displace C should not be trying to fix C's biggest mistake, it should be trying to replicate C's biggest success: simplicity.

Hence the growing popularity of Go as a C replacement. I look at Zig and Nim, and they look like viable replacements too.

Time will tell.


>I think any new language (including Rust, Go, Nim, etc) that wants to displace C should not be trying to fix C's biggest mistake

Every new language should fix every old mistake that can be fixed "for free", without growing the language too much or complicating things. C certainly has plenty of those.


> Hence the growing popularity of Go as a C replacement.

Fun fact: I've seen more Python projects being rewritten in Go than C projects being rewritten in Go.

Is Go a Python replacement?


Could be both a C and a Python replacement?

Go seems to present simplicity as a selling point, so it looks to be a natural next step for both C and Python programmers.


C sometimes appears to be simple, but there's a lot of complexity in the language. It's just well-hidden until it bites you.


Nim is many good things but I don't think simple is one of them.


Does anyone know how the str type works? From the language reference it's a pointer/size/capacity struct, but none of the examples do any freeing of strings.

What kind of magic is this?


Strings are essentially a usage-constrained slice alias. You can pass a string into "free" and it will free the underlying storage. Check out the spec for the full answer:

https://harelang.org/specification.pdf

They are a very lightweight language feature which is mostly supplemented by the standard library, e.g. the strings module.


Most strings in examples are statically allocated and don't need freeing.

There is an example with freeing of strings in the introduction. Search for strings::freeall.


Also, there are a lot of implementations of a "pointer/size/capacity struct" for strings in plain C.


You can replace C for new code (arguably) but how the hell does one replace legacy C? Can Unix realistically be rewritten in something else? Or Postgres?



The list is very long. And I'm also not sure that Rust is a better C for all use cases.


Postgres certainly can. It's less than two million lines of code. Firefox alone has more Rust code than all of Postgres.


It's possible, just doesn't make much financial sense so I'm not expecting it to happen.


> will

Inevitably, in the steady state? No

If some cataclysmic event occurs, perhaps with key industry influencers’ involvement? Maybe


Betteridge's Law says "no".

https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines

(And so does the article, FWIW.)


> I will do another post which addresses the other question: memory safety.

How is it that developers feel qualified to talk about subjects they understand so little of?


No. My money would be on Zig or Rust replacing C. Hare will remain niche, as all Lisp-y languages are destined to do.


No.


This again… no. As long as it stays tied to Linux only, it has no chance.


As far as I can tell it's a useless language that will not replace anything.

It's memory-unsafe, so you might as well use C++ instead, or if you want a "newer" language, then use Rust instead.


It's appealing to a different set of people. C++ and Rust are much larger languages than C. Hare is actually smaller than C. And while Hare doesn't provide the same kind of memory-safety guarantees that Rust does, it does make it easier to write safe programs than C.


Pointer hell and static typing... No chance to replace Clang. Only a C-like lang without pointers and with type inference would lure programmers away from C. Something less hostile to the developer.


Just a nitpick: Clang is a specific C/C++ compiler implementation using LLVM, not the C language. Names are hard :)

https://en.wikipedia.org/wiki/Clang


C can't be replaced by a language without pointers. If a language without pointers was attractive, a language other than C would already be used. C code is generally used with the intent to make use of pointers and such - we've had less pointy (and more abstract) alternatives forever for use cases that are suited to it.


I don’t know… after working around the performance limitations of JavaScript and Ruby for a decade, I am happy to have an option that allows me to control (at least to some degree) memory layout of my program.

All(ish) code used to be written in C and shell. To what degree has, e.g., JavaScript replaced C already? I think quite a bit.


How would a language that wants to compete with C even work without a feature to directly address memory locations (i.e. "pointers")?

I don't understand the "static typing vs type inference" argument, so I'll not comment on that :) (there are plenty of statically typed languages with type inference)


> How would a language that wants to compete with C even work without a feature to directly address memory locations (i.e. "pointers")?

"Pointer" types could just be library-based and architecture-specific. It doesn't really need to be part of the base language. This would make it easier to support things like GPU-bound code where general memory addressing isn't really a thing, or other features like multiple address spaces, segmented memory or the CHERI memory tagging extension.


Hey Rust zealots, you guys ruin every programming language conversation. I'll happily never use Rust because the community is so awful.


I've learned to like Rust.

  1. You don't need to engage with the community to use a programming language
  2. The guys "ruining" conversations are a (loud) minority, most of the people I've talked with are light years away from proselytism


> I am even more frustrated with the moral crusaders from languages like Rust, one of whom went as far as to suggest that I should personally be criminally prosecuted if some downstream Hare software has a use-after-free bug.

lol "moral crusaders" who want software engineers to take a modicum of responsibility for their code



