Leaving Rust gamedev after 3 years (loglog.games)
1484 points by darthdeus 7 months ago | 972 comments



That's a good article. He's right about many things.

I've been writing a metaverse client in Rust for several years now. Works with Second Life and Open Simulator servers. Here's some video.[1] It's about 45,000 lines of safe Rust.

Notes:

* There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

* He's right about the pain of refactoring and the difficulties of interconnecting different parts of the program. It's quite common for some change to require extensive plumbing work. If the client that talks to the servers needs to talk to the 2D GUI, it has to queue an event.

* The rendering situation is almost adequate, but the stack isn't finished and reliable yet. The 2D GUI systems are weak and require too much code per dialog box.

* I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it.

* I have less trouble with compile times than he does, because the metaverse client has no built-in "gameplay". A metaverse client is more like a 3D web browser than a game. All the objects and their behaviors come from the server. I can edit my part of the world from inside the live world. If the color or behavior or model of something needs to be changed, that's not something that requires a client recompile.

The people using C# and Unity on the same problem are making much faster progress.

[1] https://video.hardlimit.com/w/7usCE3v2RrWK6nuoSr4NHJ


> I'd expected some AAA title to be written in Rust by now.

I'm disinclined to believe that any AAA game will be written in Rust (one is free to insert "because Rust's gamedev ecosystem is immature" or "because AAA game development is increasingly conservative and risk-averse" at their discretion), yet I'm curious what led you to believe this. C++ became available in 1985, and didn't become popular for gamedev until the turn of the millennium, in the wake of Quake 3 (buoyed by the new features of C++98).


Lamothe's Black Art book came out in '95. Abrash's Black Book came out in '97.

Borland C++ was pretty common and popular in '93, and we even had some not-so-great C++ compilers on the Amiga in '92/'93 that had some use in gamedev.

SimCity 2000 was written in C++, way back in '93 (although they started with Cfront).

An absolute fuckton of shareware games I was playing in the 90s were built with Turbo C++.


Kind of true; however, they had endless amounts of inline assembly, as shown in the Black Book as well.

I know of at least one MS-DOS game, published in the Portuguese Spooler magazine, that used Turbo C++ basically as a macro assembler.

One of the PlayStation's selling points for developers was being the first home console with a C SDK, while SEGA and Nintendo were still doing assembly; C++ support only came later, with the PlayStation 2.

While I agree that C++, BASIC, Turbo Pascal, and AMOS were being used a lot, especially in the demoscene, they were our Unity from the point of view of successful game studios.


I also remember from the video game magazines I was reading back in the early 90s that another C++ compiler that was a favourite among devs was Watcom C++, released in '88.


That doesn't mean that it was used primarily with C++ though. IIRC Watcom C/C++ mainly became popular because of Doom, and that was written in C (as were all id games until Doom 3 in 2004 - again IIRC though).

The actual killer feature of Watcom C/C++ was not the C or C++ compiler, but its integration with DOS4GW.


Btw, I don't remember Turbo C or Borland C++ being able to compile to 32-bit x86 on DOS.


Borland C++, Microsoft C/C++, and GCC (DJGPP[1]) could all target 32-bit extended DOS, but Watcom was the first[2] to bundle a royalty-free DOS extender[3].

[1] https://news.ycombinator.com/item?id=39038095

[2] https://www.os2museum.com/wp/watcom-win386/

[3] https://en.wikipedia.org/wiki/DOS_extender


OMG, the name "Watcom" just opened a flood of nineties memories of the demo scene for me. Thanks for mentioning.


I really hope that C++ evolves with gamedev and they become more and more symbiotic.

Maybe adoption of Rust by the gamedev community isn't the best thing to wish for the language. Maybe it is better to let another crowd steer the evolution of Rust, letting systems programming and gamedev drift apart.


I think I don't know a single gamedev who's fond of "modern C++" or even the C++ stdlib in general (and stdlib changes are what most of "modern C++" is about). The last good version was basically C++11. In general the C++ committee seems to be largely disconnected from reality (especially now that Google seems to be doing its own C++ successor, but even before that, Google's requirements are entirely different from gamedev requirements).


C++17/20 are light-years beyond C++11 in terms of ergonomics and usability. Metaprogramming in C++11 is unrecognizable compared to C++20, things have improved so much. I hated C++ before C++11, but now C++11 feels quite legacy compared to even C++17. The ability to write almost anything, like a logging library, without C macros is a huge improvement for maintainability and robustness.

Most of the features in modern C++ are designed to enable writing really flexible and highly optimized libraries. C++ rarely writes those libraries for you.


Heh, mentioning metaprogramming and logging is not exactly how you convince anybody of superior ergonomics and usability.


Metaprogramming is required to get type-safe, easy-to-use code. The problem with most template code is that the implementation gets horrendously complicated, but for the user it can create A LOT of comfort. At work, for example, I wrote a function that calls an RPC method, and it has a few neat features like:

An RPC call with a result looks like this:

call(<methodinfo>, <param>, [](Result r) {});

vs one which returns void:

call(<methodinfo>, <param>, []() {});

It's neat that the callback reflects that, but this wouldn't be possible without some compile-time magic.


It convinced me


Hi, I'm a game developer and I'm fond of "modern C++" and the stdlib. Sure, I would like some priorities to be different (i.e. we should have had static reflection a while ago), but it's still moving in the right direction.

Particularly the idea that "the last good version was basically C++11" is exactly what I would expect to hear from someone who reads a few edgy articles on the internet but has no actual in-depth experience working with the language. C++14 and 17 are, for a large part, plain ergonomic upgrades over C++11, with lots of minor but impactful additions and improvements all over. I can't even think of anything in those two versions that would be sufficiently controversial to make anyone prefer C++11 over them, or call it the "last good version".

C++20 is obviously a larger step, and does include a few more controversial changes, but those are completely optional (and I don't expect many of them to be widely adopted in gamedev for a decade at least, even though for some I wish it went more quickly).


> stdlib changes is what most of "modern C++" is about). the last good version was basically C++11.

I can only comment on this like: tell me you have no idea about the current state of C++ without telling me you have no idea about the current state of C++.


Then let's hear some counter examples please. As far as I'm aware the last important language change since C++11 was designated init in C++20, and that's been butchered so much compared to C99 that it is essentially useless for real world code.


There are a whole bunch of features and fixes in each new version of the standard that significantly improved the usability, expressibility and convenience of the language. Describing many of them could easily take an hour. I'm sorry, I can only highlight a few of my particular favourites that I regularly use and let you study the rest of the changes.

https://en.cppreference.com/w/cpp/14

- fixed constexpr, which in C++11 was basically unusable

- great improvements for metaprogramming, such as variable templates and generic lambdas, which made gems like `boost::hana` possible

- function return type deduction

https://en.cppreference.com/w/cpp/17

- inline variables, which finally fix the biggest pain of developing header-only libraries

- useful noexcept fix

- if constexpr + constexpr lambdas

- structured bindings

- guaranteed copy elision

- fold expressions

I'm in automotive, where due to safety requirements we've only just started to work with C++17, so I don't have much practical experience with the standards past it, though I'm aware there are great updates too. Overall, C++11 is as horrible compared to C++17 as C++98 (and roughly 03) was compared to the then-groundbreaking C++11. Personally, when I skim through job vacancies and see they are stuck at C++11, I pass on it. Even C++14 makes me very sceptical, even though I used it really a lot. All due to the nice new improvements of C++17.

https://en.cppreference.com/w/cpp/20

https://en.cppreference.com/w/cpp/23


Ok, I'll give you fold expressions and structured bindings as actually important language updates. The rest are mostly just tweaks that plug feature gaps which shouldn't have existed in the first place when the basic feature was introduced in C++11 or earlier.

IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes (like for instance std::tuple, std::variant or std::ranges). Because as stdlib features those things make C++ code more and more unreadable compared to "proper" syntax sugar (Rust suffers from the exact same problem btw).


They missed concepts and modules, which are also C++20 features; modules are just not properly supported (yet). Concepts are a massive QoL feature, and modules might help with compile times.

> IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes

From my experience that's not how the C++ committee works. They generally decompose requested features into the smallest building blocks, include just those in the language, and let the rest be handled by the stdlib.

The thing that makes C++ unreadable in my opinion is template code and the fact that the namespace system sucks and just leads to unreadably long names (std::chrono::duration_cast<std::chrono::milliseconds>(.....)).


[flagged]


You should probably tone down your speech, and lay off the patronizing attitude, no matter how well justified your arguments are.


Oh I followed the C++ standardization process quite closely for about 15 years up until around C++14 and still follow it from the sidelines (having mostly switched back to C since then), and I'm fully aware of the fact that C++ has designed itself into a complexity corner where it is very hard to add new language features (after all, C++ has added more new problems that then had to be fixed in later standards than it inherited from C in the first place).

I still think the C++ committee should mainly be concerned about the language instead of shoehorning stuff into the stdlib, even if fixing the language is the harder problem.

And I can't be alone in this frustration, otherwise Carbon, Circle and Herb Sutter's cppfront wouldn't have happened.


It's even worse than that, because even if a new proposal had no concerns from a language & library point of view, it can still be crippled by vendor concerns because of short-sighted, entirely unforced errors the vendors made, often decades prior.

It's part of why I don't believe the C++-compatible C++-successor languages will deliver on their promises nearly as well as they think. They only solve half of the problem, which is that their translation units don't have to accommodate legacy C++ syntax.

They still have to reproduce existing C++ semantics and ABIs, their types still have to satisfy C++ SFINAE and Concepts, etc. so they're bringing all of the semantic baggage no matter what new syntax they dress it in.

And anywhere they end up introducing new abstractions to try to enforce safety, those will be incompatible with C++ enough to require hand-crafted wrappers, just like we already do with Rust, only Rust is much further along its own maturity and adoption curve than those languages are.


A practical example on C++14 & its constexpr + variable template fixes, and why this was important: a while ago I wrote a wrapper over a compile-time fixed-size array that imposed a variable compile-time fixed tensor layout on it. Basically, it turned a linear array into any matrix, or 3D or 4D or whatever-D is needed tensor, and allowed working with them efficiently at compile time already. There was obviously constexpr construction + constexpr indexing + some constexpr tensor operations. In particular there was a constexpr trace operation for square matrices (a sum of the elements on the main diagonal, if I'm not mistaken). I decided to showcase the power of constexpr to some juniors on the team. For some reason, I thought that since the indexing operation is constexpr, computing the matrix trace would require the compiler to just take elements of the matrix at addresses precomputed at compile time, which would show up in the disassembly as memory loads from fixed offsets (without computing these offsets at runtime, since the matrix layout is fixed at compile time and index computation is a constexpr operation). So I quickly wrote an example, compiled it with asm output, and looked at it... It was a facepalm moment - I had forgotten that trace() was also constexpr, so instead of doing any runtime computation at all, the code just had the already-computed trace value as a constant in a register. How is that not cool? Awesome!

Such things are extremely valuable, as they allow you to write much more expressive, easier to understand and maintain code for entities known at compile time.


I sometimes wonder if the problem with Rust is that we have not yet had a major set of projects which drive solutions to common dev problems.

Go had Google driving adoption, which in turn drove open source efforts. The language had to remain grounded so as not to interfere with the actual work of building back-end services.

Rust had Mozilla/Servo, which was ultimately unsuccessful. While there are more than a few companies using Rust for small projects with tough performance guarantees, I haven't seen the “we manage 1-10 MM sloc of complex code using Rust” type projects.


Microsoft is rewriting quite a bit of their C# to Rust for performance reasons, especially within their business line products. Rust has also become rather massive in the underlying tech of telecommunications infrastructure in several countries.

So I’m not sure that your take is really so on point. Especially as far as comparing it with Go goes (heehee), at least not in terms of 3rd party libraries, where most of the Go ecosystem seems to be either maintained by one or two people or abandoned as those two people got new jobs. I think Go is cool by the way, but there is a massive difference in the maturity of the sort of libraries we looked into using during our PoCs.

Anyway. A lot of Rust adoption is a little quiet, and well, rather boring. So maybe that’s why you don’t hear too much about it.


Quiet adoption often means that a couple people in a company chose to invest in at least a small effort. It's unknown if those people would do it again, and they are unlikely to invest 2-3 devs to improve the rust library and language ecosystem.

Major adoption gets you tools like guice, 50+ person tools teams, and more.


Microsoft rewrote one, maybe two microservices, driven by a lead interested in using Rust, and is rewriting parts of the NT kernel (way more important).


It’s much more than that, even now they are continuously opening job postings with a focus on re-writing the 365 platform from C# to Rust.


It’s a bad habit to read too much into a single job posting.

(oh, I remember now, it’s the account traumatized by odata)


I’m not sure why you’re trying to make it seem like Microsoft isn’t rewriting the core of their 365 business products from C# to Rust, but you do you I guess.

As far as I’m aware I was never traumatised by OData. It’s true that I may have ranted about the sorry state of the public packages available outside of C# or Java. Not unwarranted criticism I think, but I wrote our own internal adaptation, which now powers basically all our API clients for TypeScript as a single shared no-dependency library.

But you seem to think you know me? Have we met?


Alright, if not for that one job posting, I’m curious where you are getting this information from?


I really think the problem with Rust is the borrow checker. Seriously. It is good, but it is overkill. You have to do and plan everything around it, and it discourages a lot of patterns or makes them really difficult to refactor.

I would encourage people to understand Hylo's object model and mutable value semantics. I think something like that is far better, more ergonomic and very well-performing (in theory at least).


You can use unsafe code and pointers if you really want, but code will be unsafe, like C or C++.


Look at Hylo. Tell me what you think. You do not need all that juggling. Just use value semantics with lazy copying. The rest is handled for you. Without GC. Without dangling pointers.


TBF, unsafe Rust still enforces much more correctness than C or C++ (Rust's "unsafety" is more similar to Zig than C or C++).


TBF this is not really true. Unsafe Rust is a lot harder than comparable C/C++, because it must manually uphold all safety invariants of Safe Rust whenever it interacts with idiomatic Rust code. (These safety invariants are also why Safe Rust can often be compiled into better-optimized code than the idiomatic C/C++ equivalent.)
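
A small made-up illustration of the kind of invariant that has to be upheld by hand (the function and names here are mine, not from any real code base):

    // from_raw_parts demands a valid, aligned, initialized, non-aliased
    // pointer for `len` elements -- none of which the compiler checks here.
    fn head(ptr: *const u32, len: usize) -> Option<u32> {
        if ptr.is_null() || len == 0 {
            return None;
        }
        // SAFETY: the caller must guarantee `ptr` points to at least `len`
        // readable, initialized u32s that outlive this call. Get that wrong
        // and perfectly "safe" code downstream becomes unsound.
        let slice = unsafe { std::slice::from_raw_parts(ptr, len) };
        slice.first().copied()
    }

    fn main() {
        let data = [10u32, 20, 30];
        println!("{:?}", head(data.as_ptr(), data.len()));
    }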


By the more enforced correctness of Rust (including unsafe Rust) I mean small details like Rust not allowing implicit conversions between integer types. That alone eliminates a pretty big source of hidden bugs in both C and C++ (especially when assigning a wider to a narrower type, or mixing signed and unsigned integers).

All in all I'm not a big fan of Rust, but details like this make a lot of sense (even if they may appear a bit draconic at first) - although IMHO Zig has a slightly better solution by allowing implicit conversions that do not lose information. E.g. assigning a narrower to a wider unsigned integer type works, but not the other way around.
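
A quick sketch of the Rust side of that (the Zig half isn't shown here):

    fn main() {
        let wide: u32 = 300;

        // let narrow: u8 = wide;           // compile error: no implicit narrowing
        let truncated: u8 = wide as u8;     // explicit (and silently wrapping) cast: 300 -> 44
        let checked = u8::try_from(wide);   // explicit, checked: Err(..) here

        println!("{truncated} {checked:?}");
    }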


I wonder if Rust is killing flies with cannons (as we say in Spanish). There are perfectly safe alternatives, or very safe ones.

Even in a project coded in modern C++ with async code included and all warnings activated (it is a card game), I found two segfaults in almost 5 years... It can happen, but it is very rare, at least with my coding patterns.

The code is in the tens of thousands of lines, I would say; not 100% sure, I will measure it.

Is it that bad to put one shared pointer here and there, stick to unique pointers, and try to not escape references? This is what I do, and I use spans and string views carefully (you must with those!). I stick to the rule of zero. With all that, it is not that difficult to have mostly safe code in my experience. I just use safe subsets except in a handful of places.

I am not saying C++ is better than Rust. Rust is still safer. What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve where you spend a good deal of your time fighting the borrow checker. So my question is:

- When does it stop being worth it to fight the borrow checker and just replace it with some alternative, even smart pointers here and there? Because it seems to have a big viral cost and refactoring cost, besides preventing valid patterns.


> What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve where you spend a good deal of your time fighting the borrow checker. So my question is:

That "evolution of the C++ model" (the C++ Core Guidelines) has an even steeper learning curve than Rust itself, and even more invasive annotations if you want to apply it across the board. There is no silver bullet, and Rust definitely has the more principled approach to these issues.


I'm not answering your question here, just giving my opinion on C++ vs Rust. I think that the big high-level difference (before diving into details like ownership and the borrow checker) is that C++'s safety is opt-in, while Rust's safety is opt-out. So in C++ you have to be careful each time you allocate or access memory to do it in a safe way. If you're working in a team, you all have to agree on the safe patterns to use and check that your team members are sticking with them during code reviews. Rust takes this burden from you, at the expense of having to learn how to cooperate with the borrow checker.

So, going back to your question, I think that the answer is that it depends on many factors, including also some non-strictly-technical ones like the team's size.


An evolution of the C++ model could be something like Hylo. Hylo is safe. Hylo does not need a borrow checker. Hylo does not need a garbage collector.

That is what I mean by evolution. I do not mean necessarily C++ with Core Guidelines.


I think you replied to the wrong reply.


Unsafe Rust is not harder or safer than C/C++. If you can uphold all safety invariants for C/C++ code (OMG!), then it will be easier to do the same thing for unsafe Rust, because Rust has better ergonomics.


Better ergonomics for what? For refactoring with a zillion lifetime annotations? Annotations go viral down the call stack. That is a headache. Not useless - I know it is useful. Just a headache, a price to pay. For linked structures? For capturing an exception?

No, it is not more ergonomic. It is safer. That's it.

And some parts of that enforcement via this model are terribly unergonomic.
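
To illustrate the "viral" part, a tiny made-up example where one borrow forces lifetime parameters through every layer that touches it:

    // One borrowed field...
    struct View<'a> {
        data: &'a [u8],
    }

    // ...and the lifetime parameter has to be repeated on everything that
    // stores or passes it along.
    struct Parser<'a> {
        view: View<'a>,
    }

    fn make_parser<'a>(data: &'a [u8]) -> Parser<'a> {
        Parser { view: View { data } }
    }

    fn main() {
        let bytes = vec![1, 2, 3];
        let parser = make_parser(&bytes);
        println!("{}", parser.view.data.len());
    }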


? I believe the Rust efforts in Firefox were largely successful. I think Servo was for experimental purposes and large parts were then added to Firefox with Quantum: https://en.wikipedia.org/wiki/Gecko_(software)#Quantum


My recollection was that those were separate changes - Servo didn’t get to the stage where it could be merged, but it was absolutely the plan to build a rendering engine that outperformed every other browser, before budget cuts hit.


We did port Servo’s WebRender to Firefox and shipped it everywhere. The only caveat is that it took multiple years of upgrades, fixes, and rewriting it.


It would be interesting to have a postmortem of what went well, what went wrong, etc. for this initial effort.

I believe work continues now somewhere else, but it would be absolutely nice to know more about the experience from others.


> Go had google driving adoption

This is commonly said but I think it's only correct in the sense that Google is famous and Google engineers started it.

Google never drove adoption; it happened organically.


> Rust had mozilla/servo which was ultimately unsuccessful.

There's lots of Rust code in Firefox!

> I haven't seen the “we manage 1-10 MM sloc of complex code using rust” type projects.

Meta has a lot of Rust internally.

The problems with Rust for high-level indie game dev logic, where you're doing fast prototyping, are very specific to that domain, and say very little about its applicability in other areas.


Servo is an ongoing project, it has not "failed" or been unsuccessful in any sense.


I think the original poster is perhaps speaking to previous articles (i.e. https://news.ycombinator.com/item?id=39269949) which, from the outside looking in, made me feel that perhaps this in fact was the case (at least for a period).


Exactly, it's all about the ecosystem and very little about the language features


Kind of both, in my opinion. But Rust is bringing nothing to the table that games need.

At best Rust fixes crash bugs, and not the usual logic and rendering bugs that are far more involved and plague users more often.


The ability of engines like Bevy to automatically schedule dependencies and multithread systems, which relies on Rust's strictness around mutability, is a big advantage. Speaking as someone who's spent a long time looking at Bevy profiles, the increased parallelism really helps.

Of course, you can do job queuing systems in C++ too. But Rust naturally pushes you toward the more parallel path with all your logic. In C++ the temptation is to start sequential to avoid data races; in systems like Bevy, you start parallel to begin with.
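
For a concrete sense of what that looks like, here is a minimal sketch using roughly the Bevy 0.13-era API (the component names are made up). The two systems touch disjoint data, so the scheduler is free to run them on different threads within the same frame:

    use bevy::prelude::*;

    #[derive(Component)]
    struct Velocity(Vec3);

    #[derive(Component)]
    struct Health(f32);

    // Reads Velocity, writes Transform.
    fn integrate(time: Res<Time>, mut q: Query<(&mut Transform, &Velocity)>) {
        for (mut transform, vel) in &mut q {
            transform.translation += vel.0 * time.delta_seconds();
        }
    }

    // Writes Health only -- no overlap with `integrate`, so Bevy can run
    // both systems in parallel without any manual synchronization.
    fn regenerate(time: Res<Time>, mut q: Query<&mut Health>) {
        for mut hp in &mut q {
            hp.0 = (hp.0 + 1.0 * time.delta_seconds()).min(100.0);
        }
    }

    fn main() {
        App::new()
            .add_plugins(DefaultPlugins)
            .add_systems(Update, (integrate, regenerate))
            .run();
    }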


Aside from a physics simulation, I'm curious as to what you think would be a positive cost benefit from that level of multithreading for the majority of game engines. Graphical pipelines take advantage of the concept but offload as much work as possible to the GPU.


We were doing threading beyond that in 2010; you could easily have rendering, physics, animation, audio and other subsystems chugging along on different threads. As I was leaving the industry, most engines were trending towards very parallel concurrent job execution systems.

The PS3 was also an interesting architecture (i.e. the SPUs) from that perspective, but it was so far removed from everything else at the time that it never really took off. Getting existing things ported to it was a beast.

Bevy really nails the concurrency IMO (having worked on AA/AAA engines in the past); it's missing a ton in other dimensions, but the actual ECS + scheduling APIs are a joy. The last "proper" engine I worked on was a rats' nest of concurrency in comparison.

That said, as a few other people pointed out, the key is iteration, hot reload and other things. Given the choice I'd probably do (and have done) a Rust-based engine core where you need performance/stability, and some dynamic language on top (Lua, quickjs, etc.) for actual game content.


> That said, as a few other people pointed out, the key is iteration, hot reload and other things. Given the choice I'd probably do (and have done) a Rust-based engine core where you need performance/stability, and some dynamic language on top (Lua, quickjs, etc.) for actual game content.

I fully agree that this will likely be the solution a lot of people want to go with in Bevy: scripting for quick iteration, Rust for the stuff that has to be fast. (Also thank you for the kind words!)


Yeah, it's a fairly clean and natural divide. You see it in most of the major engines, and it was present in all the proprietary engines I worked on (we mostly used Lua/LuaJIT, since this predated some great recent options like quickjs).

We even had things like designers writing AI scripts in a literate programming style with Lua coroutines. We fit code + runtime into 400 KB of space using Lua on the PSP (man, that platform was a nightmare, but the scripting worked out really well).

Rust excels when you know what you want to build, and core engine tech fits that category pretty cleanly. Once you get up in game logic/behavior that iteration loop is so dynamic that you are prototyping more than developing.


In big-world high-detail games, the rendering operation wants so much time that the main thread has time for little else. There's physics, there's networking, there's game movement, there's NPC AI - those all need some time. If you can get that time from another CPU, rendering tends to go faster.

I tend to overdo parallelism. Load this file into the Tracy profiler, version 0.10.0, and you can see what all the threads in my program are doing.[1] Currently I'm dealing with locking stalls at the WGPU level. If you have application/Rend3/WGPU/Vulkan/GPU parallelism, every layer has to get it right.

Why? Because the C++ clients hit a framerate wall, with the main thread at 100% and no way to get faster.

[1] https://animats.com/sl/misc/traces/clockhavenspeed02.tracy


Animations are an example. I landed code in Bevy 0.13 to evaluate all AnimationTargets (in Unity speak, animators) for all objects in parallel. (This can't be done on GPU because animations can affect the transforms of entities, which can cause collisions, etc. triggering arbitrary game logic.) For my test workload with 10,000 skinned meshes, it bumped up the FPS by quite a bit.


"Fearless concurrency"


C++ classes with inheritance are a pretty good match for objects in a 3D (or 2D) world, which is why C++ became popular with 3D game programmers.


This is not at all my experience.

What I have experienced is that C++ classes with inheritance are good at modeling objects in a game at first, when you are just starting and the hierarchy is super simple. Afterwards, it isn't a good match. You can try to hack around this in several ways, but the short version is that if your game isn't very simple, you are better off starting with an Entity Component System setup. It will be more cumbersome to use than the language-provided features at first, but the lines cross very quickly.


I like the Javascript way of objects just having fully mutable keys/values like dictionaries, with no inheritance or static typing.


Hmm no not really in my experience. Even the old "Entities and Components" system in Unity was better, because it allowed to compose GameObject behaviour by attaching Component objects, and this system was often replicated in C++ code bases until it "evolved" into ECS.


This is how I feel about golang and systems programming. The strong concurrency primitives and language simplicity make it easier to write and reason about concurrent code. I have to maintain some low level systems in python and the language is such a worse fit for solving those problems.


Yeah, OOP makes sense for games. The language will matter a bit for which one takes off, but anything will work given enough support. Like, Python doesn't inherently make a lot of sense for data processing or AI, but it's good enough.


OOP kind of goes out the window when people start using entity component systems. Of course, like the author, I'm not sure I'll need ECS since I'm not building a AAA game.


Had to look up ECS to be honest, and it's pretty much what I already do in general dev. I don't care to classify things, I care what I can do with something. Which is Rust's model.


Interfaces or traits are not ECS though. ECS is mostly concerned with how data is laid out in memory for efficient processing. The composability is (more or less) just a nice side effect.
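
For example, a minimal sketch of the SoA idea in Rust (types and names are illustrative, not from any particular ECS):

    // Components live in parallel Vecs ("structure of arrays"); an entity is
    // just an index into them.
    struct World {
        positions: Vec<[f32; 3]>,
        velocities: Vec<[f32; 3]>,
        healths: Vec<f32>,
    }

    impl World {
        // A "system" borrows only the columns it needs; disjoint field
        // borrows keep the borrow checker happy.
        fn integrate(&mut self, dt: f32) {
            for (p, v) in self.positions.iter_mut().zip(&self.velocities) {
                p[0] += v[0] * dt;
                p[1] += v[1] * dt;
                p[2] += v[2] * dt;
            }
        }
    }

    fn main() {
        let mut world = World {
            positions: vec![[0.0; 3]; 2],
            velocities: vec![[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]],
            healths: vec![100.0, 50.0],
        };
        world.integrate(0.016);
        println!("{:?} {:?}", world.positions, world.healths);
    }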


This is correct. I wonder how Rust models SoA with borrowing. Is it doable, or does it become very messy?

I usually have some kind of object that apparently looks like OOP but points all its features to the SoA. All that would be borrowing and pointing somewhere else in slices or similar in Rust I assume?


AFAIK tagged-index-handles are typically used for this (where the tag is a generation-counter to detect 'dangling handles'), which more or less side-steps the borrow checker restrictions (e.g. see https://floooh.github.io/2018/06/17/handles-vs-pointers.html).
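
A minimal sketch of that pattern (names are illustrative, following the linked article's idea rather than any particular crate):

    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    struct Handle {
        index: u32,
        generation: u32, // bumped on free, so stale copies are detectable
    }

    struct Slot<T> {
        generation: u32,
        value: Option<T>,
    }

    struct Pool<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { slots: Vec::new() }
        }

        fn insert(&mut self, value: T) -> Handle {
            // Reuse a freed slot if possible, otherwise grow the pool.
            if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
                self.slots[i].value = Some(value);
                Handle { index: i as u32, generation: self.slots[i].generation }
            } else {
                self.slots.push(Slot { generation: 0, value: Some(value) });
                Handle { index: (self.slots.len() - 1) as u32, generation: 0 }
            }
        }

        fn remove(&mut self, h: Handle) {
            if let Some(slot) = self.slots.get_mut(h.index as usize) {
                if slot.generation == h.generation {
                    slot.value = None;
                    slot.generation += 1; // invalidate any copies of `h` still around
                }
            }
        }

        // A stale handle just yields None instead of a dangling reference.
        fn get(&self, h: Handle) -> Option<&T> {
            self.slots
                .get(h.index as usize)
                .filter(|s| s.generation == h.generation)
                .and_then(|s| s.value.as_ref())
        }
    }

    fn main() {
        let mut meshes: Pool<&str> = Pool::new();
        let h = meshes.insert("cube");
        meshes.remove(h);
        assert_eq!(meshes.get(h), None); // stale handle detected, no dangling pointer
    }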


Sorry I got lost in that sentence. What is Rust's model?


Rust has traits on structs instead of using inheritance. Aka composition.
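
A tiny sketch of what that looks like in practice (illustrative types): capabilities are traits, and a struct implements whichever ones apply, with no base-class hierarchy.

    trait Render {
        fn render(&self);
    }

    trait Update {
        fn update(&mut self, dt: f32);
    }

    struct Door {
        open: f32,
    }

    impl Render for Door {
        fn render(&self) {
            println!("drawing door, open = {:.2}", self.open);
        }
    }

    impl Update for Door {
        fn update(&mut self, dt: f32) {
            self.open = (self.open + dt).min(1.0);
        }
    }

    fn main() {
        let mut door = Door { open: 0.0 };
        door.update(0.5);
        door.render();
    }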


Even PHP has traits by now. Languages tend to incorporate other languages' successful features. There is of course a risk of feature inflation. There are languages that make avoiding that inflation a goal, such as Zig, or that arrive there as a byproduct of being very focused on a specific use case, like AWK.


AFAIK composition, in the traditional sense, means that you put your objects/concepts together from different smaller objects or concepts. Composition would be to have a struct Car that uses another struct called Engine to handle its driving needs. A car “has an” engine. A trait that implements the “this thing has an engine” behavior isn’t composition; it’s actually much closer to [multiple] inheritance (a car “is a” motorized vehicle).


Traits do implement interface inheritance, but that doesn't have the same general drawbacks as implementation inheritance (such as the well-known "fragile base class" problem).


I don't know the terminology. I just know that Rust does whatever the alternative is to the Java way with inheritance. You don't get stuck with the classic classification problem.


But that... wasn't in your comment at all...

If I say "I don't care about safety, I care about expressiveness. Which is Rust's model"... "which" has to refer to one of the other things I just mentioned (safety or expressiveness) not some other concept.


You can also have structs be generic over some "tag" type, which when combined with trait definitions gets you quite close to implementation inheritance as seen in C++ and elsewhere. It's just less common because usually composition is all that's required.
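
A rough sketch of that "tag type" pattern (all names made up for illustration): shared fields live in one generic struct, and per-variant behaviour comes from trait impls on each tag.

    trait Describe {
        fn describe(&self) -> String;
    }

    struct Npc;
    struct Player;

    struct Entity<Kind> {
        name: String,
        hp: u32,
        kind: Kind,
    }

    impl Describe for Entity<Npc> {
        fn describe(&self) -> String {
            format!("NPC {} ({} hp)", self.name, self.hp)
        }
    }

    impl Describe for Entity<Player> {
        fn describe(&self) -> String {
            format!("player {} ({} hp)", self.name, self.hp)
        }
    }

    fn main() {
        let grunt = Entity { name: "Grunt".to_string(), hp: 30, kind: Npc };
        let hero = Entity { name: "Hero".to_string(), hp: 100, kind: Player };
        println!("{}", grunt.describe());
        println!("{}", hero.describe());
    }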


To be clear, the reason why Python is so popular for data wrangling (including ML/AI) is not due to the language itself. It is due to the popular extensions (libraries) exclusively written in C & C++! Without these libraries, no one would bother with Python for these tasks. They would use C++, Java, or .NET. Hell, even Perl is much faster than Python for data processing using only the language and not native extensions.


Python makes sense because of accessibility and general comfort for relatively small code bases with big data sets.

Those data scientists, at least in my experience, are more into math/business than into the most efficient programming.

Or at least that was the situation at first, and it stuck.


Disagree - the adoption of C++ was more about Moore's law than ecosystem, although having compilers that were beginning to not be completely rubbish also helped.


Also C++ could be adopted incrementally by C developers. You could use it as “C with classes”, or just use operator overloading to make vector math more tolerable, or whatever subset that you happened to like.

So there are really three forces at play in making C++ the standard:

1) The Microsoft ecosystem. They literally stopped supporting C by not adopting the C99 standard in their compiler. If you wanted any modern convenience, you had to compile in C++ mode. New APIs like Direct3D were theoretically accessible from C (via COM) but in practice designed for C++.

2) Better compilers and more CPU cycles to spare. You could actually count on the compiler to do the right thing often enough.

3) Seamless gradual adoption for C developers.

Rust has a good compiler, but it lacks that big ticket ecosystem push and is not entirely trivial for C++ developers to adopt.


I'd say Rust does have that big ticket ecosystem push. Microsoft has been embracing Rust lately, with things like official Windows bindings [1].

The bigger problem is just inertia: large game engines are enormous.

[1]: https://github.com/microsoft/windows-rs


Repo contributor here, just to curb some expectations a bit: it's one very smart guy (Kenny), his unpaid volunteer sidekick (me), and a few unpaid external contributors. (I'm trying to draw a line between those with and without commit access, hence all the edits.)

There's no other internal or external Microsoft /support/ that I'm aware of. I wouldn't necessarily use it as a signal of the company's intentions at this time.

That said, there are Microsoft folks working on the Rust compiler, toolchain, etc. side of things too. Maybe those are better indicators!


That's disappointing on Microsoft's part, because their docs make it seem like windows-rs is the way of the future.

Thanks for your work, though!


Don't be - they also killed C++/CX, and even went to CppCon 2016 telling us what a great future C++/WinRT would bring us.

Now, almost a decade later, VS tooling is still not there, stuck in an ATL/VC++ 6.0-like experience (they blame it on the VS team), C++/WinRT is in maintenance with only bug fixes, and all the fun is on Rust/WinRT.

I would never trust this work for production development.


I wish Microsoft had any direction on the 'way of the future' for native apps on Windows


If they did publish a “way of the future” direction, would you believe them?

Fool me N times then shame on them, fool me N+1 times, then shame on me sort of thing.


The most infuriating thing is their habit of rebuilding things just about the time they reach a mature and highly stable state, creating an entirely new unstable and unreliable system. And then, by the time that system almost reaches a stable state, it's scrapped and it all starts over again.

WPF -> UWP -> WinUI -> WinUI 2 -> WinUI 3 is just such a ridiculous chain. WPF was awesome, highly extensible, and could have easily and modularly been extended indefinitely - while also maintaining its widespread (if unofficial) cross platform support and just general rock solid performance/stability. Instead it's the above pattern over and over and over.

And now it seems WinUI 3 is also dead, alas without even bothering with a replacement. Or maybe that's Xamarin, wait, I mean MAUI? Not entirely joking - I never bothered to follow that seemingly completely parallel system doing pretty much the same things. On the bright side this got me to finally migrate away from Microsoft UI solutions, which has made my life much more pleasant since!


I'd have bought into MAUI if there was Linux support in the box.


I'd say the inertia is far more social than codebase-size related. Right now, whilst there are pockets of interest, there is no broader reason to switch. Bevy, as the leading contender, isn't going to magic its way to being capable of shipping AAA titles unless a studio actually adopts it. I don't think it's actually shipped a commercially successful indie game yet.

Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.


> Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.

Balatro convinced me that Love2D might be a good contender for my next small 2D game release. I had no idea you could integrate Steamworks or 2D shaders that looked that good into Love2D. And it seems to be very cross-platform, since Balatro released on pretty much every platform on day 1 (with some porting help from a third party developer it seems like).

And since it's Lua based, I should be able to port a slightly simpler version of the game over to the Playdate console.

I'm also considering Godot, though.


There’s a pretty big difference between the Playdate and anything else in performance but also in requirements for assets. So much so I hope your idea is scoped accordingly. But yeah Love2d is great.


It is. I've already half ported one of my games to the Playdate (and own one), I'm pretty aware of its capabilities.

The assets are what I struggle with most. 1-bit graphics that look halfway decent are a challenge for me. In my half-ported game, I just draw the tiles programmatically, like I did in the Pico-8 version (and they don't look anywhere near as good as a lot of Playdate games, so I need to someday sit down and try to get some better art in it).


There are a few successful games like Tunnet [1] written in Bevy.

[1]: https://store.steampowered.com/app/2286390/Tunnet/


Looks cool and well received, but at ~300ish reviews it's hardly a shining beacon if we extrapolate sales from that. But I'll say that's a good start.


Speaking as a Godot supporter, I don't think sales numbers of shipped games are relevant to anyone except the game's developer.

When evaluating a newer technology, the key question is: are there any major non-obvious roadblocks? A finished game (with presumably decent performance) tells you that if there are problems, they're solvable. That's the data.


Game engines are tools not fan clubs. It’s reasonable to judge them on their performance for which they are designed. As someone who cares about the commercial viability of their technology choices this is a small but positive signal.

What it tells me is someone shipped something and it wasn’t awful. Props to them!


> A finished game (with presumably decent performance) tells you that if there are problems, they're solvable.

It doesn't tell you anything about velocity, which is by far the most important metric for indie devs.

After all, the studio could have expended (maybe) twice as much effort to get a result.


Or maybe Rust allowed them to develop twice as fast. Who knows? We're going by data here, and this data point shows that games can be made in Bevy. No more and no less.


Agreed. We've learned a lot from Godot, by the way. I consider all us open source engines to be in it together :)


So far I am way less productive in rust than in any language I've ever used for actual work, so to rewrite an entire game engine would seem like commercial suicide.


"so far" is doing a lot of heavy lifting there =)

I was the same the first two times I tried to use rust (earnestly). However, one day it just "clicked" and my productivity exceeds that of almost anything else, for the specific type of work I'm doing (scientific computation)


I think we shouldn't expect any language to lead different programmers to the same experiences. Rust has the initial steep learning curve, and after that it's a matter of taste whether one is willing to forge on and turn it into a honed tool. Also, I think it's clear that Rust excels in some fields far more naturally than in others. Making blanket statements about how Rust, or any language, is (un)productive is a disservice to everyone.


Yes, the Google folks are also funding efforts to improve Rust/C++ interop, per https://security.googleblog.com/2024/02/improving-interopera...


Thanks for the link. This one was also posted a while back in a Rust comment, and when I first read it I thought Google had used Rust in the V8 sandbox, but re-reading it, it seems that the article uses Rust as an ‘example’ of a memory safe language but does not explicitly say that it uses Rust. Maybe someone with more knowledge can confirm that Rust was (or was not) used in the V8 Google Chrome sandbox example...

https://v8.dev/blog/sandbox


Rust is not used in V8, to my knowledge.


That description of problems bodes well for Zig


Theoretically accessible describes the experience of trying to use D3D from C very well!

I was trying to use it with some kind of GCC for Windows. The C++ part was still lacking some required features, so it was advised to use D3D from C instead of C++. There were some helper macros, but overall I was glad when Microsoft started to release their Express (and later Community) Editions of Visual Studio.


I access D3D(11) from C in my libraries and tbh it's not any different from C++ in terms of usability (only difference is that the "this" argument and vtable indirection is implicit in C++, but that's just syntax sugar that can be wrapped in a macro in C).


Not true anymore - C11 and C17 are either supported or coming:

https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...


Not really relevant to 30 years ago though.


I worked on many of Activision's games from 1995-2000, and C++ was the overwhelming choice of programming language for PC games. C was more common for consoles. In 1996, the quality of the MSFT IDE/compiler, plus the CPUs available at the time, was such that it could take an hour to compile a big game. By 1998 it was a few minutes. As I recall, I think MSFT purchased another company's compiler and that really changed Visual Studio.


I was a developer on the Microsoft C++ compiler team from 1991 to 2006. We definitely didn't purchase someone else's compiler in that time. We looked at the EDG front end at various times but never moved over to it while I was there.

Perhaps the speed-up you remember had something to do with the switch-over from 16 bits to 32, which would have been the early to mid 90s. Or you're thinking of Microsoft's C compiler starting from Lattice C, back in the 80s before my time. There was also a lot of work done on pre-compiled headers to speed compilation in the latter half of the 90s (including some that I was responsible for).


I heard that early versions of C++ IntelliSense from Visual Studio used Edison Design Group's (EDG) front end. Is that true? No trolling here -- honest question. If yes, are they still using it now?


Not true by the time I retired in 2007, but I've got a vague memory of talking to someone on the C++ front-end team some time after that and EDG for IntelliSense being mentioned. So no idea if that's really true or not, and if so, whether that's true today.

I was heavily involved in the first version of C++ IntelliSense, roughly 1997?, and it was all home-grown. It was also a miracle it worked at all. I've blocked out most of the ugly details from my memory, but parsing on the fly with a fast enough response time to be useful in the face of incomplete information about which #if branches to take and, especially, template definitions was a tower of heuristics and hacks that barely held together. Things are much better nowadays with more horsepower available to replace those heuristics.


I was a teenager at that point. I learnt C in the early 90s and C++ after 96 IIRC. Didn’t start professionally in games until 2004 though!


> and didn't become popular for gamedev until the turn of the millenium

Wasn't this also because Microsoft had terrible support for C?

Since the mid-90's, a number of gamedevs moved to C++ but were unhappy with the results: how OOP works, exception handling, the STL, etc.

My understanding is that by the late 90's many game developers, despite using C++, were still coding more in line with C programming than (proper) C++.

Mostly C code, but using some features of C++, like functions inside a struct or namespaces, that did not sacrifice compilation or runtime speed.


We wrote this in C++ (and assembler), but used only the most obvious language features. We laid down the first code in '95 or '96:

https://www.youtube.com/watch?v=9UOYps_3eM0


Yeah, the gaming industry has become mature enough to build up its own inertia, so it will take some time for new technologies to take off. C# has become a mainstream gamedev language thanks to Unity, but this also took more than a decade.


Comparing the time it takes for a programming language to spread in the 80s versus today is a bad vantage point. Stuff took much longer to bake back then -- but even so the point is moot; as other commenters pointed out, it has taken roughly the same amount of time, from 2015 to today.


Hmm, I don't agree. We're far away from the frantic hardware and software progress of the 80s and 90s. Especially in software development, it feels like we've been running in circles (but very, very fast!) since the early 2000s, and things that took just a few months or at most 2-3 years to mature in the 80s or 90s now take a decade or more.


The concept of AAA games didn't even exist back in 1985; very few people were developing games in that era, and even fewer were writing "complex" games that would need C++.

The SNES came out in 1990, and even then it had its own architecture and most games were written in pure assembly. The PlayStation had a MIPS CPU and was one of the first to popularize 3D graphics, the biggest complexity leap.

I believe you are seeing causation where only correlation should be seen. C++ and more complex OOP languages just joined the scene when the games themselves became complex, because of the natural evolution of hardware and the market.


Many tried C++ in the early 90s, but wasn't it too slow/memory intensive? You had to write lots of inline C/assembly to get a bit of performance. Nowadays everything is heavily optimized, but back then it wasn't.


If you’re referring to game dev specifically, there have been (and continue to be) concerns around the weight of C++ exception handling, which is deeply embedded in the STL. Those concerns gave rise to libraries like the EASTL. C++ itself, however, is intended to have as many zero-cost abstractions as possible/reasonable.

The cost of exception handling is less of a concern these days though.


Exception handling is easy enough to disable. Luckily, or C would probably still be the game developer's go-to.


Seems like a few contradictory ideas here. Rust is supposed to be a better, safer C/C++.

Then there are a lot of comments here saying that games are best done in C++.

So why can't Rust be used for games?

What is really missing, beyond an improved ecosystem of tools, all also built on Rust?


> I'd expected some AAA title to be written in Rust by now.

Why? Those kinds of game engines are enormous amounts of code, and there's little incentive to rewrite.

I do strongly disagree that we aren't ever going to see large-scale game development in Rust; it just takes time. Whether games adopt an engine is largely about that engine's maturity rather than anything about the language. Bevy is quite young; 0.13 doesn't even have support for animation blending yet (I landed that for 0.14).


It was a few years back that the question was put to the developers of a Call of Duty title: "Is there still code from Quake 3 in COD?" They dodged around it by saying something like "we cannot deny this, but we use the most appropriate tech where needed".

While not confirmation, I wouldn't be surprised if there are a few nuggets of Q3 in that code base still doing some of the basics. It would be really cool if that's true.

It seems like unless you are someone like John Carmack or most of Nintendo, game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.


A neat real-world example of ancient Quake code surviving to this day is visible in Valve's games - the hardcoded patterns for flickering lights in Quake 1 survived into GoldSrc and then into Source and then into Source 2, most recently showing up in Half-Life: Alyx, 24 years on from their original appearance in Quake 1.

https://www.alanzucconi.com/2021/06/15/valve-flickering-ligh...
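
For a rough idea of how those lightstyle strings work (a Rust sketch of the scheme described in the linked article; the pattern string below is illustrative, the real hardcoded ones are in the article):

    // Each lightstyle is a string of letters advanced at roughly 10 characters
    // per second; 'a' is dark, 'm' is normal brightness, 'z' is extra bright.
    fn flicker_brightness(pattern: &str, time_seconds: f32) -> f32 {
        let frame = (time_seconds * 10.0) as usize % pattern.len();
        let c = pattern.as_bytes()[frame];
        (c - b'a') as f32 / (b'm' - b'a') as f32 // 'a' -> 0.0, 'm' -> 1.0
    }

    fn main() {
        let pattern = "mmamammmmammamamaaamammma"; // illustrative flicker pattern
        for i in 0..5 {
            let t = i as f32 * 0.1;
            println!("t={t:.1}s brightness={:.2}", flicker_brightness(pattern, t));
        }
    }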

Basically all of the bigger systems will have been Ship-of-Theseus'd several times over by now, but little things like that can slip through the cracks.


That light flickering is quite cool, thanks for sharing. It reminds me of the Wilhelm scream, but on a much smaller scale of course.


> game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.

Bingo. Rust's biggest strength is correctness. But games aren't mission critical, and gamers are very tolerant of bugs (maybe not on social media, but very few buggy games have had their sales impacted). Your biggest sale to AAA game devs is to engine programmers, to minimize tech debt. But as we are seeing with the current industry, that's not exactly something companies care about until it's too late.

Then on the indie level we get articles like this. Half the article ultimately came down to "it's faster to break things and iterate than to do it right once". Again, similar lack of need for bug-free games. In addition, few indie games are scoped to a point where they need a highly disciplined ECS solution to scale with.

The author even criticizes the "tech specs" community part of Rust gamedev. Different tools, different goals, different needs. IMO, I think Rust will help make some very robust renderers one day, but ultimately the scripting will be done in another language. Similar to how Unity uses C# scripting on top of a C++ engine, which they IL2CPP to bring back to a full C++ game.


This, exactly. As an embedded-turned-Unreal developer, the first impression I had while using Unreal is how little concern for correctness there is overall. UB is used liberally, and there's clearly a larger focus on development speed and ease of use compared to safety and correctness. If a game has integer overflows or buffer overflows, nobody cares. Conversely, you need to keep the whole thing usable enough for the various 3D artists and such who have a hard time understanding advanced programming.


If that's the question... let me assure you that there are decades-old pieces of code inside of, and used to assemble, many modern AAA games coming out of mature studios. The systems and tooling are typically carried forward. I don't think this is some big secret, and you've intuited exactly the reason why:

> game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.


Not surprised at all that this stuff sticks around. I find it very endearing actually. Ain't broke, don't fix it!


A lot of big projects have amazing longevity in their older architectural decisions. Unreal still has a lot of stuff in it that people who used UE1 would recognize; I did most of my professional development on UE3, and a bunch of that is still pretty recognizable. Similarly, Chrome is a product of the time it was first created. And looking into the Windows source is probably like staring into the stygian abyss.

There is a lot of legacy and tech debt out there!


I remember, years back, someone from Microsoft calling the Windows code base "The Abyss" because of how much technical legacy there was in it.

I think it was Steve Gibson who said that the Windows code base had some very questionable things in it. For instance they had work experience high school students working on code that made it into the final build that was less than spectacular. Like how Windows used to stall when you put a CD in and wouldn't proceed until the disc spun up and started reading data.

Windows 11 probably would still do that but I don't know because I don't have a disc drive any more.


It wasn't really Windows lagging, it was Explorer. There used to be more things in Explorer that were blocked on something ultimately blocked by I/O.

This tends to not be the case so much any more, so I doubt it would happen today.

Instead you get the dreaded "Working on it....". It seems like hard drives can be just as slow to spin up these days as CDs were back in the day.


Damn, I forgot about Explorer hanging when you put a CD in. That was especially terrible when you didn't have DMA.


"I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it."

100% this. As I say elsewhere in these threads: Rust is the language that Tokio ate. It isn't even just the async viral-chain effect; it's that, on the whole, crates for one async runtime are not even compatible with those of another, and so it's all really just about Tokio.

Which sucks, if you're doing, y'know, systems programming or embedded (or games). Because tokio has no business in those domains.


It does in my domain of systems programming with async data handling. Tokio works like a dream - slipping into the background and just working so I can concentrate on the business logic.


This seems strange to me. If you don't have millions of concurrent requests to handle at the same time, why would you bother with a whole async framework? Just straight up spawning OS threads to do parallel work when you need it is both easier to reason about and does not mess with your program's stack.

Isn't the point of async/await that spawning OS threads is not scalable when you reach ridiculous numbers of simultaneous blocking I/O? It doesn't sound like you're really dealing with this sort of problem.
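
For example, for a handful of parallel jobs, plain scoped threads are often all you need - a minimal sketch (data and names are made up):

    use std::thread;

    fn main() {
        let chunks = vec![vec![1, 2, 3], vec![4, 5], vec![6, 7, 8, 9]];
        let mut sums = vec![0; chunks.len()];

        // One OS thread per chunk; no async runtime involved.
        thread::scope(|s| {
            for (chunk, out) in chunks.iter().zip(sums.iter_mut()) {
                s.spawn(move || {
                    *out = chunk.iter().sum();
                });
            }
        }); // all spawned threads are joined here

        println!("{sums:?}"); // [6, 9, 30]
    }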


I know this is a late reply to your post, but your wording prompted a question. I will preface by saying this is not some sort of semantic flamebait; it is also not supposed to be a gatekeeping exercise. You state your domain is systems programming, but then talk about the event loop and scheduler for your program as ancillary details and say that your concentration is on business logic. I tend to view systems programming as development of things that have no business logic, because that is the domain of application programming. Also, I tend to think that a defining feature of systems programming is development that cannot just accept a default solution to something as impactful as an event loop/scheduler/executor, but has to focus deeply on those aspects of a program that are the crux of its actual computational operation and the interactions between those parts.

In the context of games, the systems programming is the renderer, audio engine, physics calculations, and things like a task system and dispatcher/scheduler, etc. As compared to the actual application specifics of levels, art, dialogue, interactions, UI, etc which to me are not systems programming.

With that said, how do you define systems programming? I’m really interested in how various devs tend to view the ‘cut-off’ between systems and application development. Sometimes I’m pretty sure I am on the extreme end of disjointness of the two and non-accepting of any ‘business logic’ type development qualifying as systems programming.

TL;DR - What is your definition of systems programming and do you include things like ‘business logic’ within that definition?


Even within what you discuss, things like renderers, audio engines and physics calculations have business logic, which I interpret as being the logic pertinent to their specific tasks, as opposed to support logic. Clearly these sorts of terms are heavily overloaded, so please don't get too hung up on the precise term I used.

That said, I think the view of systems programming is more relevant. My understanding is essentially the same as Wikipedia: "systems programming aims to produce software and software platforms which provide services to other software, are performance constrained, or both". I don't see business logic excluded from that definition.

For context, the area I use it is in direct interaction with an FPGA in the middle layer of a bigger system. The software acts as a performance critical controller of the FPGA and data marshalling system, controlling the DMAs and shunting the data into the network subsystem. Another bit of the system on different hardware then receives the data and does some performance critical signal processing before passing the result to the application layer. The "systems programming" stuff is responsible for translating high level application API commands into low level FPGA control and low level FPGA data and feedback into high level application structures.

Async works really well on the data handling. I have a full back pressure chain from the application, across the network, across the DMA subsystem right down to the FPGA. It also allows careful pinning of different tasks to different cores with pinned runtimes, which is important in maximising the network throughput on the resource-limited CPU cores.

Rust async is great for this kind of stuff. I read a post a while ago, which I annoyingly can't find anymore, in which the author was using a custom reactor and executor to hide cache latency. It was really beautiful and incredibly simple and annoyingly forgotten by me (!).


Disappointing to hear this after battling the same nonsense in JS for years.


It's just endemic to the industry. Framework-itis


Rust is a language made and used by Dunning-Kruger people who violently react to having to learn the prior art.

What did you really expect?


Rust's async/await design makes a lot of sense when you consider its primary goals (C interop, low level control, zero cost abstractions, etc.). Sure, perhaps most of us should be using a language with different constraints as opposed to Rust.


> The "async" system is optimized for someone who needs to run a very large web server,

Even there it's very problematic at scale unless you know what you're doing. async/await isn't zero cost, regardless of what people will tell you.


Absolutely. Async/await typically improves headroom (scalability) at the cost of latency and throughput. It may also make code easier to reason about.


I disagree with this, you're probably not paying much (if at all) in latency or throughput for better scaling.

What you're paying for with async/await is a state machine that describes the concurrent task, but that state machine can be incredibly wasteful in size due to the design of futures and the desugaring pass that converts async/await into the state machine.

That's why I said it's not "zero cost" in the loosest definition of the phrase - you can write a better implementation by hand.


That is true. Rust's async/await desugaring is still missing optimizations. I think that will be ironed out eventually. What mainly concerns me about async/await is that, even with Rust's best efforts, the baseline complexity will probably always be somewhat higher than for sync code. I will be pleased if the gap is minimized and people only need to reach for async when they want to. Right now, the latter isn't the case because of the "virality [of] function coloring".


Definitely makes code harder to reason about.


If you were to write the same code without using async you'd be trudging through a mess of callbacks and combinators. This is what writing futures code before 2018 was like. It was doable if you needed the perf but it sucked. Async is a huge improvement to readability and reasoning that we didn't have before.


No, actually that was just javascript. Programming environments with threading models don't have to live that way. Separate threads can communicate through channels and do quite well for themselves. See, how it works is, you do something like let data = file.read(); and then it just sits there on that line until the read is done, and then your data has the actual bytes in it and you just use them and go on with your life.


> you do something like let data = file.read(); and then it just sits there on that line until the read is done, and then your data has the actual bytes in it and you just use them and go on with your life.

That's exactly how async/await works, except that it translates to state machines under the hood which gives you great performance. No need to mess with threading models, at all.


Yeah, Rust's async/await and lightweight threads are functionally very similar. Function coloring is a problem with async/await, though (for now?).


Until you need cancellation


One rarely really needs that.


Maybe you are both right but your scales are orders of magnitude apart.


> at the cost of latency and throughput.

Compared to what?

Doing epoll manually?


A reactor has to move the pending task to some type of work queue. The task has to be pulled off the work queue. The work queue is oblivious to the priority of your tasks. Tasks aren't as expensive as context switching, but they aren't free either: e.g. they're likely to ruin CPU caches. Less code is fewer instructions is less time.

If you care enough, you generally should be able to outdo the reactor and state machines. Whether you should care enough is debatable.


The cache thing is a thing I think a lot of people with a more... naive... understanding of machine architecture don't clue into.

Even just synchronizing on an atomic can thrash branch prediction and L1 caches both, let alone working your way through a task queue and interrupting program flow to do so.


So yeah, you're thinking about the comparison between async/await and manual state machine management with epoll. But that's not what most people have in mind when you say async/await has a performance impact; most of them would immediately think you're talking about the difference with threads.


If I'm not doing slow blocking I/O, I'm not doing epoll anyways.

But the moment somebody drops async into my codebase, yay, now I get to pay the cost.


Either you are doing slow IO (in some of your dependencies) or you don't have anyone dropping async in your code though…


Threading, probably.


Async/await isn't related to threading (although many users and implementations confuse them); it's a way of transforming a function into a suspendable state machine.


Games need async/await for two main reasons:

- coding multi-frame logic in a straightforward way, which is when transforming a function into a suspendable state machine makes sense

- using more cores because you're CPU-bound, which is literally multithreading

Both cases can be covered by other approaches, though:

- submitting multi-frame logic as job parameters to a separate system (e.g., tweening)

- using data parallelism for CPU-intensive work
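To make the second alternative concrete, here's a minimal sketch of the data-parallel route, assuming the rayon crate (the function and field layout are illustrative, not from any particular engine):

    // CPU-bound per-entity work spread over a worker pool, no async involved.
    use rayon::prelude::*;

    fn integrate(positions: &mut [[f32; 3]], velocities: &[[f32; 3]], dt: f32) {
        positions
            .par_iter_mut()                // rayon splits the slice across threads
            .zip(velocities.par_iter())
            .for_each(|(p, v)| {
                for i in 0..3 {
                    p[i] += v[i] * dt;     // pure per-element work, nothing awaits
                }
            });
    }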


I know. But threading, and earlier processes, were less scalable but potentially faster ways of handling concurrent requests.


It's also much easier to reason about, since scheduling is no longer your problem and you can just write sequential code.


That's one way to see it. But the symmetric view is equally valid: async/await is easier to reason about because you see where the blocking points are, instead of having to guess which function is blocking or not.

In any case you aren't writing sequential code, it's still concurrent code, and there's a trade-off between the simplicity of writing it as if it were sequential code and the simplicity of reading code where the suspension points are written down explicitly.

This “write-time vs read-time” trade-off is everywhere in programming BTW; it's also the difference between errors-as-return-values and exceptions, or between dynamic and static typing, for instance.


I don't think so, because there isn't a performance drawback compared to threads when using async. In fact there's literally nothing preventing you from using a thread per task as your future runtime and just blocking on `.await` (and implementing something like that is a common introduction to how async executors run under the hood so it's not particularly convoluted).

Sure there's no reason to do that, because non-blocking syscalls are just better, but you can…
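For the curious, that thread-parking executor is genuinely tiny; a minimal sketch using only std (essentially the example from the std::task::Wake docs):

    use std::future::Future;
    use std::pin::pin;
    use std::sync::Arc;
    use std::task::{Context, Poll, Wake, Waker};
    use std::thread::{self, Thread};

    // A waker that unparks the thread blocked on the future.
    struct ThreadWaker(Thread);

    impl Wake for ThreadWaker {
        fn wake(self: Arc<Self>) {
            self.0.unpark();
        }
    }

    // Drive a future to completion on the current thread, parking between polls.
    fn block_on<F: Future>(fut: F) -> F::Output {
        let mut fut = pin!(fut);
        let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
        let mut cx = Context::from_waker(&waker);
        loop {
            match fut.as_mut().poll(&mut cx) {
                Poll::Ready(out) => return out,
                Poll::Pending => thread::park(),
            }
        }
    }

    fn main() {
        println!("{}", block_on(async { 40 + 2 }));
    }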


> I don't think so, because there isn't a performance drawback compared to threads when using async.

There is. When you write async functions, they get split into state machines and units of non-blocking work which need to be added and taken from work queues. None of this has to happen if you just spawn an OS thread and tell it "execute this function". No state machine, no work queue. It's literally just another sequential program that can do blocking I/O independently of your main thread.

If you insist on implementing a thread-based solution in exactly the same way that an async solution would be implemented, then yes, they'll both pay the price of the convoluted runtime. The point is, there's no need to do that.


Threading is compatible with async


"threading alone" as in a thread per request.


> I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

The main reason is that you can't ship that Rust code on PS5 in a sensible manner. People have tried, got useless toys to compile, but in the end even Embark gave up. I remember seeing something from them that they had moved Rust to server-only.


> The main reason is that you can't ship that Rust code on PS5 in a sensible manner.

Really - why’s that?


Sony requires that you use their tooling, which you can only get under NDA.


If there was significant pressure from developers Sony would allow Rust. I doubt there is any.


It's a catch 22 - you can't deploy Rust so no one uses Rust for anything, no one uses Rust for anything so there is no reason for Sony to work on Rust deployment.

I think it would be a really good fit for certain parts of the engine - serialization code especially. We have massively complicated C++ code parsing network packets and all sorts of similar sketchy things; it always scares me when I see it.


Really a shame that there's that sort of thing going on in 2024 too.


I remember a meeting of local gamedevs with sony in '95. A guy at the back piped up with "So when will the C++ compiler be ready? We've written our whole game in C++".

Crickets. Two Sony dudes at the front look at each other like, "You tell him".

IMHO Rust is the wrong language for game development. But so is C++ TBH.


> I tend to agree about the "async contamination" problem.

Argh, I have the same issue. Sure, if you write JS or Python you probably need async. My current Java back end, which has like 5 concurrent users, does not need async-everything that makes it 10x more complex.


> I'd expected some AAA titles to be written in Rust by now.

"AAA" titles are huge and/or high dev budgets. Even if a game is "starting from scratch" the engine development team are still likely taking code from previous projects to get started. Of course there are other factors. It could be a BIG RISK to move to another programming language when the team, despite frustrations, are already familiar with something else... like the perks C++ brings (you learn from trial-and-error)

Could you imagine learning Rust as-you-go... building a AAA title... and fighting the compiler? To me it is a huge risk!

That is my opinion.. but I am sure others will disagree. If there is anyone on (or did) a AAA title with Rust... I would be happy to hear more about it.

I am not saying it will never happen. Maybe a AAA title is currently in development in Rust. I honestly don't know. However, game developers... if they are looking into Rust... are also looking at Odin, Jai, or Zig. For gaming, I think they are better alternatives than Rust but (again) that is my opinion.

Now for smaller, indie games - the possibility of moving to Rust (or another language) is more likely. Likely a fair percentage have moved away from C++ now.


> * There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

At one point the studio behind the Finals was writing game server code in Rust with an Unreal engine client. Not sure if that's true still


The studio you're talking about is Embark studios, and is openly pretty big on Rust [1] I think it was rumored that their next project will use a Rust game engine, but I am not sure how it's going now.

[1] https://github.com/EmbarkStudios/rust-ecosystem


Their creative sandbox project is full Rust from client to server I believe. I haven't kept up with it after trying the closed alpha a while ago but it looks like it's still going, and has a name now: https://wim.live

It's still only listed as coming to PC, Mac, Linux and Android so I guess they haven't broken through the barrier of shipping Rust on consoles.


Backend 3d code?


I'm not familiar with the domain, but wouldn't 3D collision checking be considered backend 3D code? Even if it's not rendered, it still needs to be calculated.


Server side rendering for games.


That's a thing?


Absolutely! Any sort of multiplayer game needs a source of authority if you want to prevent cheats like a hacked client lying about its position, and a really good way to do that is load the geometry of your level and run physics checks server side at a lower frequency than once per frame. Godot and Unity both support headless builds for exactly this reason, it's basically the whole game engine, minus the renderer, audio, and UI systems, usually.


That is not server side rendering. Per your own comment:

> minus the renderer

(Otherwise you are completely correct.)

Closest I can think of is server side ragdolls that are rendered the same on all screens and similar stuff.


Yep, Stadia might have failed, but GeForce Now and XBox Cloud Gaming have enough customers to keep them going.


That’s completely different. They are rendering the client and streaming it to users. That doesn’t make the client side code “server side” any more than you streaming Fortnite on Twitch does.


Nope, XBox XDK has facilities for code to be aware of rendering server side.


> The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests.

Can you please elaborate on this? I see a lot of similar concerns in other contexts too. Linux kernel's scheduler for example. Is it a throughput/latency tradeoff?


The current popularity of the async stuff has its roots in the classic "c10k" problem. (https://en.wikipedia.org/wiki/C10k_problem)

A perception among some that threads are expensive, especially when "wasted" on blocking I/O. And that using them in that domain "won't scale."

Putting aside that not all of us are building web applications (heterodox here on HN, I know)...

Most people in the real world with real applications will not hit the limits of what is possible and efficient and totally fine with thread-based architectures.

Plus the kernel has gotten more efficient with threads over the years.

Plus hardware has gotten way better, and better at handling concurrent access.

Plus async involves other trade-offs -- running a state machine behind the scenes that's doing the kinds of context switching the kernel & hardware already potentially does for threads, but in user space. If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense for what the overhead here we're talking about is.

That overhead is fine if you're sitting there blocking on your database server, or some HTTP socket, or some filesystem.

It's ... probably... not what you want if you're building a game or an operating system or an embedded device of some kind.

An additional problem with async in Rust right now is that it involves bringing in an async runtime, and giving it control over execution of async functions... but various things like thread spawning, channels, async locks, etc. are not standardized, and are specific per runtime. Which in the real world is always tokio.

So some piece of code you bring in in a crate, uses async, now you're having to fire up a tokio runtime. Even though you were potentially not building something that has anything to do with the kinds of things that tokio is targeted for ("scalable" network services.)

So even if you find an async runtime that's optimized in some other domain, etc (like glommio or smol or whatever) -- you're unlikely to even be able to use it with whatever famous upstream crate you want, which will have explicit dependencies into tokio.
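To make that concrete, here's roughly what you end up writing; `fetch` is a hypothetical stand-in for whatever async-only API the upstream crate exposes, but the runtime boilerplate around it is the real part:

    // Stand-in for an async-only function from some upstream crate (hypothetical).
    async fn fetch(url: &str) -> String {
        format!("response from {url}")
    }

    fn main() {
        // Even though this program is otherwise synchronous, the async dependency
        // means constructing and carrying a tokio runtime (multi-threaded by default).
        let rt = tokio::runtime::Runtime::new().expect("failed to start tokio");
        let body = rt.block_on(fetch("https://example.com"));
        println!("{body}");
    }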


> If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense for what the overhead here we're talking about is.

So I didn't quite do that, but the overhead was interesting to me anyway, and as I was unable to find existing benchmarks (surely they exist?), I instructed the computer to create one for me: https://github.com/eras/RustTokioBenchmark

On this wee laptop the numbers are 532 vs 6381 cpu cycles when sending a message (one way) from one async thread to another (tokio) or one kernel thread to another (std::mpsc), when limited to one CPU. (It's limited to one CPU as rdtscp numbers are not comparable between different CPUs; I suppose pinning both threads to their own CPUs and actually measuring end-to-end delay would solve that, but this is what I have now.)

So this was eye-opening to me, as I expected tokio to be even faster! But still, it's 10x as fast as the thread-based method. A straight-up callback would still be a lot faster, of course, but it would affect the way you structure your code.

Improvements to methodology accepted via pull requests :).


I'd want to see perf stats on branch prediction misses and L1 cache evictions alongside that though. CPU cycles on their own aren't enough.


It doesn't seem my perf provides metric for L1 cache evictions (per perf list).

Here's the results for 100000 rounds for taskset 1 perf record -F10000 -e branch-misses -e cache-misses -e cache-references target/release/RustTokioBenchmark (a)sync; perf report --stat though:

async

    Task 2 min roundtrip time: 532
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0,033 MB perf.data (117 samples) ]

    ...    
    branch-misses stats:
              SAMPLE events:         54
    cache-misses stats:
              SAMPLE events:         27
    cache-references stats:
              SAMPLE events:         36
sync

    Thread 2 min roundtrip time: 7096
    [ perf record: Woken up 5584 times to write data ]
    [ perf record: Captured and wrote 0,367 MB perf.data (7418 samples) ]

    ...
    branch-misses stats:
              SAMPLE events:       6577
    cache-misses stats:
              SAMPLE events:        159
    cache-references stats:
              SAMPLE events:        682


Interesting. Thing is all you're benchmarking is the cost of sending a message on tokio's channels vs mpsc's channels.

It would be interesting to compare with crossbeam as well.

But not sure this reflects anything like a real application workflow. In some ways this is the worst possible performance scenario, just two threads spinning and spinning at the fastest speed they can, dumping messages into a channel and pulling them out? It's a benchmark of the channels themselves and whatever locking/synchronization stuff they use.

It's a benchmark of a "shared concurrent data" situation, with constant synchronization. What would be more interesting is to have longer running jobs doing some task inside themselves and only periodically (every few seconds, say) synchronizing.

What's the tokio executor's settings by default there? Multithreaded or not? I'd be curious how e.g. whether tokio is actually using multiple threads or not here.


Actually I wasn't that interested in throughput, only the latency in terms of instructions executed since sending until it is received, though indeed the throughput is also superior with tokio.

For most applications this difference doesn't really matter, but maybe some applications do a lot of small things where it does matter? In those cases it might be an easy solution to switch from standard threads to tokio async and gain 10x speed, as the structure of the applications remains the same.

> It's a benchmark of the channels themselves and whatever locking/synchronization stuff they use.

Yeah, in retrospect some mutex-benchmark might be better, though I don't expect a message channel implemented on top of that is noticeably slower. A mutex benchmark is probably easier to get wrong..

> What would be more interesting is to have longer running jobs doing some task inside themselves and only periodically (ever few seconds, say) synchronizing.

I don't quite see how this would give any different results. Of course, in that case the time it takes to transmit the message would be completely meaningless.

> What's the tokio executor's settings by default there? Multithreaded or not? I'd be curious how e.g. whether tokio is actually using multiple threads or not here.

It's using the multithreaded executor. I tried the benchmark with #[tokio::main(worker_threads = 1)] and with 2; with =1 the result was 529, while with =2 it was 566.


> Putting aside that not all of use are building web applications

Perfect moment to mention "rouille" which is a very lightweight synchronous web server framework. So even when you decide to build some web application you do not necessarily have to go down the tokio/async route. I have been using it for a while at work and for private projects and it turned out to be pretty eye-opening.
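For anyone curious what that looks like in practice, a minimal sketch from memory of rouille's API (treat the details as approximate): the handler is an ordinary closure that blocks, and there is no async runtime anywhere.

    fn main() {
        // Blocking, thread-based request handling; start_server never returns.
        rouille::start_server("0.0.0.0:8000", move |request| {
            match request.url().as_str() {
                "/" => rouille::Response::text("hello from a synchronous server"),
                "/health" => rouille::Response::text("ok"),
                _ => rouille::Response::empty_404(),
            }
        });
    }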


Hit the nail on the head.

Unless you're really dealing with absurd numbers of simultaneous blocking I/O, async has entirely too many drawbacks.


>now you're having to fire up a tokio runtime

I've been developing in (mostly async) Rust professionally for a about a year -- I haven't written much sync rust other than my learning projects and a raytracer I'm working on, but what are the kind of common dependencies that pose this problem? Like wanting to use reqwest or things like that?


> Like wanting to use reqwest or things like that?

Yes. Reqwest cranks up Tokio. The amount of stuff it does for a single web request is rather large. It cranks up a thread pool, does the request, and if there's nothing else going on, shuts down the thread pool after a while. That whole reqwest/hyper/tokio stack is intended to "scale", and it's massive overkill for something that's not making large numbers of requests.

There's "ureq", if you don't want Tokio client side. Does blocking HTTP/HTTPS requests. Will set up a reusable connection pool if you want one.


reqwest also has a blocking version, which I use in projects not already using an async rt

https://docs.rs/reqwest/latest/reqwest/blocking/index.html


The blocking implementation still depends on and uses tokio, last I looked.

I've seen this with multiple Rust packages. "Yes, we offer a synchronous blocking version..." and then you look and it's calling rt.block_on behind the scenes.

Which is a pretty large facepalm IMHO


You don't have to do that, Tokio also provides a single-threaded runtime that just runs async tasks on the main thread.
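Roughly like this, if I remember the builder API right (the sleep is just there to have something to await):

    fn main() {
        // A current-thread runtime: no worker pool is spawned, tasks are
        // polled right here on the calling thread.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all() // timer + I/O drivers
            .build()
            .expect("failed to build runtime");

        rt.block_on(async {
            tokio::time::sleep(std::time::Duration::from_millis(10)).await;
            println!("ran on the main thread");
        });
    }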


I'm happy to see someone still doing some work in second life.


There's a lot going on. Someone is doing a new third party viewer, Crystal Frost, in Unity. Linden Lab has a mobile viewer in alpha test. Rendering is PBR now for new objects. There are mirrors! Content upload is moving to glTF, to be compatible with everybody else. Voice is switching from Vivox to WebRTC. Game controller support is in test. New users get better avatars. The dev staff is larger.

None of this is yet increasing Second Life usership much, but it remains the best metaverse around.

I thought the metaverse thing was going to be bigger. Meta spent so much money to produce so little.


> There's a lot going on.

I'd like to use the opportunity to ask: What happened during the COVID pandemic? I haven't heard/read anything about Second Life during the pandemic, even though it was probably a once-in-a-lifetime opportunity.

Are there any news sources that you can recommend for keeping an eye on Second Life? It doesn't seem to get much press coverage.


> What happened during the COVID pandemic?

Usage went up about 10%, and then leveled off. Logged in right now, at 0020 PDT: 32084 users. Varies between 30,000 and 50,000 around the clock.

> News sources

* https://modemworld.me/

* https://ryanschultz.com/


As a game developer for about two decades, I've never considered Rust to be a good programming language choice.

My priorities are reasonable performance and the fastest iteration time possible.

Gameplay code should be flexible, we have tons and tons of edge cases _by design_ because this is the best way to create interesting games.

Compilation time is very important, but so is a flexible enough program structure; moving things around and changing your mind about the most desirable approach several times a day is common during heavy development phases.

We almost never have specifications, almost nothing is set until the game is done.

It is a different story for game engines, renderers, physics, audio, asset loaders etc. those are much closer to system programming but this is also not where we usually spend the most time, as a professional you're supposed to either use off-the-shelf engines or already made frameworks and libraries.

Also, ECS is, IMHO, a useful pattern for some systems, but it is a pain in the butt to use with gameplay or UI code.


> It is a different story for game engines, renderers, physics, audio, asset loaders etc. those are much closer to system programming but this is also not where we usually spend the most time, as a professional you're supposed to either use off-the-shelf engines or already made frameworks and libraries.

But this is where industry interest (the little there is) lies for Rust, is it not? This is what the AAA studios that are researching and prototyping are working on.

C++ is not a popular language to implement the actual game in for all the reasons you list. It is too slow to compile and too rigid. The people who actually build the games, make them tick, are all working in visual scripting languages.


Visual scripting languages are easy to use and practical for low-complexity code, but they scale very poorly once the complexity increases.

Gameplay code is still better written with code, C# or C++ or sometimes Lua.


> visual scripting languages

I'm surprised no one has made such a language designed from the ground up to be used that way with Rust. Nim/CoffeeScript come to mind, but they target non-Rust languages. Lua would be close enough if it weren't so alien to everything people like about Rust.


Someone did actually create a scripting language specifically designed to work with rust: https://rhai.rs/
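A minimal sketch of what embedding it looks like, from memory of rhai's API (the `heal` function is just an illustrative stand-in for gameplay logic):

    use rhai::Engine;

    fn main() -> Result<(), Box<rhai::EvalAltResult>> {
        let mut engine = Engine::new();

        // Expose a native Rust function to scripts.
        engine.register_fn("heal", |hp: i64, amount: i64| hp + amount);

        // Gameplay-ish logic lives in the script and can be tweaked without recompiling.
        let new_hp = engine.eval::<i64>("heal(40, 2)")?;
        println!("hp = {new_hp}");
        Ok(())
    }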


As a non-game dev who uses Rust and Elixir, Rust wouldn't be my first pick for a large gamedev studio for multiple reasons. As for alternatives worth evaluating: Crystal, Cython (compiled Python), or Nim could result in increased gamedev productivity over C++ or C#. Maybe even Go because the iteration and compile times are very fast, and the learning curve is very low.


Often in the past Lua has been used and in my experience it's been quite nice. It's very easy to bind, there's some nice editors out there and the performance is decent.

There's some other game-specific scripting languages that have popped up (angelscript and wren come to mind but there's more). I've not used them in full production products though. Mostly just kicked the tires.

Now that I think about it though, it's been almost 6 years since I've worked on an engine with lua support. Mainly because in the last few years I've been working with unity or unreal.


| Go because the iteration and compile times are very fast

Safety is important and for certain applications, Rust is unrivaled.

But for games, like web apps, where time to market and innovation can be just as if not more important than being free of runtime errors, Go is more suited to rapid development than Rust on compile times alone.

Of course, the libraries and support for both aren't quite there yet, so at this point neither is well suited to game dev.


I agree. We almost have a paradox of choice nowadays because it's easier than ever to create new language platforms. Rust is something different because its thesis is safety and performance by default, more or less optimized for systems development primarily, but at the bargain of making dangerous things more complicated to accomplish somewhat intentionally. Unconventional languages are sometimes used as a conspicuous challenge to attract developers or to attempt to move some parts of an industry into new territory.


> Cython (compiled Python), or Nim could result in increased gamedev productivity over C++ or C#

If you're starting from scratch, then maybe. Having had to crash-learn game dev (ex VFX systems person), Unity + C# is just so nice to use. Most of the easiness of Python, but with proper strict typing (which you can turn off, if you want).

plus the wealth of documentation, its great. I imagine unreal is quite good in that regard too.


>As for alternatives worth evaluating: Crystal, Cython (compiled Python), or Nim could result in increased gamedev productivity over C++ or C#.

I read on a recent HN thread that Crystal compilation is slow due to its type inference, IIRC.


Does Crystal support Hot Reloading? The slow compilation speed is a non-starter for me.


Gamedev industry already settled on almost perfect language for this task (C#) so there is little profit in trying to reinvent the wheel.

And by perfect I mean not the way Unity uses it but the way pure C# engines use it.


They have an interpreter mode now that is quite good and should be well-suited for these situations


Crystal doesn't support parallelism[1], which is a dealbreaker in this context (and for performance sensitive programs in general).

[1]=production grade; additionally, it seems that no work has been done on it for years.


Haha. Nope. Maybe Nim, V*, Go*, or Elixir would be a better choice for such a use-case.

* So fast, they really don't need HCR.


HCR lets you change things while the game is running, in its current state. A fast recompile does not.

Start game, wait for engine to initialize, select level, wait for it to load, move player or camera to desired location. Now iterate on something at that location via HCR. If you have to recompile and restart the game you're not going to have fast iteration



I haven't tried it yet but I've wondered if Elixir might be a good choice for a game server with many concurrent players.


Definitely and for chat.

BEAM/HiPE VM allows native linking using NIFs so it's possible to integrate Erlang or Elixir with C-compatible projects for critical code sections, library interfacing, and perhaps even the majority of a performance-critical game engine as native code. Rustler also exists to write NIFs in Rust. Recall how VMware ESXi core tech was implemented mostly as Linux kernel modules and heavily-modified Linux to turn it inside-out as a type-1 hypervisor.


Go is infamous for its gc latency spikes, which is the thing that games cannot tolerate.

Though 1.18 helped a lot, you'd have to do some major persuasion to game devs that Go's gc is the kind of thing they'd want in their game.

---

EDIT: Not sure about the downvote; Go is known (historically at least) for its unsuitability for RTC or game dev.


I’ve heard that Go has a very low-latency GC; I haven’t heard of it having spikes.


The problem with Go is its inadequate FFI, which is important for gamedev which tends to be FFI and syscall-heavy due to embedding another gamescript language and/or calling into underlying rendering back-end, sometimes interacting with input drivers directly, etc.

Which is why C# has been chosen so often (it has performance not much worse than C++ (you can manually optimize to match it), zero or almost zero-cost FFI, and can also be embedded, albeit with effort).

There are also ways to directly reduce GC frequency by writing less allocation-heavy code, without having to resort to writing your own drop-in GC implementation (which is supported but I haven't seen anyone use that new API just yet aside from a few toy examples, I suppose built-in GC is good enough).


The overhead for Go in benchmarks is insane in contrast to other languages - https://github.com/dyu/ffi-overhead Are there reasons why Go does not copy what Julia does?


Go has a non-native stack and has to perform stack switching among other things (hopefully someone with more knowledge than the bare minimum required to criticize Go can chime in :D)

p.s.: mono seems to produce quite a bad result vs .net 6/7/8 huh, time to make a PR


Your comment is down voted because it is false. Go is not "infamous" for gc latency spikes.


It is probably true for game engine dev, but not generally true for game dev, which is a vast field and not as computationally demanding as many imagine. I believe Go's unwillingness to be less strict about some (non-type) semantics would be a bigger problem for game devs than GC.


That's true. Go ain't C4 (JVM), ORCA (Pony), HiPE (Erlang/OTP BEAM), or CLR (C#). The JVM and CLR runtimes have been beaten on for years at immense scale in server-side business settings. I wished Go supported embedded work (without a GC), had an alternative allocator a bit more like Erlang's, and had alternative implementations that transpiled to other languages, but it doesn't. Ultimately, I left when zillions of noobs poured in because it was seen as "easy" and started wasting my time rather than searching for answers themselves.

If performance were such a huge concern, I don't see any valid resistance to Rust, which completely lacks a GC and makes it easy to call C code, other than "it's something different", "there's too much hype", or "I don't like it". Recent development tools like RustRover make it really damn easy to see what's a moved value or a borrow, debug test cases, run clippy automatically, and check crate versions in Cargo.toml. Throw Copilot in there and let it generate mostly correct, repetitious code for you.


I had similar thoughts, about Rust being a good match for game engines but not games. Maybe it suggests Rust game engines might want to include an interpreter for some higher level language to actually do the gamedev in.

Rust is pretty good for writing PL interpreters (and similar tooling) too, actually.


I know you're not asking for recommendations, but Lisp, particularly SBCL, really seems to check all your boxes. I say this as someone who generally reaches for Scheme when it comes to Lisps too.

There are a few game engines[0] for CL, but most of them seem to be catered specifically to 2D games.

[0] https://github.com/CodyReichert/awesome-cl?tab=readme-ov-fil...


> a flexible enough programming structure, moving things around and changing your mind about the most desirable approach several times a day is common during heavy development phases.

That's the kind of code for which Rust-like languages shine. Rich type systems make it easy to change your mind about things and make large changes to your code with confidence.

(Whether Rust tooling is actually at a level to take advantage of that is another question)


> That's the kind of code for which Rust-like languages shine. Rich type systems make it easy to change your mind about things and make large changes to your code with confidence.

I don't think this is true. Rust makes it easy to get the refactor right (generally speaking, 100% right). But that's not what they're describing. They're describing the ability to make the refactor fast, even if it doesn't work correctly (in the formal sense of correctly). That is to say, memory leaks and race conditions and all sorts of horrible nastiness may be tolerable during the dev process in exchange for trying out an idea more quickly.

This is, of course, significantly more work at the end to patch up all of the things you did, but if you don't have to do the full work on 99/100 iterations, or got to try out more iterations because of the quick turnaround time, that would be considered a win here.


Pretty much every compiled language with a static typesystem has that "large-scale refactoring support" though. That's not Rust's USP; on the contrary, a too strongly typed language can make refactoring actually harder than it needs to be. The sweet spot is somewhere in the middle (where exactly is up for discussion, of course).


>Rich type systems make it easy to change your mind about things and make large changes to your code with confidence.

To be fair, they need to be able to make large changes with confidence because what would be small changes in other languages tend to end up being very large changes in rust like languages.


> Also, ECS is, IMHO, a useful pattern for some systems, but it is a pain in the butt to use with gameplay or UI code.

Not a game developer, but each time I tried to make one not using ECS (or something at least similar in spirit) I quickly found myself unable to proceed due to the sheer mess in the codebase.

How does one normally avoid that?


> Also, ECS is, IMHO, a useful pattern for some systems, but it is a pain in the butt to use with gameplay or UI code.

I'd love to see a language built around ECS. I wonder how nice it can be in a language syntax where ECS is the easiest thing you can do.
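FWIW the core of the pattern fits in a few lines; here's a toy struct-of-arrays sketch (not a real ECS library, just an illustration of what such a language would be sugaring over):

    // Entities are indices, components live in parallel columns, and a
    // "system" is just a loop over the columns it cares about.
    #[derive(Default)]
    struct World {
        positions: Vec<Option<[f32; 2]>>,
        velocities: Vec<Option<[f32; 2]>>,
    }

    impl World {
        fn spawn(&mut self) -> usize {
            self.positions.push(None);
            self.velocities.push(None);
            self.positions.len() - 1
        }
    }

    // The "movement system": runs over every entity that has both components.
    fn movement_system(world: &mut World, dt: f32) {
        for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
            if let (Some(p), Some(v)) = (pos, vel) {
                p[0] += v[0] * dt;
                p[1] += v[1] * dt;
            }
        }
    }

    fn main() {
        let mut world = World::default();
        let e = world.spawn();
        world.positions[e] = Some([0.0, 0.0]);
        world.velocities[e] = Some([1.0, 0.5]);
        movement_system(&mut world, 0.016);
        println!("{:?}", world.positions[e]);
    }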


> My priorities are reasonable performances and the fastest iteration time possible.

I bought Mount & Blade II Bannerlord in 2020-03-30. I love it to death, but come on...

  // 2024-02-01
  $ curl https://www.taleworlds.com/en/News/552 | grep "Fixed a crash that" | wc -l
  29

  // 2023-12-21
  $ curl https://www.taleworlds.com/en/News/549 | grep "Fixed a crash that" | wc -l
  6

  // 2023-12-14
  $ curl https://www.taleworlds.com/en/News/547 | grep "Fixed a crash that" | wc -l
  101
Maybe feeling like you're iterating fast isn't the same as getting to the destination faster.

Edit: Lol guys calm down with the down-vote party. I was counting crashes, not bugs:

  $ curl https://www.taleworlds.com/en/News/547 | grep "Fixed a bug that" | wc -l
  308
Does your C++ not crash, just theirs?


That game (currently) has 88% positive reviews on steam and a 77 metacritic score with over 15.5k people playing the game right now (according to steamcharts.com)

Thats a lot of happy customers.


I can't really comment on the quality of the game or experience or how buggy it feels because I've never played it, but I will say that counting fixed crash situations is a somewhat arbitrary and useless metric. If each of those crashes affected and was reported by a single person or even nobody because no regular person could really encounter it is a vastly different situation than if each of those crashes was experienced by even 1% of the users.

The criteria by which something is decided to mention in the patch notes is not always purely because the users care. Sometimes it's because the developers want to signal effort to user and/or upper management.

Maybe Mount and Blade was super buggy in the past and is still super buggy now, so all the crashes fixed are just an indicator of how large the problem is for them and how bad the code still is. I dunno, you didn't really give any information to help on that front.


Mount & Blade 2 was released very early and despite constant improvement (they keep patching it at a strong pace), it's only slowly evolving.

It was even downright unfinished on release, with many game systems that claimed to be doing something actually being simply unimplemented.

But despite all that it was and is still fairly playable and enjoyable, even at release. A game only needs a great core gameplay loop to succeed, even if large parts of it are completely broken.

Interestingly, Taleworlds make their own engine with fairly unique capabilities. 200 players can fight in fast paced, precise melee combat on a single server. Even more than in fast-paced shooters, it can be extremely frustrating for players when the game doesn't behave in exactly the way that you would expect (for example, standing undefended just a few centimeters away from the reach of an opponent's swing, or relying on interrupting their attack with your own landing 100 milliseconds before). They've made their own scripting language for everything related to policy. So this scripting language is what modders interact with. And it is absolutely atrocious as a language, but it serves the purpose well enough.


> If each of those crashes affected and was reported by a single person or even nobody

Then do you really think they'd be spending time fixing it?

(Actually, you know what, they probably would.)


That's why I had a paragraph mentioning different reasons things might be mentioned. I don't think it's uncommon to find a bug that could cause a crash while working on something else, confirm it does crash, and then fix it. If the culture is to mention those things in patch notes even if you're not sure it actually ever caused a user problem, then it will be listed.

That doesn't mean all, or even any, of the listed crashes were like that, but it does illustrate that it's hard to know what they actually mean without additional info.

(for what it's worth, I'm a long time Tarkov player, so I'm definitely familiar with buggy games and apparent development problems with rushing, so this is more a devil's advocate position on my part)


With Rust and the exact same iteration times, management, and deadlines, you end up with the same number of bugs, just as panic!()s instead. That's an improvement, sure, but it's fighting a symptom.


There are a bunch of useful clippy lints to completely disable most forms of panicking in CI. We use this at my work since a single panic could cost millions of $ in our case.
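For reference, the setup I believe they mean looks roughly like this (clippy's restriction lints; note they only catch explicit panic sites, so things like arithmetic overflow or allocation failure need separate handling):

    // At the crate root: make explicit panic sites a clippy error outside tests.
    #![cfg_attr(not(test), deny(clippy::unwrap_used, clippy::expect_used, clippy::panic))]

    fn parse_port(s: &str) -> u16 {
        // s.parse().unwrap()        // rejected by clippy::unwrap_used
        s.parse().unwrap_or(8080)    // forced to handle the failure case instead
    }

    fn main() {
        println!("{}", parse_port("not a number"));
    }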


With modern languages that take safety more seriously, it's a lot easier to spot places where the code 'goes wrong'.

In an older language, you have nothing to tell you whether you're about to dereference null:

   foo.bar.baz = ...;
Even if you've coded it 100% correctly, that line of code still looks the same as code which will segfault. You need to look elsewhere in the codebase to make sure the right instructions populated those fields at the right time. If I'm scrolling past, I'll slow down every time to think "Hey, will that crash?"

Compare that with more safety focused languages where you can see the null-dereferences on the page. Unwrap() or whatever it is in Rust. Since they're visually present, you can code fast by using the unsafe variants, come back later, and know that they won't be missed in a code review. You can literally grep for unsafe code to refactor.
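Concretely, the difference being pointed at, with a hypothetical config struct (the second form is what you grep for and flag in review):

    struct Config {
        timeout_ms: Option<u64>,
    }

    fn setup(cfg: &Config) {
        // In a null-happy language the risky access reads like any other field
        // access: timeout = cfg.timeout.value;  // may or may not blow up
        //
        // In Rust the potential crash is spelled out at the call site...
        let timeout = cfg.timeout_ms.unwrap(); // greppable; revisit before shipping
        println!("timeout = {timeout}");

        // ...or handled explicitly once you come back to clean it up.
        let timeout = cfg.timeout_ms.unwrap_or(5_000);
        println!("timeout = {timeout}");
    }

    fn main() {
        setup(&Config { timeout_ms: Some(250) });
    }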


I love Rust, but a crashing released game is better than a half-finished "perfect" game, or a game where you couldn't iterate quickly, and ended up with a perfectly tuned, unfun game.


> a crashing released game is better than a half-finished "perfect" game

For who? I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long.


> For who? I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long.

Evidence suggests otherwise. Of all demographics, gamers appear to be the most tolerant of buggy software.

I'm playing a 2020 game right now that has (in about 30 hours of gameplay):

1. Crashed twice

2. Froze once

3. Has at least ONE reproducible bug that a player would run into at least once every mission (including the first one).

Since this game is now so old it's not getting any more patches, these bugs are there for all eternity, because they just do not move the needle on enjoyment by the gamer.

Searching forums for Far Cry 5 Bugs gives results like this: https://www.reddit.com/r/farcry/comments/1ai4jzx/has_far_cry...

Gamers just don't care about bugs unless it stops them playing the game at all!

In order for bugs to have an effect on gamer enjoyment, it literally needs to make the game unplayable, and not just make the player reload from the last savepoint.


> Evidence suggests otherwise. Of all demographics, gamers appear to be the most tolerant of buggy software.

Evidence suggests otherwise. Of all demographics, game studios appear to be the most tolerant of buggy software. bladeblablabla

Just go look at CP2077 or BF2042 or Fallout 76 or ...

So many games out there that no one wanted to play until they finally actually made a game that was ready for release, a year or more after they released it.


> 1. Crashed twice 2. Froze once 3. Has at least ONE reproducible bug that a player would run into at least once every mission (including the first one).

Sounds about on par even for enterprise software, in cases where shipping quickly is prioritized over overall quality, doubly so for gamedev which is notorious for long hours and scope creep.


The problem is we would have a lot less games and the games we would get would not be as fun. Rust appears to have the following problems:

1) As the article pointed out, game developers are less productive in Rust. This is a huge problem.

2) Game budgets are not going to get bigger. This means that if Rust reduces productivity, games written in Rust are going to be less polished, less fun, etc.

3) Game quality is already fine. 99% of the games I play have very few noticeable bugs (I play on an Xbox Series X). Even the games with bugs are still fun.

Basically, gamers are looking for fun games which work well. They are not looking for perfect software which has no bugs.


> As the article pointed out, game developers are less productive in Rust. This is a huge problem.

I don't think it's limited to just game developers though. Unless you are writing something in which any GC time other than 0ns is a dealbreaker, and any bug is also a dealbreaker, you're going to be less productive in Rust than almost any other language.


Oh, come on, we're yet again extrapolating from "Rust is bad at rapid iteration on an indie game" to "Rust is bad at everything". If Rust were really that astoundingly unproductive of a language, then so many developers at organizations big and small wouldn't be using it. Our industry may be irrational at times, but it's not that irrational.


> Oh, come on, we're yet again extrapolating from "Rust is bad at rapid iteration on an indie game" to "Rust is bad at everything".

I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

I didn't think that you are disputing this claim; if you are disputing this, I'd like to know why you think otherwise.


> I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

It depends what you measure

For software that must get it right, Rust can be more productive. The early cycles of development are slow, especially for people who have not surrendered to the borrow checker yet. But the lack of simple mistakes, or more accurately the compiler's early detection of simple mistakes, dramatically speeds up development.

But in a lot of software those mistakes, whilst important, will not "crash the aeroplane", so it is not worth that extra cost in the early cycles.

I am not a game developer, or player, but games are in that category I think


> I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

That's not what you said: you said you're going to be less productive in Rust than nearly any other language, not "mainstream GC'd languages".

> I didn't think that you are disputing this claim; if you are disputing this, I'd like to know why you think otherwise.

Depending on the domain, I am disputing that, because of things like the Cargo ecosystem, easy parallelism, ease of interop with native code, etc. There is no equivalent to wgpu in other languages, for example.


> That's not what you said: you said you're going to be less productive in Rust than nearly any other language, not "mainstream GC'd languages".

I feel that you're selectively reading only what you have talking points to respond to.

Here is exactly what I said:

> Unless you are writing something in which any GC time other than 0ns is a dealbreaker, and any bug is also a dealbreaker, you're going to be less productive in Rust than almost any other language.

I mean, I literally carved out an exception use-case for Rust; viz for software that can't handle GC.

I wrote a single sentence with a single point, not a single point diluted over multiple paragraphs. You have to literally read only half-sentences to interpret my point the way you did.

If you aren't going to even bother reading full sentences, why bother engaging at all?


Would "you're going to be less productive in Rust than nearly any other language unless GC time or any bug are dealbreakers" be a fair summary of what you mean?

Either way, I fully disagree with that. Many more traits of Rust may make it a better choice even if the low productivity claim was true:

- integration with other languages - I know of companies successfully developing a single Rust library and just using thin wrappers for other languages they need support for

- data races detected at compile time - in highly concurrent applications being able to catch data races at compile time is huge. Please take a look at a blog post from the Uber team[1]. A dedicated team investigated 1100 data race occurrences. Data races may lead to bugs that are a PR nightmare for companies, like a bug in GitHub that sometimes resulted in a user being logged in to an account of another user[2].

- Embedded systems

- WASM - there are not that many languages that natively compile to WASM and have good tooling around it. For most GCed languages you have to go for "close enough" alternatives like TinyGo or AssemblyScript or use tools that bundle an entire interpreter in a WASM binary

But even outside these categories, I don't think it's universally true Rust is less productive than alternatives and my experience shows me otherwise. For example, in many domains, you don't care about the borrow checker and lifetimes almost at all. Take a look at a Todo Backend[3] I wrote in Rust[4]. If you take a look at one of the Go implementations of the same thing, you wouldn't probably see much of a difference because of the nature of web backends: you get some data in, you process the data, usually making some database queries, you return some data (or not).

What with stateful applications without a database, though? Surely that must be hell? Even here it's not as black and white as you would like to see it. When I was working at Hopin (once upon a time a unicorn startup scaling extremely fast) we had to implement a presence server - a service holding information on who is online and what event they're attending, which video they're watching etc. Nothing too complex, but we had a requirement to hold up to 100k open connections, and at the time we didn't have any infrastructure for that (most of the stack was Node.js and Rails). Someone wrote a proof of concept in Go using Redis as a backend with a queue and using Redis for leader election with a big caveat - each of the nodes had to process all of the queue items, so we were limited by a size and processing speed of a single Redis node.

When the time came to implement the production version I said: let's treat the application as a database. We cared only about current data. If the application failed, we could restart and clients would reconnect. If we wanted to have a history of presence we could push all of the events to Kafka or another queue, but still mostly use in-memory data for real-time needs.

I had some Rust exposure before, but it was my first production app. I was also joined by a person who had never written Rust before. In two weeks we had a working application, while I was also making sure the other programmer coded as much as possible and doing a lot of pair programming. We deployed it shortly after. Then we added a few more features in the next two weeks or so.

The code was extremely simple - more or less a few hashes behind a WebSocket based API. As all of the data lived through the entire lifetime of the application, we didn't have to care about the borrow checker or lifetimes. We had actor-like code - a few threads, each thread holding a data structure, and a few channels that send commands. We were moved to other projects, so the presence server became unmaintained, and even then it worked without any issues whatsoever for the next half a year or so. Then there was a big push to scale all of the services to handle a minimum of 500k concurrent users, ideally a million. The Rust app needed almost no changes; after some kernel and load balancer tune-up, it could handle up to 2 million connections frequently sending events on a single machine. If we wanted to, we could easily shard it, but there was no need.

The push to go more into real-time features was deprioritized by then, though, so the management said the app has to be rewritten to Node.js. There was one try to do that, which failed after two months or so. This is not to say you can't make an application like that in Node.js. You can, but you can't use the same architecture, cause you can't multithread Node.js applications, thus you have to run multiple processes, thus you have to have some kind of a database or a queue or a service you use (at the time they tried using one of the Pusher-like services, cause they didn't want to handle WebSocket connections themselves).

But even outside of specific examples like that - in my experience, I don't feel less productive in Rust when it comes to writing production-level applications, not necessarily critical or with wild performance needs. It's subjective, of course, but I agree with @pcwalton - if Rust was universally not productive, I don't believe so many companies would be using it.

One last thing to consider is the expressiveness of the language. In many languages, like Go, it's hard to make certain abstractions that are not a burden to use. Even after they introduced generics, most of the ecosystem is still using `interface {}` all over the place and projects like Kubernetes implement their own dynamic runtime type system. Recently I've been working on a load-testing tool running scenarios as WASM binaries called Crows[5] and one of the abstractions I've created is an RPC client that can send requests in both directions. At the code level, you use it like many RPC libraries in higher-level languages. You define your interface[6] and then you can call it like it was a regular local method[7], which is huge when developing code, especially in an editor with LSP, cause it will show you what methods you can call and what arguments they take. What's more, any typo would be caught at compile time as the server and the client share the same interface. In Go even the official RPC client is like `client.Call("TimeServer.GiveServerTime", args, &reply)`, which can't be type checked as far as I know. I think the ability to create these kinds of APIs that prevent you from doing the wrong thing is a huge advantage of the language.

  1. https://www.uber.com/en-DE/blog/data-race-patterns-in-go/
  2. https://github.blog/2021-03-08-github-security-update-a-bug-related-to-handling-of-authenticated-sessions/
  3. https://todobackend.com/
  4. https://github.com/drogus/todo-backend/blob/main/src/main.rs#L138-L151
  5. https://github.com/drogus/crows
  6. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/utils/src/services/mod.rs#L94-L105
  7. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/coordinator/src/main.rs#L80


Have you written much Rust?


Uhh, no, the games we got 15 years ago and before were definitely just as fun.


Hell no. Lots of these games take 5-7 years to make. You want to turn that into 10-14? I can live with the rare crash bugs.


What if it's 5-7, but only after there is a deep enough dev pool and language tooling to address some of the productivity issues mentioned in the blog? Why make up arbitrary x2 factors?


IDK, seems to me like studios did just fine putting release-quality games out at release 15-20 years ago shrug

"rare" LOL


No, the game doesn’t take twice as long. It just gets abandoned half-finished.

The world is full of half-finished games, it takes time and money to push to a finish.


Ah right that's why no games existed two decades ago.


It's a chicken-egg problem. You won't even see 10% of the bugs lurking in your game without releasing it to a wider audience, no matter how long you worked on it or how good your QA process is (that's what Steam's Early Access is for after all). YMMV depending on the complexity of the game of course.

But even if your game code is perfect and completely bug free, there are so many weird PC configs and buggy drivers in the wild that your game will crash for some users. And for the affected users it doesn't matter whether that crash is caused by crappy game code, or some crappy 3rd party software interfering with your game. For the user it's always the game's fault ;)


> You won't even see 10% of the bugs lurking in your game without releasing it to a wider audience, no matter how long you worked on it or how good your QA process is (that's what Steam's Early Access is for after all).

Just because they like to say that doesn't mean it's true. I've had access to see the list of known issues considered "critical" around release time for a few games. They know the bug exists, they just want to release it more than they want to fix it.

> But even if your game code is perfect and completely bug free, there are so many weird PC configs and buggy drivers in the wild that your game will crash for some users.

Which in no way invalidates the point that most modern games are absolutely unplayable for most users at release.

Oh yeah, and also that's why beta testing exists


Perfect is the enemy of good. You never release anything that's perfect.

Perfect is impossible.


> "perfect"

> perfect

See the difference?


> I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long

I have recently completed Cyberpunk Phantom Liberty. The game crashed 4-5 times during 100-150 hours of gameplay. The crashes were pretty much painless because I quick save often.

The game was amazing.

The development of the game started in 2012, 12 years ago. I’m not sure you or most gamers would rather have a fully-finished "perfect" Cyberpunk 2077 released in 2036.


> 4-5 times during 100-150 hours of gameplay

Great, thanks for proving my point! If you had played CP at release, how many times would it have crashed?

Do you really think it would have taken them another 12 years to get to the point they're at now if they hadn't released it 4 years ago? SMH


Photoshop does crash. Trust me, if you do enough image editing you'll know it's not even a super rare event. They're generally doing a poor job handling the situations where you don't have enough storage or RAM.

It didn't stop Adobe from being worth 200B.


Hard to know what TaleWorlds are actually optimising for because half the features of Bannerlord feel like they’ve never been played by a dev let alone iterated on.


How many of those crashes were caused by memory safety issues though?

A lot of those crashes might simply be called a "panic" in Rust.


And yet the fact that Bannerlord game logic is entirely in C# makes this possible:

https://github.com/int19h/Bannerlord.CSharp.Scripting

which in turn makes it a lot easier and more convenient to mod. Try that with Rust...


Yeah this is a common problem in the industry, we rarely have enough time to refactor what should be considered prototype-level code into robust code.


The game dev industry could form a consortium to launch its own dedicated general-purpose language built from scratch to compile very fast like V or Go, run predictably, be much safer, be more reusable, and be extremely productive with the lessons learned from C, C++, C#, and more.

Also, I think LLMs will be able to run against code bases to suggest mass codemods to clean things up rather than having humans make a zillion changes or refactoring fragile areas of tech debt. LLMs are already being applied to generate test cases.


Jonathan Blow’s Jai is an attempt at something like this. It’s looking promising so far!


Interesting. I went through the primer spec. It appears to be a different kind of D or Go with some key differences. Any new language should begin with a clear thesis of the specific competitive advantages it offers and the problems it solves over existing customary and alternative tools. Jai appears to fulfill this property, so that's a good sign.


> It is still in development and as of yet is unavailable to the general public.

Is it still the case?


C# is that language (see Godot, Stride, FNA, Monogame).


Not really, it was adopted. It originated from Microsoft as their post-J++ Java alternative for CLR for the purposes of making it easier to write banking server software and Windows apps.


Does it matter what it was 20 years ago? It is the go-to language for gamedev today and only keeps getting better at it.


Both things can be true. I'm saying it wasn't designed as such. I don't know what you're arguing about.


I believe that better tooling can help, yes. With refactoring, debugging, creating performance and style reports, updating documentation and a ton of other stuff.


This comment is nonsense


My impression is that this is due to their non-robust programming style. They do not add fallback behavior when e.g. receiving a null object. It would still be a bug, but could be a log entry instead of crash.


> My impression is that this is due to their non-robust programming style.

It's been 50+ years. I don't think that it's worthwhile just telling the programmer to do a better job.

> They do not add fallback behavior when e.g. receiving a null object. It would still be a bug, but could be a log entry instead of crash.

This is a pretty big feedback loop:

  * The programmer puts the null into the code
  * The code is released
  * The right conditions occur and the player triggers it
  * IF DONE SKILLFULLY AND CORRECTLY the game is able to recover from the null-dereference, write it out to a log, and get that log back to the developers.
  * The programmer takes the null out of the code.
If you don't do the first step, you don't get stuck doing the others either.


50+ years and people still fail to grasp this.

You have to put something there (an optional, or a default-constructed object in a useless state), and all you've done is skip the null check. In the case of an optional, you've introduced a stack unwind or a panic. Everything else stayed the same. Maybe that default even deleted the hard drive instead of crashing.

Coding is hard. "just don't code" is not the answer. You can avoid something, that doesn't mean it won't show up in some other fashion.


Again, if you disallow unwrapping and panicking at the CI level, you actually force your developers to properly handle these situations.


> You have to put something (an optional, or a default constructed object in a useless state)

No, you really don't. There is no default number, no default string, no default piece of legislation, no default function.
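In Rust terms, a minimal made-up sketch of that point: with Option there is nothing to default to, and the compiler makes the caller spell out the missing case instead of crashing on a null or inventing a default:

  // Made-up lookup; returns None instead of a null or a "default law".
  fn find_law(code: u32) -> Option<&'static str> {
      match code {
          42 => Some("Answer Act"),
          _ => None,
      }
  }

  fn main() {
      match find_law(7) {
          Some(law) => println!("found {law}"),
          None => println!("no such law; log it and move on"), // explicit, not a crash
      }
  }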


Arbitrary recovery to null pointers isn't a good way to do robust programming. I recommend doing the exact opposite actually.

https://en.wikipedia.org/wiki/Crash-only_software

https://medium.com/@vamsimokari/erlang-let-it-crash-philosop...


A crash of an actor in BEAM is incomparable to a crash of a video game.


Is it? Is there no reasonable case where you have a subsystem in a game crash, then restart itself? Unless I'm mistaken, I've experienced this myself in video games more than once. Anything beats a full crash with a pointless error message.


I feel like a lot of people on HN think making a game is like making a web service or a GUI application. Yes, this behavior is used in video games sometimes, but "restart itself" often means reloading a save file or something similar.


But if your video game uses a DSL for actors then you can do it in the DSL, which avoids special arbitrary bug-hiding behavior.


I dare you to board a plane whose software was written that way.


> Rust gamedev ecosystem lives on hype

I've been saying this for years. I've tried to get into Rust multiple times the past few years and one of the things I've tried was gamedev with Rust (specifically the library ggez when it was still being worked on, and a little bit of Bevy). I admittedly never got far, but I gave it a solid shot.

My experience was instantly terrible. Slow compile times and iterations, huge package downloads (my project folder was roughly 1gb for a simple 2D project), and of course Rust itself was difficult to get into with lifetimes and having to wrap and unwrap my variables constantly and getting into wrestling matches with the borrow checker.

I kept telling myself that everyone loves Rust and the community loves to rave about anything Rust-related, and maybe I just don't get it, but it took some time to realize that no... it's just a terrible choice for this. I even tried to make UI with eGUI and was still miserable. Rust is a systems programming language, but the community is trying to convince everyone it should be used for general purpose stuff.

And my other biggest problem is that they keep painting other non-Rust things as being fundamentally flawed for not being Rust. "It's not memory safe" is the biggest one thrown around, but when was the last time memory safety was actually a big problem in games? Unity uses C# which is garbage collected, Godot uses its own scripting language which makes it nigh impossible to leak memory, Unreal AFAIK has its own tools that makes memory management trivial. Rust game development feels like a solution looking for a problem to fix.

I am curious about Bevy when it becomes mature and has its own editor, but for now I'm just not convinced gamedev with Rust will ever take off.


> And my other biggest problem is that they keep painting other non-Rust things as being fundamentally flawed for not being Rust. "It's not memory safe" is the biggest one thrown around, but when was the last time memory safety was actually a big problem in games? Unity uses C# which is garbage collected, Godot uses its own scripting language which makes it nigh impossible to leak memory, Unreal AFAIK has its own tools that makes memory management trivial. Rust game development feels like a solution looking for a problem to fix.

Memory safety may or may not be important in games, but the ability of engines like Bevy to analyze system dependencies and automatically scale to multiple CPUs is a big deal. Job queuing systems have been popular in gamedev for a very long time, and Rust's insistence on explicit declaration of mutability is a big part of the reason that "just works" in Bevy.


> but the ability of engines like Bevy to analyze system dependencies and automatically scale to multiple CPUs is a big deal

Is it? The article addresses that, and basically calls it a pointless feature that is almost never used; when it is, the benefits are mostly lost because of real-world needs and constraints, and the problems it solves are more easily solved through other well-understood solutions and add-on systems.

I think this might be a case where explaining the real-world benefit instead of the theoretical benefit is needed, if only to counter what are very pointed criticisms that are definitely deeper than at the theoretical level.


Here's a trace of a Bevy demo: https://i.imgur.com/oXUxC2h.png

You can see that all the CPUs are being maxed out. This actually does result in significant FPS increases. Does it matter for every game? No. But it does result in better performance!


>> but the ability of engines like Bevy to analyze system dependencies and automatically scale to multiple CPUs is a big deal

>> Is it? The article addresses that, and basically calls it a pointless feature

> You can see that all the CPUs are being maxed out.

You're missing the forest for the trees - the poster above basically said "seeing all the CPUs being maxed out is a pointless feature" and you reply with "but see, all the CPUs are being maxed out".

You're literally ignoring the complaint and replying with marketing.


No, the original article said that you don't get parallelism from Bevy in practice:

> Unfortunately, after all the work that one has to put into ordering their systems it's not like there is going to be much left to parallelize. And in practice, what little one might gain from this will amount to parallelizing a purely data driven system that could've been done trivially with data parallelism using rayon.

It's not saying "yes, you get parallelism, but I don't need the performance"; it's claiming that in practice you don't get (system-level) parallelism at all. That's at odds with my experience.
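For context, the "data parallelism using rayon" that the article contrasts with system-level parallelism looks roughly like this minimal sketch (made-up workload):

  use rayon::prelude::*;

  // One homogeneous pass split across cores by rayon, as opposed to Bevy
  // scheduling whole systems in parallel.
  fn main() {
      let mut positions: Vec<(f32, f32)> = vec![(0.0, 0.0); 100_000];
      let dt = 1.0_f32 / 60.0;
      positions.par_iter_mut().for_each(|p| {
          p.0 += 1.0 * dt;
          p.1 += 0.5 * dt;
      });
      println!("{:?}", positions[0]);
  }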


The article is not saying that Bevy does not parallelize, but that the unpredictability of parallelism (both in ordering and in timing) forces the developer to add enough dependency constraints that there is not much left to parallelize.


The fact that 100% of the CPU is being used, and multiple systems are executing in parallel, shows otherwise.


Both I and the author agree with that, but it was not the point:

> the unpredictability of parallelism (both in ordering and in timing) forces the developer to add [...] dependency constraints

The fact that it is possible to make a benchmark without hitting this problem does nothing to prove that bigger games can avoid it too.

Hitting 100% CPU means nothing unless those cores are doing things you actually want them to do.


To be fair, you've posted a toy example. Real games are often chains of dependent systems, and as complexity increases, clean threading opportunities decrease.

So, while yes it's nice in theory, in practice it often doesn't add as much performance as you'd expect.


The problem is that most gameplay code is linear, and people have already gotten good at splitting parallel work across threads. Serious physics engines (see Jolt) are already designed to run on another thread and distribute the work across multiple cores. The main part of the graphics driver when using OpenGL or Vulkan runs on another thread, and the API you call just passes data to it. Rust's parallelism hasn't proven to be faster than C/C++, let alone less annoying to achieve.


Among those who have tried both, the idea that parallelism is as easy to achieve in C/C++ as it is in Rust is very much a minority view. There's a reason why nobody tried to parallelize CSS styling in a production browser before Stylo came along.


I'm talking about games specifically. I don't know much about the needs of web browsers.


I've parallelized emulators in C++ and work on parallel parts of Bevy now, which is probably the closest you're going to get to someone who has worked on parallelizing parts of large game engines in both C++ and Rust. It was far easier in Rust.


Ah, you mean specifically parallellizing a complex non-parallel system? Not designing a parallel system from scratch?


The context of this article, and my comment, is game development. Not game performance or engine optimization, which, while related, are smaller aspects of the overall topic.

The way I interpret the claims is that bevy is putting far too much focus on performance and multi-threading when by far the important thing to focus on for game development is allowing the actual game developers to rapidly iterate.

Bevy might be very fast and performant, but if that seems to have come at the cost of (or been optimized for over) features that make it easier to use in ways that game developers need, then the criticism may have merit. Whether that's true or not I don't know, but hopefully that explains why a response about how it definitely can use lots of threads and make good use of many cores isn't really seen as a good rebuttal to the criticisms leveled.


That makes no difference if the game is boring.


You could say that about any game engine. Are you suggesting Bevy should try and not optimise performance because perf and fun are not correlated?


I'm saying that making it easy to experiment with different gameplay mechanics is far more important than making it the most efficient. Even more so in case of small studios.


It's a good feature, but still a niche one. It's a bit like choosing Unity only because of DOTS. For a few projects perhaps it makes sense. But just a few.


Nobody said that every game needs that level of performance. But saying that it's a solution looking for a problem is not true.

I'm fully in favor of having Bevy support dynamic languages, as implemented in for example bevy_mod_scripting [1], for projects that don't need that parallel performance.

[1]: https://github.com/makspll/bevy_mod_scripting


Good scripting support is probably the way to go for Rust game development anyway in order to achieve high iteration/idea testing velocity. We could have a script engine that memory-manages various in-game objects and the scripts call into Rust functions to do the heavy lifting. Those Rust functions will typically take things by reference from the script engine so that memory-management is mostly a non-issue.
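A minimal sketch of that split, assuming the mlua crate for the scripting side (the exposed function and the script itself are made up):

  use mlua::{Lua, Result};

  fn main() -> Result<()> {
      let lua = Lua::new();

      // The heavy lifting stays in Rust...
      let distance = lua.create_function(|_, (ax, ay, bx, by): (f64, f64, f64, f64)| {
          Ok(((ax - bx).powi(2) + (ay - by).powi(2)).sqrt())
      })?;
      lua.globals().set("distance", distance)?;

      // ...while the gameplay logic lives in a script you can tweak and reload freely.
      lua.load(r#"print("player is " .. distance(0, 0, 3, 4) .. " units from the goal")"#)
          .exec()?;
      Ok(())
  }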


> automatically scale to multiple CPUs

We've been promised automatic CPU scaling in programming languages since at least 2001, and I've yet to see any practical version of it.


I'm a Rust fan (mostly for embedded firmware with minimal deps), but even after 10 years of playing with the language it's not clear to me that advanced GUI or gamedev fits well with the borrow checker. It requires a significant paradigm shift in architecture, and I'm not convinced it's worth making that shift, especially if your application can tolerate a garbage collector (which many games and most UI apps can).


https://dioxuslabs.com/blog/release-050

Seems promising, very React-esque with little boilerplate


Development speed is many times lower than with Typescript frameworks, while the result is not faster or significantly more stable.

Why should anyone choose Dioxus over SvelteKit, Next or Nuxt? I never had an issue with a frontend app that the borrow checker would have caught. Error handling was an issue some years ago but is solved by now when using one of those modern frameworks. (I don't know if Dioxus has error boundaries, though.)

Those Rust fullstack frameworks make sense only for people wanting to use Rust, not for people looking for the right tool for the job.


Idk maybe when you can’t target web.


I hope Rust does gain mature options for its GUI ecosystem, but the author of the article makes a very good point that in other languages, there would be mature options in use already. "Seems promising" is too little, too late.


For sure! I would not write a game in Rust in 2024.


Many games are written in Rust already, but they are not AAA games. Example: https://bfnightly.bracketproductions.com/chapter_0.html


Agreed. Multiple languages exist. They can be part of {your, your team's} toolbox for different specific purposes. Some languages are set by other tools or by team members' backgrounds. Popularity also lends itself to greater availability of tools and Q&A forums. In the end, it's a better decision-making process to select what is most likely to be long-term productive for a specific project and team.


This might be controversial, but "Safety" and "Speed", in the same ecosystem, are not free. The cost is heavy syntax and a steep cognitive climb. Why Rust was ever sold as a language for the masses is beyond me. A safe, fast, hard language is something you use for operating systems, aircraft, etc.

I adore Rust because it does all the things I remember being told to do in C, but without me remembering to do them: Error codes from all functions, Ownership models, etc. But those are not good reasons for me to use it for anything I wouldn't use C for.


It's possible to write unsafe and slow code in Rust.


> Rust game development feels like a solution looking for a problem to fix.

The same can be said for ordinary CRUD backends. Java, C#, Go and TypeScript (Node, Deno or Bun) are all memory safe with good type systems and more than good enough performance. Evangelism around Rust is unfortunately still a thing. A good example is the latest hype in the community because some Google manager said at a Rust conference that writing Rust is as fast as writing Go. Anyone who has done more than a toy program in Rust and Go knows how wrong this statement is. The reasons are given in the article.


When a single bug may cost millions of dollars, Rust is cheaper and faster than Go. The Google manager is not a liar.


This is not necessarily a bad thing, especially given that Rust is an immediate upgrade with no downsides when moving away from C or C++. You can see it in how people never want to go back, which also means getting companies and products to adopt it, as you would otherwise be forced by the market to work with inferior tools.

As a counterexample, .NET suffers a lot from the lack of evangelism - a big chunk of the community that started out back in the .NET Framework days still thinks of it as poorly as people outside the ecosystem do, because they never bothered to drop old and obsolete tools and targets and give the new versions a proper try (the code is often vastly simplified and performance is vastly better).

Other programming languages, not only Rust, also do better at self promotion - take for example Go that managed to convince everyone to put it in the same bucket as Rust (which, personally, I find absolutely insulting as C# is a much closer alternative to Rust both in performance, features and access to low-level bits).


I mean, if we are allowed to lie in order to promote Rust, why don't we just smear all the C/C++ code bases in the world as security hazard needed to be sorted out ASAP?

Unless we already do...


I doubt security is what everyone is concerned with; it's rather the quality of tooling and developer experience.

It is, of course, difficult to convey to developers who have only experienced C and C++ build systems, or Ruby tooling and brittleness, or the Python way of managing dependencies, or setting up packaging in Java, that fast, easy-to-use solutions do not come from trade-offs but simply from better ways of doing things - using cargo and Rust or dotnet and C# is a night-and-day difference compared to the options listed above.

I said it here in the past and will say it again: it's not that Rust (or .NET for that matter) is that good; it's that a lot of other popular languages and platforms are that bad at one aspect or another (or many at the same time), which makes it sufficiently painful that you never tolerate a downgrade once you've worked with a tool that offers a better all-around experience.


I value good tooling as much as the next software engineer. We have good IDEs, build systems, package managers in Java and .NET lands; but we also have a decent environment of established, well-maintained libraries and frameworks.

Rust is deemed to have good tooling, but the third-party library ecosystem is following the NPM/RubyGems culture with all the fragmented dependencies, plus the added complexity of compile times due to lack of ABI compatibility.

Meanwhile, monolithic projects like Tokio also keep strengthening their reign among the small peasant crates.

I'm learning Rust, after decades of various languages with garbage collector, and I believe in the language itself and its tooling. But everything else about Rust irks me.


Just a small addition: Godot also has great C# support. It is a real charm to work with.


The godot-rust project crates take a minor amount of adaptation to understand how it exposes the Godot object system in Rust but it's also pretty well developed.


Nice to hear! Really want to get into Rust at some point. This might be a nice segue into it


Last time I tried Godot with C# in Visual Studio, when I debugged I could not see the console output, and when I ran with the console output I could not debug (the breakpoints weren't hit). A Google search later, it turned out it wasn't just me.


Godot C# works pretty seamlessly with VSCode and has improved dramatically over the years. It did regress a bit in Godot 4 after swapping to the newer .net "core" (in terms of platform support) but as of 4.2 I have had no issues at all.


How long ago was that? I only started with the most recent version of godot and it all works as expected.

However, I am also using Rider.


We're doing more and more of our back-end work with Rust. The main reason is the performance it provides. It's not just great for our end-users, it's also much cheaper in the modern world where we pay per mileage in the cloud. Part of what we really like about Rust, however, is actually the variable ownership, because it makes it very straightforward to enforce and control data integrity and avoid race conditions. Even for programmers who would struggle to do so in C or C++.

I'm not sure whether or not that's even useful in game development. I've never done any form of game development beyond some Chess game I programmed in my first year of CS 30 years ago. But I'm actually really curious as to why you've struggled with variable ownership, because I'd frankly like to improve our on-boarding processes even more for new hires.

> my other biggest problem is that they keep painting other non-Rust things as being fundamentally flawed for not being Rust

Rust has a cult and it's best not to pay too much attention to it. Don't get me wrong, we're seeing great benefit in not just using Rust over C/C++ but also replacing more and more of our C# and Python services with it, but it's a very immature language and like any other programming language it's still just a tool. If it works for you, use it, if not... Well, use something that does.


> C/C++

is not something that exists in the real world.


Now I'm wondering how far people could go with a hypothetical RustScript* that transpiles to Rust (or hooks into rustc?), introduces extra features such as reflection, removes lifetimes, and changes the defaults around things like monomorphization.

* name intentionally made to make people angry


If you're removing lifetimes from the script, I'm not sure how you're then transpiling to Rust, unless you wrap everything with reference counting, at which point you're better off using a language with GC.


I would assume that it would opportunistically try to run the borrow checker, and if it fails on the access of a specific field, turn that access into an Arc/Rc everywhere, leaving any other access as references. This leaves you with invisible performance cliffs, where accessing a field in a new place suddenly increases the cost of accessing it everywhere else, but it does give you the "just do what I want, damn it!" development experience. I doubt Rust itself could do that without alienating its current userbase, but a RustScript could.
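A sketch of what that fallback might produce (entirely hypothetical, since no such transpiler exists): a field that fails the borrow check gets rewritten as shared, reference-counted state everywhere it is used.

  use std::cell::RefCell;
  use std::rc::Rc;

  // Hypothetical transpiler output: `score` was `u32` until a second mutable
  // access appeared somewhere, then the fallback wrapped it for the whole program.
  struct Player {
      score: Rc<RefCell<u32>>,
  }

  fn award(p: &Player) {
      // Every access now pays the Rc/RefCell cost, even the ones that used to borrow-check.
      *p.score.borrow_mut() += 10;
  }

  fn main() {
      let p = Player { score: Rc::new(RefCell::new(0)) };
      award(&p);
      println!("score = {}", p.score.borrow());
  }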


Thanks for the reply, it's an interesting idea.



Rust ain't Go but anything Go has can be used as an argument that Rust should try to do better in certain areas. ;)

Perhaps learn another language like Haskell, Swift, or Kotlin before Rust.

Get cargo-bloat, cargo-cache, and cargo-outdated.

Set up a memcached server and use sccache to accelerate Rust, C, and C++ compilations. It's not 100%, but it's pretty awesome for things compiled at a stable build location.

Just like any platform, avoid dependencies wherever possible and use minimal crate features. Some Rust crates have an npm-like problem of dragging in zillions of dependencies.


> but when was the last time memory safety was actually a big problem in games? Unity uses C# which is garbage collected, Godot uses its own scripting language which makes it nigh impossible to leak memory, Unreal AFAIK has its own tools that makes memory management trivial.

So.... Sounds like memory safety is indeed a problem? Otherwise why do so many solutions exist for it?

Yeah, Rust definitely is not the only solution, or perhaps not even a good solution to this problem in the context of game development. But let's not pretend the problem itself doesn't exist?


> So.... Sounds like memory safety is indeed a problem? Otherwise why do so many solutions exist for it?

Memory safety and memory management are different things. Scripting languages remove the burden of manual memory management; as a side effect, they also tend to be memory safe, but that hasn't been the main motivation.


As I said in my own comment down thread, despite being a huge rust advocate, I sincerely agree with you here.

Rust is not a good language for actually writing games, and the fact that it is being sold as such is really detrimental to it in my opinion, because it is holding the ecosystem back. Rust is being pushed as a language for game logic, so people try it out, realize it isn't very good at that, and give up on Rust in the game development industry altogether and leave, understandably! If Rust were more strategically positioned, it could get a lot farther. Where it should be focusing in the games industry is on game engines, where flexibility, quick iteration, easy prototyping, and being able to just reach out and directly touch and control things aren't as important, but where concerns like the clarity and maintainability of the code base, stability of the software, resource ownership and management, and eking out every ounce of performance all become important, and so the type system and static analysis guarantees of Rust are actually useful.

This is where, I'm disappointed to say, I think things like Bevy and Amethyst have severely hurt the Rust game development ecosystem. They aren't really game engines in the traditional sense; they are more like game frameworks like Love2D, except written in Rust: they force you to statically link your game code to the engine code and write your game logic in the same language your engine is written in. This means that game developers who just want to quickly prototype game mechanics and iterate on them in order to refine them are forced to use a language that is far too focused on correctness, safety, static verifiability, and concerns like that to actually be usable for that purpose. Worse, it forces them to compile their game logic and the entire engine and link them together in order to build and test their actual game, massively increasing the weight of the process and basically ruling out hot reloading or making your game independent of any specific version of the engine, or its license. It puts them between a rock and a hard place: use some other ecosystem, or use a language that is simply unsuitable for game development.

I think the far better solution (one which I plan to very slowly feel out with my embryo engine project, which is born out of my frustration of looking at the existing rust game engines and feeling like they are all kind of lying about what they are) would be to stop with the vaporware and the hype with Bevy and Amethyst and such, and actually build a proper game engine, like they are promising to be but are not, that is its own separate pre-compiled executable that game developers don't even need to mess with at all, that picks up game assets and game code written in a more flexible, dynamic, language that's better for prototyping, and runs them, something like what Unity or Godot or even Gamebryo do. Only then will the rust game development ecosystem take off, because it will no longer be forcing a language that just isn't good for that on to people.


But people want to write Rust and a game seems like a fun way to do it. They can already use Godot or Unity with this approach.


Garbage collection causes performance issues.


I've done hobby gamedev in Bevy/Rust, Godot/C#, and Unity C#.

It's honestly somewhat baffling to me that folks will choose Rust for gamedev right now. The state of the open-source tools is just not there yet, especially when compared to Godot, and at the same time these games are running on PC hardware which tends to get faster every year.

Also for ECS... one thing I came to realize is that when developing a game, pigeonholing everything into an ECS can seriously get in the way. A lot of (efficiently written) game code is best handled in an infrequent, event-driven way.

An ECS backed game engine like Bevy can make big mobs more efficient, but few games will actually leverage this effectively for fun gameplay and at the same time modern PCs are fast as hell.

I think about Starcraft from 1998, created when virtually all PCs only had one core, and its 200 unit per faction cap. Blizzard hasn't increased this cap because it doesn't necessarily make the game more fun. Now should a gamedev today, 26 years later, making a 2d isometric game for the PC be worried about performant multithreading????


> I think about Starcraft from 1998, created when virtually all PCs only had one core, and its 200 unit per faction cap. Blizzard hasn't increased this cap because it doesn't necessarily make the game more fun.

Ah... Starcraft. It's 200 supply per player (hero units take 0 supply, zerglings are 0.5, and the supply cost goes up to 8 for battlecruisers for example). The limit is enforced when building a unit from a building. Map triggers can grant units and you can exceed the 200 supply limit.

The technical unit limit for the map was 1700, and was later in fact extended to 3400 by Blizzard. The EUD emulator (part of the official SC Remastered) allows for online custom games to be played without any third party tools on the player's part. Certain limits like sprites can be bypassed with this tool (for map makers) https://github.com/phu54321/euddraft/blob/master/plugins/unl...

EUD started out as a buffer overflow exploit which allowed custom maps to patch the game client's code. It was later fixed by blizzard but re-implemented as an emulator (with some restrictions).

These are definitely things that enhance gameplay for custom scenarios. https://youtu.be/HEv_U9WV4PA?t=1541 (yes, that is a battlecruiser shooting nukes)


Complete distraction of a question -- for that video clip, what is the song playing at the time stamp you selected? Is that in-game music? I figure not.


Don't know the song but that is the ingame music for that specific map, and gets triggered each time certain enemy buildings are destroyed. Different music for each type of building


Very helpful, ty vm.


Likewise - I've been learning Rust for four years now (significant C/C++/Python/Lua experience), and have written some reasonably complex apps in it, but I really just didn't get the Bevy / ECS "hype"...

I've tried to write several different types of games using it (with Bevy) in the past three years, and it just feels like shoe-horning something in.

But the biggest complaint I have with Bevy is all the refactoring that's been needed for the Bevy version upgrades: getting the code to compile again after an upgrade has normally been fairly easy - but it then often didn't work correctly, and I'd have to spend time debugging the ECS system to work out what was wrong.

i.e. the "if it compiles, it'll almost certainly work" bonus of generic Rust code totally seems to fall down within Bevy.

I obviously understand that it's an in-development framework, in its early days, so some of that's on me for choosing it, but still, it's been a very painful experience, and I feel I've wasted a fairly significant amount of time over the past few years attempting it.


CPUs are way way faster, but RAM latency has barely improved in the past couple decades. That's why cache-optimized systems like ECS can still be a dramatic improvement when you're simulating a lot of stuff. Like, thousands of active objects.


They can be improvements, but you can do Data-Oriented Programming without ECS systems, i.e. Structure of Arrays, which is what we often use in rendering/simulation for VFX for SIMD/GPU compute...

But similarly, ECSs can be slower if they don't have some optimisations, i.e. spatial data structure lookups: just using a generic ECS "database" system without any first-class spatial knowledge / acceleration-structure lookup ability is likely going to be slower.
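For reference, a minimal sketch of the Structure-of-Arrays layout mentioned above (made-up particle fields):

  // Each field lives in its own contiguous Vec, so a pass that only touches
  // positions and velocities streams through tightly packed arrays.
  struct Particles {
      pos_x: Vec<f32>,
      pos_y: Vec<f32>,
      vel_x: Vec<f32>,
      vel_y: Vec<f32>,
  }

  impl Particles {
      fn integrate(&mut self, dt: f32) {
          for i in 0..self.pos_x.len() {
              self.pos_x[i] += self.vel_x[i] * dt;
              self.pos_y[i] += self.vel_y[i] * dt;
          }
      }
  }

  fn main() {
      let mut p = Particles {
          pos_x: vec![0.0; 4],
          pos_y: vec![0.0; 4],
          vel_x: vec![1.0; 4],
          vel_y: vec![0.5; 4],
      };
      p.integrate(1.0 / 60.0);
      println!("{:?}", (p.pos_x[0], p.pos_y[0]));
  }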


> I think about Starcraft from 1998, created when virtually all PCs only had one core, and its 200 unit per faction cap. Blizzard hasn't increased this cap because it doesn't necessarily make the game more fun.

That small scale was exactly why the game ends up being so much about micro, and while that may make it more competitive or interesting for spectators it makes it a lot less fun to play IMO. Total Annihilation and successors were a lot more fun, and a big part of that was not having arbitrary unit caps in a way that affected gameplay; expanding the limit from 500 to 1500 did genuinely make the game more fun.


Agreed. Truly exponential economy makes late-game Total Annihilation/Supreme Commander/Beyond All Reason far more fun than Starcraft.

Starcraft late-game is hoarding resources and running out your opponent's patience while looking for a slip up.


Supreme Commander's very high unit cap leads to crazy end games. It allows your economy to grow exponentially for the entire game, which has a big impact on late-game strategy.


Many people are choosing Rust because they want to use Rust. Rust has the reputation of being "the best" programming language: fast, safe, reliable, modern. I can relate to that very much. Who wouldn't like the feeling of using the best tech for their project?

What many overlook is that using Rust has very high costs, while the edge over alternative languages is often only marginal - depending on the use case, of course.

Those costs of Rust get in the way of developing the actual product. You lose speed and efficiency, but potentially gain no benefit for the users of your product.


>but few games will actually leverage this effectively for fun gameplay

In my opinion this is a result of big mobs having poor performance. When players get to choose they seem to like having more mobs thrown at them.

This can also be limiting for interactable objects.


Allan Blomquist's tooling demo they mention is incredible, go watch it:

https://www.youtube.com/watch?v=72y2EC5fkcE

Really sells the value of having a tight developer feedback loop: it shows hot reloading for code and graphics, a reversible debugger, live profiling with flame graphs, a data inspector with data breakpoints, time travel inspection with a scrub bar, session sharing and replay with the same scrub bar and direct links from the call stack to a breakpoint, and more.

Above the many niggles they had with Rust itself, this greatly helps me understand why Rust left them wanting more from their working environment. They say they've switched back to Unity with https://hotreload.net/ to try to capture some of that, and now I see why. (It's a shame that hot reloading tooling in Rust wasn't ready for them yet, but I see why they've moved on instead of waiting/contributing.)


Yeah, this is the bit that stood out for me, too.

Does anyone knowledgeable here have a sense for whether there are any insurmountable roadblocks to bringing hot reload to Godot?


Godot 4 already has hot reloading.


Not for shaders while the game is running? https://github.com/godotengine/godot-proposals/issues/5269


For testing shaders you can use the editor, at least.


Hmm so it reloads godot-rust/gdext while the Godot environment is running (recentish new feature), but can it also reload while the game itself is running?

I should really give it a shot..


My impression of Rust is that it's a very opinionated language that wants everybody to program in a specific way that emphasizes memory safety above everything. That's a good idea, I think, for the systems programming use cases that it was intended for. I don't see that as a particularly useful thing to value for game development. The part in the article about the Rust borrow checker constantly forcing refactors sounds extremely obnoxious to deal with.

I'd think that an ideal game dev language would be programmer time efficient, reasonably performant and designed for skilled programmers who can handle a language filled with footguns. Basically a better version of C such as a selective subset of C++ or a Golang without garbage collection. I just don't think the kinds of security bugs you get from C/C++ "unsafe" code are that big of a deal for games but they would be for a web site or an enterprise database.


> I just don't think the kinds of security bugs you get from C/C++ "unsafe" code are that big of a deal for games but they would be for a web site or an enterprise database.

Even for database engines specifically, modern C++ is essentially as safe as Rust and significantly more ergonomic. Rust's safety features can't reason about the case when all of your runtime objects live in explicitly paged memory with indefinite lifetimes and no fixed memory address, which is the norm in database kernels. You have to write the same code to make handling these objects safe and correct in Rust that you have to write in C++. You can't use normal pointers and allocators for this even if you wanted to.

Rust's safety is designed more for normal dynamic memory applications.


> modern C++ is essentially as safe as Rust

This isn't even close to being true. I think memory safety isn't as important for games as it is for most software (though it is still quite important for multiplayer games!). But even if you write the most modern C++ possible I guarantee you are going to spend some of your time debugging segfaults, memory corruption and heisenbugs. Don't try and claim "I don't write bugs". Everyone does.


That assertion was specifically qualified in the context of database engines, for which it is true. I definitely write bugs but I haven't seen a segfault or memory corruption in years. That is more of a C thing than a C++ thing.

It is kind of difficult to have a segfault or memory corruption with explicitly paged object memory, since there can't be any pointers and these complex objects are bound-checked at compile-time. If you care about performance and scalability, you don't need to concern yourself with multi-threading as an issue either. The main way you'd expect to see memory corruption is if you try to read/write a page in the middle of a DMA operation to the same memory, and Rust doesn't help you with that either (though this would be just a normal logic bug in the scheduler).

It is pretty easy to avoid segfaults and memory corruption in modern C++ if the software architecture doesn't allow you to create the conditions under which those are likely to occur.


So you're saying if you write your database engine in C++ you're not going to see any segfaults?

https://jira.mariadb.org/browse/MDEV-14248?jql=text%20~%20%2...


That is significantly dependent on the software architecture. MariaDB's design is not particularly modern (not a knock against MariaDB, it is an older system) and employs none of the software architecture required for high-scale and high-performance kernels that, as a side-effect, makes it difficult to accidentally create the conditions for a segfault regardless of the language. The design motivation is actually optimal dynamic resource scheduling under heavy unpredictable workloads, not memory safety. Rust's borrow checker doesn't work with these memory models, so you'll be in the same boat as C++ regardless.

I always found it theoretically interesting that schedule-based safety architectures, which are focused more on optimal resource allocation than safety per se (its all about extreme throughput traditionally), asymptotically converge on memory safety too as a practical matter for the same reason they also require almost no locking. By doing the safety analysis (many kinds, not just memory) at runtime, tiny dynamic modifications to the execution schedule are sufficient to provably (using TLA+ and similar) avoid many types of "unsafety" without the design compromises required to enable some of this analysis at compile-time. It requires a non-traditional software architecture, and it doesn't play nicely with a lot of existing code, due to the level of execution control required but I see more and more systems being designed this way at the high-end of the data infrastructure market.


Sounds interesting. What reading on such modern architectures would you recommend?


No, they're saying that you will still see segfaults if you write it in Rust, because Rust's borrow checker is unusable in that environment.


That doesn't seem to be at all what they're saying, but in any case I checked SurrealDB (biggest Rust DB I could find) and there was exactly one report of a segfault and the developers couldn't reproduce it.

As far as I can tell about 5% of mariadb bugs mention segfaults, compared to 0.2% for SurrealDB.

I mean it's fairly obvious that even if some code in a Rust database is `unsafe` because it deals with manual paging and DMA and whatever, most of the code is going to be safe code.


This kind of extends to embedded/low level systems programming as well - the assumption that memory can only change as an effect of program execution just does not hold true there. What's the value of tracking mutability and data ownership when a DMA engine can just decide to overwrite the memory you supposedly have exclusive access to?


  > I'd think that an ideal game dev language would be programmer time efficient, reasonably performant and designed for skilled programmers who can handle a language filled with footguns. Basically a better version of C such as a selective subset of C++ or a Golang without garbage collection.
I agree so much that I've been working on this for a whole year.

There is a sweet spot: non-GC, with pointers (but bounded), inference, basic OOP + tacking, and all the comforts of scripts. All in a good-looking syntax without semicolons.

So you can program fast and get a fast program.


For me, this is Odin-Lang, it doesn't meet all the requirements you have listed, but it's ergonomic, fast, and comes with extensive core and vendor libraries. It's all just fun and reasonable.

https://odin-lang.org/


Oh, that's quite on the mark!

Nitpicking: I'm not fond of reserving keywords like len or append.

  len(arr)
  append(arr, v)
Better is

  arr.len
  arr.append(v)
Also

  x: [dynamic]int
is quite verbose

Maybe better would be

  x: [int]   //dyn
  x: [int,2] //fixed


Seeing presence/absence of semicolons in the list of primary features makes me wary.

And it takes a lot of people to make good tooling.


I'm 100% fed up with typing those damned semis all the time. That's the very initial reason I embarked on a dialect of C. (That and strings.)

They're mostly useless and a visual annoyance.


IMO it’s not opinionated enough. Golang for example doesn’t let you customize go fmt, while Rust does. Rust also has many ways to do things in general (mod.rs vs name_of_folder.rs for example) and seems to not want to provide a useful baseline for most projects via its standard library (unlike Golang).

But to go back to our subject: Rust is a great language and that’s all you need. I wish I could use it with unity.


The mod.rs vs folder .rs stuff is just embarrassing for Rust. I have no idea why they support more than one method.


They do it because the first one sucked ass and the second is a huge improvement, but you still have to support the old way.


Which is which and why?


mod.rs came first, and that made creating modules verbose as hell. Not only that, but imagine having many mod.rs files open: you wouldn't know which module you are in just by looking at the filename.
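For anyone following along, the two layouts for a made-up module `net` with a submodule `tcp` look like this:

  // Older style:                      Newer style:
  //   src/net/mod.rs                    src/net.rs
  //   src/net/tcp.rs                    src/net/tcp.rs
  //
  // In both layouts the parent file declares the child the same way:
  //
  //   pub mod tcp;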


imagine having many lib.rs open?


mod.rs is better, change my mind


Sure, open 5 tabs named mod.rs, so which mod are you in?


Open 5 lib.rs files, which crate are you in? Always look at the full path instead of the filename. A mod.rs means a much cleaner organization, as you know that all of a folder module's content is within that module, and you don't have to look elsewhere.


Yes, this would be a problem if you had to create a lib.rs for every module, which is not the case. mod.rs has a much higher number of occurrences.


I’ve worked in plenty of codebases that were split in many crates, not many folder modules


I speculate that they wanted to support just `mod.rs`, but because VS Code became the editor for Rust, people hated seeing all their tabs named `mod.rs`.


Golang is indeed much more opinionated. To quote @bcantril, "Go is like steampunk for programming."


Most modern languages are memory safe and they don't get called out for emphasizing that or being opinionated. I think with Rust that attention results from its choice of memory management model which gets in the way a lot in ways described in the article.


> My impression of Rust is that it's a very opinionated language

It's not though. There's only one thing Rust is opinionated about.

> that wants everybody to program in a specific way that emphasizes memory safety above everything.

Well yes, that is literally the core proposition and purpose of the language. That's like saying java is opinionated because it wants to manage the memory.

> I just don't think the kinds of security bugs you get from C/C++ "unsafe" code are that big of a deal for games

As soon as games are networked it starts being a problem, and these days non-networked games are pretty rare.


Rust also is opinionated in that it assumes you don't want to write shared libraries or plugins. You can do both, but only if you drop down to memory-unsafe C interfaces. The default is to statically compile all applications into one program. Rust also really wants you to use its build system and package manager; you can avoid both, but everything will fight you.


> Rust also is opinionated that you don't want to write shared libraries or plugins.

Not having a solution is not the same as having an opinion.

If you have years to spend plugging away at ABI stabilisation, generics and proc macros in dynamic linking, and a redistributable std, I’m sure the core devs would be happy for you to.

> Rust also really wants you do to use their build system and package manager, you can avoid both but everything will fight you.

What do you mean everything will fight you? It sounds like you’re confusing rust and its ecosystem.


A language is in large part the ecosystem. If I can't use the expected ecosystem I can't search for answers.


> As soon as games are networked it starts being a problem, and these days non-networked games are pretty rare.

So have your network protocol parser in Rust and the entire rest of your game in whatever the hell you want.

All the practical safety you could desire without any language constraints for the other 99% of the code


You more or less described Zig


Sort of, although Zig certainly pushes itself towards the embedded world. I have tried Zig a bit and like it a lot, and I am sure it would be better for game dev than Rust, but I don't want to pass allocators around all day to all the objects in my game.

Go without GC is more like a Go and Zig baby.


nothing really prevents you from defining global allocator in Zig

and having explicit allocators in the standard library is actually a good thing, because it's quite common in game development to use arena allocators that are freed once per frame - so you don't really need to reinvent your data structures in Zig

I do have some concerns about Zig because it also introduces some friction for correctness's sake, like requiring all variables to be used and explicit casts everywhere - I want some compiler toggle to disable all of that and focus on the problem, but unfortunately it's not there

I am playing with Zig now and haven't really formed my opinion about game development specifically but I like it a lot so far


To be fair though that sort of friction only affects things in the small. They can be annoying but you'll never have to refactor outside the scope of the friction itself.

In practice, it's only really a problem if you're doing codegen.


Zig is flexible. If you don't like global allocators or explicitly passing allocators, you can store pointer to the allocator in your object and it will be passed implicitly.


Odin is almost precisely that, and has many useful gamedev features and bindings.


> I'd think that an ideal game dev language would be programmer time efficient, reasonably performant and designed for skilled programmers who can handle a language filled with footguns

Sounds like Common Lisp or OCaml would work well. With OCaml I find myself able to iterate extremely quickly because of the inferred types and extremely fast compilation times. You also have the ability to tweak the GC to your needs, and the assembly is easy to read.

Lisp is well… built for interactive development


This is a sobering read. Thank you for sharing.

This sums it up for me:

> Rust as both language and community is so preoccupied with avoiding problems at all cost that it completely loses sight of what matters, delivering an experience that is so good that whatever problems are there aren't really important. This doesn't mean "ship crap games", it means focusing on the game being a good game, not on the code being good code.

I think this can be easily extrapolated to projects outside of game development as well.

User experience is ultimately all that matters. If you're in prototyping stages of whatever it is you're building, and games spend a lot of time in this phase, then your focus should always be on testing what the user experience will be like, rather than absolute code correctness, maintainability, and everything else that makes a long-term project successful.

The fact Rust seemingly can't deliver this rapid prototyping workflow should be a large factor when deciding which language to use.

I've been using Go as my main language for the better part of a decade now, and I think it strikes the perfect balance of code quality and rapid prototyping. It's far from the side of absolute freedom of a language like Python, which becomes a nightmare to work with after the prototyping phase is over (though this might have improved in the past few years), but it's also far from languages like Rust, and allows me to be very productive, very quickly, while also being easy to pick up for newcomers. I probably wouldn't pick it for GUI or game development either, though, but for things like CLI, network and web tooling, it's perfect.


To be fair, many (non-game dev) Rust projects I have seen/used do provide great user experience precisely because they are laser-focused on performance and have blown existing alternatives out of the water. (Think ripgrep, fzf, etc.)

Prototyping is certainly necessary, but it shouldn't come at the cost of runtime performance - at least not too much - because it will typically be very difficult to improve performance after the fact, which web development frameworks, and in particular shitty "web" applications like MS Teams, are a testament to.

As always, it's about balance.


> ripgrep, fzf

I think these are great examples of where prototyping and rapid iteration are really not needed at all, and hence Rust shines here.

Writing a game is completely different.


The initial version of ripgrep was absolutely a rapid prototype.

I do rapid prototyping all the time.

I'm not saying Rust is good for game dev, but the idea that Rust cannot be used for rapid prototyping in any context is a myth.


Indeed, I personally find Rust to be very nice for rapid prototyping, incremental recompilation is usually a second or two even in my giant projects (mold helps on the linking step but that's less of a rust thing anyway), and I'm very curious how cranelift will change things in the future, it would be nice to hot swap function implementations on the fly at least.


Are there any particular techniques or styles that stand out to you as useful when prototyping in Rust?


`clone()` and `unwrap()` and `todo!()` without fear. Just let it loose.

Prototyping, for me, is about finding shortcuts to demonstrate something that is unknown to you. The idea is that shortcuts represent things you know how to do, but would otherwise take work to avoid and aren't necessary for demonstrating the thing that is unknown. `clone()` and `unwrap()` are just Rust-specific examples of that.
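
A minimal sketch of what that looks like in day-to-day Rust (all the names here are made up for illustration):

  // Prototype-mode Rust: clone freely, unwrap freely, stub out the rest.
  #[derive(Clone, Debug)]
  struct Enemy {
      name: String,
      hp: i32,
  }

  fn spawn_wave(template: &Enemy, count: usize) -> Vec<Enemy> {
      // clone() instead of fighting the borrow checker over ownership
      (0..count).map(|_| template.clone()).collect()
  }

  // todo!() for the part you haven't designed yet; it compiles and panics if hit
  fn boss_fight() {
      todo!("decide what the boss even does")
  }

  fn main() {
      let goblin = Enemy { name: "goblin".to_string(), hp: 10 };
      let wave = spawn_wave(&goblin, 5);

      // unwrap() instead of designing an error type nobody needs yet
      println!("{:?}", wave.first().unwrap());
  }

None of this should survive into production code, and that's the point.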


Fzf is written in go fwiw


Oops, looks like I misremembered. :) Thanks!


Ah, now I know what I confused it with: fd[0]. That one is written in Rust.

[0]: https://github.com/sharkdp/fd


> User experience is ultimately all that matters.

It should be, but the current state of the web shows that it often isn't.


Really? Why web?


Ads, popups, horrible laggy UI all to make money. It is not user experience, just money


Starting by saying I fundamentally agree wrt iteration speed. This is ultimately why [C/C++]/Lua was such a thing for a while, and it seems quite plausible that you could benefit from a core engine in rust bound to a scripting language.

But ultimately I sense the subtext here is much the same as with other Rust problems: the object oriented baby has been thrown out with the bathwater, often in the name of premature optimisation, but also with a sense of misplaced religious purity regarding the evils of state and the merits of functional programming. There never was any OOP law that your inheritance hierarchy had to be insane, or that you had to create classes for absolutely every last thing. Now we have people hitting the opposite extreme where everything has to go through the same function switched on a pattern matched enum. One of the core problems with Rust is it lacks the mechanisms to allow moving adequately out of this tarpit.

I still think Rust might have a place at the lowest level core where it is all about shuffling arrays of things through compute units, but for the higher level pieces it is clearly the wrong thing to be using.


> but also with a sense of misplaced religious purity regarding the evils of state

To clarify, Rust isn't against state at all. Rust bends over backwards to make mutation possible, when it would have been far easier (and slower, and less usable) to have a fully-immutable language. What Rust is against is global mutable state, and an aversion to global mutable state isn't a religious position, it's a pragmatic position, because global mutable state makes concurrency (and reasoning about your code in general) completely intractable.
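
A rough sketch of what that looks like in practice (assuming a reasonably recent toolchain; ordinary local mutation needs no ceremony at all, the friction only shows up when state goes global):

  use std::sync::Mutex;

  // Plain global mutable state: allowed, but every touch is `unsafe`,
  // because nothing stops two threads from racing on it.
  static mut FRAME_COUNT: u64 = 0;

  // The sanctioned version: global state behind a lock, safe to use from
  // anywhere because the synchronization is part of the type.
  static SCORE: Mutex<u64> = Mutex::new(0);

  fn main() {
      *SCORE.lock().unwrap() += 10;
      println!("score = {}", SCORE.lock().unwrap());

      unsafe {
          FRAME_COUNT += 1; // compiles, but you have opted out of the guarantees
      }
  }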


1000%, and this is something gamedevs (and webdevs) have been guilty of in the name of speed for quite some time. In both web and game dev, it's come back to bite when it's time to debug.

Concurrency is hard, and anything with a ton of user interaction or communication across multiple parties induces concurrency (never block the main thread and all).


YMMV, but I find C# and TypeScript to have a "Goldilocks" mix of OOP and FP that you can take advantage of the strengths of each where it makes sense.


You’re absolutely right, they are incredibly pragmatic.


I've been playing around with an idea about OOP for awhile, not sure if it'll ring true or not but I'll run it up the flagpole for feedback.

I think FP is a great way to program actions and agency but OOP is a great way to model the world. I like Rust's trait system because the polymorphism is based on what you want an object to do not what it is. But when you're creating models of the world it's usually really convenient and even accurate to use nested inheritance models. Maybe the original system for this is the flora/fauna taxonomy but it applies to a lot of things; like GUI elements or game models.

If this is correct, it might explain why the discourse is so polarized. Whether OOP is a blessing or a curse probably depends on whether you're using a programming language as a modelling language or as a logic/execution language.


It's not hard/fast but I would tend to agree with an approximation of this.

Back when I worked properly on big games the UI libs would often be trees of widgets with injectable functions for modifying the rendering, which is actually one of the points in this blog the writer would like. (The UI lib of classic Sims was exactly like that). These days the stuff I've done, although entirely in JS, at https://luduxia.com/ follows that pattern for the 3D components at least. The world is defined in an almost classic scene graph and then behaviour is added by attaching functions to pieces, which can be composed functionally.

Much of the anti-OOP noise is the result of people that have suffered from others creating hierarchies of the world too literally. Quite why it proves so difficult for developers to slow down and think about the right course of action is beyond me. They're also staggeringly resistant to changing afterwards.


>Much of the anti-OOP noise is the result of people that have suffered from others creating hierarchies of the world too literally.

I think one of the problems is that a hierarchy is inherently opinionated. You have to choose the criteria around how to group the objects/nodes in your graph, and that criteria is context-dependent. The example I've used with animal taxonomy is grouping your objects via things you can eat vs things you can pat. Those are two very different graph structures of the same group of objects, and if you started with one and then realized you need to change how you use your objects, you're gonna have a bad time. Multiple inheritance is the bandaid solution that usually comes with more hassle than it's worth.

Building a UI is a good example of a hierarchical structure where you know exactly how you want to use your objects and how they'll relate to each other. Not having access to that programming structure would be frustrating and just feel like a loss. But I've also done multiple large refactors of Python projects because I relied on OO inheritance models that turned out to not be quite the right implementation. In those situations, Rust traits are a breath of fresh air for offering the right kind of polymorphism.
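
To make that concrete, here's a rough sketch of the eat/pat example with traits instead of a hierarchy (names made up, obviously):

  // No taxonomy: a thing is "pattable" or "edible" because it implements the
  // behaviour, not because of where it sits in an inheritance tree.
  trait Pattable {
      fn pat(&self);
  }

  trait Edible {
      fn calories(&self) -> u32;
  }

  struct Dog;
  struct Apple;

  impl Pattable for Dog {
      fn pat(&self) {
          println!("wag wag");
      }
  }

  impl Edible for Apple {
      fn calories(&self) -> u32 {
          95
      }
  }

  fn pat_everything(things: &[&dyn Pattable]) {
      for t in things {
          t.pat();
      }
  }

  fn main() {
      let dog = Dog;
      let apple = Apple;
      pat_everything(&[&dog]);
      println!("{} calories", apple.calories());
  }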


I shall recommend this talk for you https://www.youtube.com/watch?v=rX0ItVEVjHc


Funny how at the end of the talk a senior looking guy asks him, “Why not just use C?” and the speaker basically admits that that would be his choice but for “cultural” reasons.


And then he went to work for Unity on HPC#, the C# Burst compiler toolchain.


… and monetary reasons :)


I love a long, deep-dive, nerdy talk like this! I'll check it out, thanks.


> OOP is a great way to model the world

There's a reason Simula was the first OOP language.

https://en.m.wikipedia.org/wiki/Simula


Absolutely agree. I think people saw that a lot of games are written in C++ and got confused into thinking that the right thing to do is to build your entire game in a systems language. The fact that we have games written entirely in C++ is mostly just due to the enormous amount of inertia that game engines have, and the fact that many of them have origins that go back decades to a time when the programming language landscape and broader development ecosystem were completely different.

And now, the most popular generally available game engines right now are:

Unity - C++ in the engine, C# for game code

Godot - C++ in the engine code, GDScript or C# for game code

Unreal - C++ in the engine code, somewhat mangled C++ for game code, BUT also with one of the most capable and widely used visual programming setups I have ever seen

I wouldn't be surprised if the "next great game engine" had a Rust core and some other language- I mean why not C# at this point?- for game code.


> I wouldn't be surprised if the "next great game engine" had a Rust core and some other language- I mean why not C# at this point?- for game code.

That's why, the first time I saw Bevy and Amethyst, I had an immediate "they're doing it wrong" reaction. IMHO, to be a true game engine in the modern sense, instead of merely a game framework, your engine needs to be a precompiled, standalone executable in a systems language that picks up, loads, and executes game scripts, data files, and assets that are totally separate from the engine itself and written in higher level languages. You don't want to be writing everything in a rigid systems language and especially don't want to have to compile your game logic and your engine together and then link them as if the engine were a library. That's why I'm (very slowly) feeling out what a proper game engine in Rust might be like with https://github.com/alexispurslane/embryo-engine. It will take me some time, since I have to learn real time computer graphics and a lot about game engines, but I've got the books, I've made a lot of progress in learning them, and the engine design and architecture is coming together very well in my big black notebook, so if you're interested, give the repo a watch ;)

I'm debating between Embeddable Common Lisp (to satisfy my hacker impulses) and C# for the scripting language. I have figured out, I think, how I'll embed C# in Rust down to some pretty detailed steps, involving a two stage process, but .NET is so heavy I'm worried whether it'll be worth it. I'd love input!


Yeah, that bifurcation makes practical sense. Have you considered WebAssembly for the game script layer?


Someone's brought that up to me before — I should probably consider it more seriously, because it'd be very cool


Isn’t the trait system exactly the way out of this tar pit?


Obviously that is the intention, but the absence of libraries that manage to replicate what people manage fairly easily in other paradigms does show it's not sufficient.

golang is similar in this regard - it has interfaces and you can compose type structs, but the results become an unwieldy mess unless the developers are staggeringly disciplined, in which case they'll have a better time in something else anyway.


This makes a lot of sense. Wonder if Bevy will add support for GDScript or C# or something. I think it's generally opposed in the interest of ensuring the Rust devex is as good as possible, but it's coming eventually, I think.


A general Bevy scripting crate has already been in the works; they were waiting on various things like bevy_reflect and other such features to be able to work properly, and so forth. In other words, it's already planned and there's a lot of work being done on it, with a whole lot of dependent functionality coming out in every single release.


I only skimmed this article, but, despite it being very negative about Rust, I almost 100% agree with it: Rust is a HORRIBLE choice for game dev. I might quibble with how they outline the costs and benefits of some of the design patterns that rust forces you into, for instance I think command lists are actually incredibly useful and perfectly fine as a game development thing and not the huge problem they consider them to be, and generational arenas basically solve any pointer ownership problems in game development in my opinion, but they are right in the main.
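
(For anyone who hasn't seen one, the rough shape of a generational arena is something like this hand-rolled sketch; in practice you'd reach for an existing crate rather than writing your own:)

  // Minimal generational arena: a stale handle returns None instead of
  // dangling. (This one only appends; a real one would reuse free slots.)
  #[derive(Clone, Copy, PartialEq, Eq, Debug)]
  struct Handle {
      index: usize,
      generation: u32,
  }

  struct Slot<T> {
      generation: u32,
      value: Option<T>,
  }

  struct Arena<T> {
      slots: Vec<Slot<T>>,
  }

  impl<T> Arena<T> {
      fn new() -> Self {
          Arena { slots: Vec::new() }
      }

      fn insert(&mut self, value: T) -> Handle {
          self.slots.push(Slot { generation: 0, value: Some(value) });
          Handle { index: self.slots.len() - 1, generation: 0 }
      }

      fn remove(&mut self, h: Handle) -> Option<T> {
          let slot = self.slots.get_mut(h.index)?;
          if slot.generation != h.generation {
              return None;
          }
          slot.generation += 1; // invalidates every outstanding handle to this slot
          slot.value.take()
      }

      fn get(&self, h: Handle) -> Option<&T> {
          let slot = self.slots.get(h.index)?;
          if slot.generation == h.generation { slot.value.as_ref() } else { None }
      }
  }

  fn main() {
      let mut arena = Arena::new();
      let goblin = arena.insert("goblin");
      assert!(arena.get(goblin).is_some());
      arena.remove(goblin);
      assert!(arena.get(goblin).is_none()); // stale handle, caught at runtime
  }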

I know, because I've tried it. Once. I would *never* recommend Rust to game developers, especially not indie ones. In fact I'd recommend against it strenuously!

And this is precisely because Rust is explicitly and knowingly focused on correctness, safety, perfectly clean code, etc at the cost of iteration speed and flexibility and dynamism, and that's bad for designing game mechanics and even just getting a game done — games have an inherently short life span and development cycle, so safety and correctness and code quality don't matter a whole lot. It's okay if they crash, etc, as long as they work enough to play. It's okay if the code is ugly, you probably won't be working on it for very long. This is even more so the case, as the author says, because in writing a game you really want to be able to iterate quickly and just do shit as an experiment, even if it's temporary, to see how it feels.

On the other hand, who I would recommend Rust to is the people writing game engines, where you really will probably be working on that code for years to come, where stability and correctness is pretty important, and so where Rust's strengths will really shine — but crucially, even then, I'd tell them to make it a real engine, not a game framework like Bevy, by adding a highly flexible, dynamic scripting language like Lua or even C#, and a data format for specifying scenes and entities, and an editor. That way you don't write your game in Rust at all!


Quick note: I'm actually very slowly prototyping something like this here: https://github.com/alexispurslane/embryo-engine/

I'm disabled so I don't have a lot of energy to work on it often, but, especially once I nail down the last few design issues, I'd really love help, or even just a few eyes on the project to encourage me ;)


I like it, I read the Design Document. Do you have any game concepts you are building with it? It seems like the kind of project that would be built with a game side-by-side.


I do in fact have a game idea I sort of have in mind while designing the engine, yes, but I don't have the bandwidth to do both at once unfortunately.


> Making a fun & interesting games is about rapid prototyping and iteration, Rust's values are everything but that

I found this to be true of C after many, many years coding in C. I noticed that the first selection of data layout stayed throughout the life of the code (with a lot of tweaks, additions, etc.). But didn't really think that much about it.

Until I started writing code in D. It was easy to change the data layout, and I did for experimenting. For example, changing a reference type to a value type, and vice versa. This was easy in D. It's just too much work in C, so it didn't happen.

The reason is simple:

    p->b
    v.b
To switch between a ref and a value type, you've got to search/replace the -> into ., and the . into ->, and not disturb the dots and arrows of the other types. When dealing with 100,000 lines of code, this is a non-starter.

But with D, both reference and value types are used as:

    p.b
    v.b
making it easy to switch between the two, and also switching function parameters from values back and forth with references.


I agree, and D with the GC lets me prototype quickly. Its type system gets progressively stricter with constraints, for code that survives. I wouldn't want all the type system to apply to prototype code or a nascent program.


I tend to add in the constraints only after the code works. Two different parts of my brain.


The power of attributes and get/set properties in languages like C# is similar. It's easy to turn a plain value into a computed property. Or to move a value into a sub-object without callers needing to write a.b.c (after moving c into b), because you can add a c => b.c property to the a class.



That's a good example. I actually think dot syntax is really under-utilized sometimes. Although personally, I'd prefer that if v was a reference/pointer to a struct, that v.b simply performs a pointer offset, instead of auto-dereferencing like ->.
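
(For what it's worth, Rust lands close to the D side here, at least for references: field access is a single dot and the compiler inserts the derefs. A tiny sketch:)

  struct Point { x: f32, y: f32 }

  fn main() {
      let v = Point { x: 1.0, y: 2.0 };
      let p: &Point = &v;
      let pp: &&Point = &p;

      // Same dot syntax whether you hold the value, a reference, or a
      // reference to a reference; the compiler inserts the derefs for you.
      println!("{} {} {}", v.x, p.y, pp.x);
  }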


Same idea in Eiffel: in a.b, b can be either a member or a function, to allow easy replacement.

On the opposite end, other languages want to have no hidden function calls and no hidden pointer dereferencing.


Whenever I watch someone coding C(++), all they do is compile and then add/remove * and & or change between -> and . when the compiler complains.

Multiply that by all the C(++) coders on the planet and we have lost a billion man hours...


I've worked on Ambient Engine and now on the Bevy engine. I totally agree with these points, very valuable.

I only make some comments from my professional (audio) perspective:

I want to highlight the author's affirmation of CLI tools. Rust's TUI library (ratatui) is great. I used it to make glicol-cli [1]. If you are a Linux user, you are welcome to try it and make music with code in the terminal.

Speaking of game audio, I actually think Rust is perfect for audio. I have also continued to develop Glicol [2] recently, and my next goal (starting tomorrow) is a bevy_glicol plugin. I want to solve Bevy's audio problem in the browser.

All in all, even though I've had my share of pain with ECS, I still think Rust is very valuable for game and app development, maybe not for multiplayer AAA, but certainly for practical apps.

[1] https://github.com/glicol/glicol-cli

[2] https://github.com/chaosprint/glicol


This looks very cool. Are there more videos of glicol being used live?



I think the crux is this heading: Making a fun & interesting games is about rapid prototyping and iteration, Rust's values are everything but that

Jon Blow said that in one of his talks: Rust treats all code as production code. For most of the duration of a project, that's counterproductive, because it introduces a significant amount of unnecessary friction.

For most of a game's development, you're trying to figure out what the game's supposed to be. Only later does it crystallize. Rust doesn't recognize the non-crystalline phase, or rather explicitly rejects it as invalid.


As much as I love Rust I sometimes wonder if I'd be more productive in a simpler language. If I wrote it every day I'm not sure that would be true, but as a hobbyist coming back to Rust sometimes takes me a bit to get back in the zone. Also, still not a fan of async, as it is woefully incomplete and fairly complicated in some use cases. That said, I just can't go back to Go with nil pointers and lack of decent enums/ADTs/pattern matching either. I long for the "in between" language, but with an amazing 3rd party ecosystem as both Rust/Go have.

NOTE: I'm not a game dev


Maybe people will make fun of me, but I've been very happy with Kotlin and Dart. Null-safe, good ergonomics, very fast.

I've tried Rust, sometimes play with C, D, Deno/TS, Nim, Java (actually I still write lots of it) and even some more cutting-edge stuff, like Unison. While they're cool, what I want is a language with really good tooling that gets out of my way without letting me write patently dumb code (like Java lets me use any object without checking for null, when it can be null but the language just doesn't give a shit to help me).

I use Dart when I want to compile to binary executable or use Flutter, and Kotlin for stuff I think the JVM has more to offer, like a server. The two languages are just a pleasure to use, pretty similar but having completely different ecosystems (which is great, you can use the best one for the job!).


I'm glad you found tools/languages that work for you. Kotlin felt a little too much like Java to me. If I stuck with a JVM lang. I'd probably go back to Scala 3, but I just don't like the JVM as a user (just sucks too many resources).


I wrote a Flutter package[0] that wraps the Filament 3D renderer, which I used to make a mini game for a Flutter game competition:

https://devpost.com/software/escape-from-heat-island

(Judging is still ongoing and votes would really be appreciated! It would help me to get more resources to work on the underlying package).

This was my first ever “game” (tech demo, really), and I’m not a game dev, so take this with a grain of salt - but I do think there’s a lot of potential for Flutter/Dart as a game framework. Hot reload makes iterating on game logic very fast, you obviously get the UI toolkit and cross-platform support straight out of the box, and the language itself is (relatively) concise, so it lends itself well to gameplay programming. When you need to get your hands dirty at a lower level, you just drop down to C++ (or whatever engine you can expose via FFI).

I think Google believe that Flutter can nab market share from Unity in casual 2D games (hence their official sponsored competition), but I think it has even more potential than that. In fact, I’ve seen at least two game companies (Supercell and another whose name I’ve forgotten) hiring for people to work on embedding the Flutter engine in various platform games.

[0] https://github.com/nmfisher/flutter_filament.git


I like Kotlin but I find the fact that it lacks a good way to detect and handle possible errors very frustrating. If some function can fail on sane-looking input I'd like to know about it


Flutter sounds so awesome, but I really want something like OpenGL for it.


OCaml could be that language.

I’m not convinced that ecosystem is so important for game dev. Once you have a simple graphics library, bindings to BulletPhysics, etc., most of the code is custom simulation code with no integrations needed.


I can echo OCaml. It is probably the most underrated language imo. It has great compilation times, macros, type inference, good tooling. With few exceptions, the library ecosystem doesn’t suffer from the same overengineering issues as Haskell and types are kept relatively simple. It has simple runtime characteristics making it easy to optimize performance when needed, although it tends to be very fast in general.


last I used OCaml, the standard library situation was abysmal (compared to say Haskell's), and you had to go searching for third-party "batteries included" crates to cover simple stuff. Has that gotten any better the last few years?


> I just can't go back to Go with nil pointers and lack of decent enums/ADTs/pattern matching either.

Go is simply a badly designed language where the idea of "simplicity" has been misapplied, and proven-bad ideas like nil/null, exceptions, and such have been introduced into a seemingly modern language. One would think that decades of Java, JavaScript, etc. code blowing up because of these issues would teach someone something, but it seems that is not always the case.


Having worked with Go for about a decade now, I largely agree that nil is a pain in the ass to work with, and the language has largely done nothing to make it better. However, Go (mostly) doesn't have exceptions. Ordinary problems are represented by non-nil errors, which are values. Panics exist but really are reserved for exceptional situations (with the number 1 cause being, of course, dereferencing nil).


Nil in go is my biggest gripe with the language. Why keep repeating “the billion dollar mistake” in a relatively newly designed language??


There are a lot of ways in which Go handles nil better than C handles NULL. At the very least, a panic is better than a segfault. And carefully written code can avoid most kinds of nil panics entirely. So I guess the language's authors thought this would be enough to overcome the mistake. But I don't think they went far enough. The very limited type system and the lack of nil-safe operators make it not very ergonomic to write and read such "carefully written" code; design decisions in some key parts of the standard library completely undermine the language's attempts to minimize nil; and then there's the "untyped nil" default value for interfaces, which panics if you just look at it funny.


"code blowing up because of this issues"

I ran into these issues all the time with Java, C++, and Python projects.

But it's just not the experience of running Go in production, which I've been doing for over 10 years now, across many projects with many devs.

In practice, nil checks are just not very difficult to include everywhere. And experienced Go programmers don't use exceptions (panic/recover) almost ever.


What you said is:

1) Anecdotal

2) Based on faith that someone will not forget to do something instead of a well documented mechanism in the language that could block that from the start

Having nil/null to handle empty references is simply very bad design and there's decades of examples why. The correct way is using a two-value type like Option, Maybe, etc. so that the (possibility) of the value missing is actually encoded in the type system
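
Concretely, the difference looks something like this (Rust used just because it's the topic of the thread; names made up):

  // The possibility of "no user" is part of the type, so the compiler won't
  // let you use the value without handling the empty case.
  fn find_user(id: u64) -> Option<String> {
      if id == 42 {
          Some("alice".to_string())
      } else {
          None
      }
  }

  fn main() {
      match find_user(7) {
          Some(name) => println!("hello {name}"),
          None => println!("no such user"), // deleting this arm is a compile error
      }
  }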


And yet it is incredibly productive. The poster that contrasted engineers with artists got it right I think. Go is an engineer’s language.


> Go is an engineer’s language.

No, Ada/Spark is an example of a good engineer's language. Go is a mediocre effort at best. Rob Pike's defence is that it was designed for junior Googlers who "aren't capable of understanding a brilliant language". Yes, that's a real quote.


What’s a brilliant language in this context?


You'd have to ask Rob Pike. My own example of a brilliant language is Haskell, but it's not without problems.


Apparently anything that offers a type system beyond Go's.


Go is great


I've been hoping to see more languages that compile to Go, it might be the most practical way to arrive at what you want. For example, see Borgo: https://news.ycombinator.com/item?id=36847594


> I've been hoping to see more languages that compile to Go

One could also use Go-Assembler as a cross-platform assembly target: https://go.dev/doc/asm


I keep toying with the idea of creating one. :-)

Borgo looks neat - hadn't seen it before.


Go (!) for it! I'll try it! :-D

If you do, I'd ask you to give some thought to FFI efficiency. I've been wondering if there could be a good way in a transpiled language for the type system to validate at compile time that any pointers passed to C are pinned, so you can safely run the FFI calls with cgocheck=0 which eliminates quite a bit of overhead.


As much as I dislike many of the legacy issues with JavaScript, I find TypeScript to be the best language to iterate with. If Rust had a GC without the need for wrappers like Rc, though, I think that would be my preferred iteration language. I mostly try to write my TS in a manner that would translate to Rust, but that's hard to do sometimes when it comes to memory management.
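
(For context, the wrapper noise in question looks roughly like this in single-threaded Rust, versus just holding a reference in TS; names made up:)

  use std::cell::RefCell;
  use std::rc::Rc;

  struct Inventory {
      items: Vec<String>,
  }

  fn main() {
      // Shared, mutable, single-threaded state without a GC means spelling it
      // out: Rc for the sharing, RefCell for the interior mutability.
      let inventory = Rc::new(RefCell::new(Inventory { items: vec![] }));
      let for_the_ui = Rc::clone(&inventory);

      inventory.borrow_mut().items.push("sword".to_string());
      println!("{} item(s)", for_the_ui.borrow().items.len());
  }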


It doesn't occupy the same space, but the simplicity of Gleam has been very enjoyable to me. It's still quite a young language though, but worth keeping an eye on.


For me, the closest language currently is F#.

The open-source ecosystem is not as massive as Go's or the JVM's, but it's not niche either. F# runs on .NET and works with all .NET packages (C#, F#, ...). If the .NET ecosystem can work out for you, I recommend taking a closer look at F#.

F# allows for simple code, which is "functional" by default, but you're still free to write imperative, "side-effectful" code, too.

I find this pragmatic approach works extremely well in practice. Most of my code ends up in a functional style. However, once projects grow more complex, I might need to place a mutable counter or a logging call in an otherwise pure function. Sometimes, I run into cases where the most straightforward and easy to reason about solution is imperative.

If I were confined to what is often described as a "pure functional" approach, I'd have to refactor, so that these side-effects would be fully represented in the function signature.

F# ticks the enums/ADTs/pattern box but also has its own interesting features like computation expressions [0]. I would describe them as a language extension mechanism that provides a common grammar (let!, do!, ...) that extension writers can target.

Because of this, the language doesn't have await, async or any other operators for working with async (like C# or TS). There's an async {} (and task {}) computation expression which is implemented using open library methods. But nothing is preventing you from rolling your own, or extending the language with other computation expressions.

In practice, async code looks like this:

  let fetchAndDownload url =
    async {
        let! data = downloadData url // similar to C#: var data = await downloadData(url);

        let processedData = processData data

        return processedData
    }
I often use taskResult{}/asyncResult{}[1] which combine the above with unwrapping Result<>(Ok or Error).

Metaprogramming is somewhat limited in comparison to Scala or Haskell; but still possible using various mechanisms. I find that this isn't a big issue in my work.

IDE-wise, JetBrains Rider is a breeze to work with and it has native F# support. There is also Visual Studio and VS Code with Ionide, which are better in some areas.

You can run F# in Jupyter via .NET Interactive Notebooks (now called "Polyglot Notebooks" [2]). I haven't seen this mentioned often, but this is very practical. I have a combination of Notebooks for one-off data tasks which I run from VS Code. These notebooks can even reuse code from my regular F# projects' code base. Over the past years, this has almost eliminated my usage of Python notebooks except for ML work.

[0]: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...

[1]: https://github.com/demystifyfp/FsToolkit.ErrorHandling?tab=r...

[2]: https://marketplace.visualstudio.com/items?itemName=ms-dotne...


Do you know of or have any shareable (sample) projects implemented in your way of doing F#? It sounds very intriguing to me


While the libraries and techniques I mentioned above seem to be well-known, I couldn't find a good public sample project.

I can recommend https://fsharpforfunandprofit.com/ as a starting point.

If there's interest, I can split some of my code into stand-alone chunks and post my experience of what worked well and what didn't.

I wanted to share some thoughts on here on what brought me to F#. Maybe this can serve as a starting point for people who have similar preferences and don't know much about F# yet.

A big part that affects my choice of programming language is its type system and how error handling and optionality (nulls) are implemented.

That "if it compiles, it runs" feeling, IMO, isn't unique to Rust, but is a result of a strong type system and how you think about programming. I have a similar feeling with F# and, in general, I am more satisfied with my work when things are more reliable. Avoiding errors via compile-time checks is great, but I also appreciate being able to exclude certain areas when diagnosing some issue.

"Thinking about programming and not the experience" the author lamented in the blog post appears to be the added cost of fitting your thoughts and code into a more intricate formal system. Whether that extra effort is worth it depends on the situation. I'm not a game developer, but I can relate to the artist/sound engineer (concept/idea vs technical implementation) dichotomy. F#'s type system isn't as strict and there are many escape hatches.

F# has nice type inference (HM) and you can write code without any type annotations if you like. The compiler automatically generalizes the code. I let the IDE generate type annotations on function signatures automatically and only write out type annotations for generics, flex types, and constraints.

I prefer having the compiler check that error paths are covered, instead of dealing with run-time exceptions.

I find try/catches often get added where failure in some downstream code had occurred during development. It's the unexpected exceptions in mostly working code that are discovered late in production.

This is why I liked Golang's design decisions around error handling - no exceptions for the error path; treat the error path as an equal branch with (error, success) tuples as return values.

Golang's PL-level implementation has usage issues that I could not get comfortable with, though:

  file, err := os.Open("filename.ext")
  if err != nil { return or panic }
  ...
Most of the time, I want the code to terminate on the first error, so this introduces a lot of unnecessary verbosity.

The code gets sprinkled with early returns (like in C#):

  public void SomeMethod() {
  if (!ok) return;
  ...
  if (String.IsNullOrEmpty(...)) return;
  ...
  if (...) return;
  ...
  return;
  }
I noticed that, in general, early returns and go-tos introduce logical jumps - "exceptions to the rule" when thinking about functions. Easy-to-grasp code often flows from input to output, like f(x) = 2*x.

In the example above, "file" is declared even if you're on the error path. You could write code that accesses file.SomeProperty if there is an error and hit a null ref panic if you forgot an error check + early return.

This can be mitigated using static analysis, though. Haven't kept up with Go; not sure if some SA was baked into the compiler to deal with this.

I do like the approach of encoding errors and nullability using mutually exclusive Result/Either/Option types. This isn't unique to F#, but F# offers good support and is designed around non-nullability using Option types + pattern matching.

A possible solution to the above is well explained in what the author calls "railway oriented programming": https://fsharpforfunandprofit.com/posts/recipe-part2/.

It's a long read that explains the thinking and the building blocks well.

The result the author arrives at looks like:

  let usecase = combinedValidation >> map canonicalizeEmail >> bind updateDatabaseStep >> log

F# goes one step further with CEs, which transform this code into a "native" let-bind and function call style. Just like async/await makes Promises or continuations feel native, CEs are F#'s pluggable version of that for any added category - asynchronicity, optionality, etc..

With CEs, instead of chaining "binds", you get computation expressions like these: https://demystifyfp.gitbook.io/fstoolkit-errorhandling/fstoo... https://demystifyfp.gitbook.io/fstoolkit-errorhandling/fstoo...

Everything with an exclamation mark (!) is an evaluation in the context of the category - here it's result {} - meaning success (Ok of value) or error (Error of errorValue). In this case, if something returns an Error, the computation is terminated. If something returns an Ok<TValue>, the Ok gets unwrapped and you're binding TValue.

I have loosely translated the above example into CE form (haven't checked the code in an editor; can't promise this compiles).

  let useCase (input:Request) =
   result {
      do! combinedValidation |> Result.ignore
      // if combinedValidation returns Result.Error the computation terminates and its value is Result.Error, if it returns Ok () we proceed
      let inputWFixedEmail = input |> canonicalizeEmail
      let! updateResult = updateDatabaseStep inputWFixedEmail // if the update step returns an Error (like a db connection issue) the computation terminates and its value is Result.Error, otherwise updateResult gets assigned the value that is wrapped by Result.Ok
      log updateResult |> ignore // NOTE: this line won't be hit if the insert was an error, so we're logging only the success case here
      return updateResult
   }
In practice, I would follow "Parse, don't validate" and have the validation and canonicalizeEmail return a Result<ParsedRequest>. You'd get something like this:

  let useCase input =
   result {
      let! parsedUser = parseInput input
      let! dbUpdateResult = updateDatabase parsedUser 
      log dbUpdateResult |> ignore
      return dbUpdateResult
   }

  let parseInput input =
   result {
      let! userName = ...
      ...
      return { ParsedRequest.userName = userName; ... } // record with a different type
   }
This setup serves me well for the usual data + async I/O tasks.

There has been a range of improvements by the F# team around CEs, like "resumable state machines" which make CEs execute more efficiently. To me this signals that CEs are a core feature (this is how async is supposed to be used, after all) and not a niche feature that is at risk of being deprecated. https://github.com/fsharp/fslang-design/blob/main/FSharp-6.0...


Thanks so much for your detailed reply. This looks very cool indeed. I've had a couple of tiny projects in F# in the past that never went anywhere, but you're describing essentially all the parts I want in a programming language: early returns, binds/maps, language support for these features, defining your own keywords (not really, but kinda, with computation expressions)

Excited to try this out


Isn't that just Swift/Kotlin?


The problem with languages is they don't compose. In other words, one missing needed feature automatically invalidates the language entirely. Swift is targeted at Apple platforms and cross-platform is an afterthought. Kotlin targets the JVM, and while it is cool in concept, I hate it as a user (and Kotlin Native isn't nearly as mature). If I were considering something else at this stage I'd probably put my time into F#, but even it has its cons.

NOTE: By "compose" I mean that if I want feature A, B and C from lang X (has A and B), Y (has B and C) and Z (has A and C), but there is no way to get A, B and C in one language without creating a brand new one. I cannot mix and match features from different languages.


They don't compose what? Is this one of those 'monad is an endofunctor' deals?

(me not knowing might be the ignorant bliss that allows me to just do productive things in them regardless)

Edit: I understand from the updated comment

I've long accepted that no one language can offer all of the language features I'd want, but I also question if I'd even want that language if I got it


And if we're still talking about gamedev, F# would probably make the GC too sad.

(When are we getting a low latency collector in the CLR? But I digress...)


Yeah I'm not a game dev and don't write anything with low latency requirements. Almost all my code is in the "thoughput" camp.


It's any of:

Rust + (garbage collector)

Swift + (cross platform support)

Kotlin + (proper pattern matching)

Unfortunately, none quite hit that central sweet spot, IMO


Roc is looking very promising IMO


Well, Kotlin + (proper pattern matching) = Scala


Mojo soon


And Zig maybe?


Yeah I long for a language that has rust enums and pattern matching but none of the async or borrow checker.

Maybe I should just write unsafe Rust and see how I go...


Have a look at Scala.


I mean.. Java is there. Java 23 is really interesting.


When is Java getting value types? It has been talked about forever now.


And it will still take time to come. The whole engineering problem is how to introduce value types, and how to make classes that are clearly value types (like Optional) turn into value types, while at the same time not breaking the endless amount of JAR files in production when upgrading to a JVM with value type support enabled.

They would get it sooner by breaking the ecosystem, and as Python 3, Java 9, .NET Core have shown, not everyone would be racing to adopt the new shiny thingy.


As a java dev, please break backwards compatibility. Almost all of us are willing to spend time fixing a few lines and recompiling.


As polyglot dev that also does Java, I hope not, last January I deployed yet another Java 8 workload into production.


Yeah but if the older releases still got security updates, it'd be more acceptable I feel.


Java 23 is not final and I'm hoping this JEP makes it into 23:

https://openjdk.org/jeps/401

I don't follow the JEP process closely enough to know if it will be proposed for 23 or approved. But I think it's coming soon.


Good question. I think it must be this one: https://openjdk.org/jeps/401

I can't see it in the feature list of JDK 22 or even 23. Maybe it'll come in JDK 24?


Rust is in a separate class from Go, Swift, Kotlin, etc. The class it competes in is pretty much C, C++, and itself.

Yes, it's easier to write trivial code in Python than Rust. Yes, it's harder to manage memory manually than it is to let a GC handle it. I don't see the point.

Rust is a systems programming language. It's harder to write a server in it than it is to hack something together in Node, but it will also be faster and more reliable. Conversely, it's easier than writing it in C++ or C, while still being more reliable. That's the whole value proposition.


Swift has a systems programming subset, and an even smaller embedded programming subset.

Of course "smaller" is significant here; it has less features and you might not like using it anymore.


For Apple, Swift is their Rust, regardless of the world outside Apple's ecosystem thinks about it.

It is clearly stated on Swift's documentation, they already hold a couple of talks at C++ conferences about code migration, and is one of the reasons why nowadays they mostly focus on LLVM contributions instead of clang.


Oh my, this has to be my favorite quote in a blog in a long long time.

"... many if not most of the problems don't go away if one isn't willing to constantly refactor their code and treat programming as a puzzle solving process, rather than just a tool to get things done."

I have thought it was just me for a long time, but many of the popular styles of programming that we push definitely seem to require constant refactors in the pursuit of a solution. And I definitely see more tire spinning for the sake of the build than I do for whatever it was folks were building.

Great quote.


It is a great quote!

I think multiple refactors in pursuit of a solution can be a good thing. As your thinking/design evolves, so does the shape of the solution.

The main problem, I think, is that this goes against gamedev specifically. In gamedev, whether the solution is the best or the most resilient is secondary to extremely fast iteration and delivering something. Like the author says, sometimes "clunky but good enough for now" is what you need to get it over with (for the moment) and iterate rapidly over the gameplay elements you should spend most of your time on. Gameplay, not correctness or reliability or maintainability, is the most important thing in a game.


When I set out to learn Rust about a decade ago, I chose to write a game - a clone of "Empire" that I call Umpire.

It's a different task to re-implement an already-designed game rather than designing and implementing at the same time. Nevertheless I have run into a number of the difficulties mentioned in the article, and arrived at my own solutions - foremost passing around global UUIDs rather than actual `&` references, and enforcing existence constraints at runtime.
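
Roughly, the pattern looks like this (a simplified sketch of the general idea, with made-up names and plain u64 ids standing in for UUIDs, not Umpire's actual code):

  use std::collections::HashMap;

  type UnitId = u64; // stand-in for a real UUID

  struct Unit {
      name: String,
      hp: i32,
  }

  #[derive(Default)]
  struct Game {
      next_id: UnitId,
      units: HashMap<UnitId, Unit>,
  }

  impl Game {
      fn spawn(&mut self, name: &str) -> UnitId {
          let id = self.next_id;
          self.next_id += 1;
          self.units.insert(id, Unit { name: name.to_string(), hp: 100 });
          id
      }

      // Callers hold ids, never `&Unit`; existence is checked at runtime.
      fn damage(&mut self, id: UnitId, amount: i32) -> Result<(), String> {
          let unit = self
              .units
              .get_mut(&id)
              .ok_or_else(|| format!("no unit with id {id}"))?;
          unit.hp -= amount;
          println!("{} now at {} hp", unit.name, unit.hp);
          Ok(())
      }
  }

  fn main() {
      let mut game = Game::default();
      let id = game.spawn("infantry");
      game.damage(id, 25).unwrap();
      assert!(game.damage(id + 999, 5).is_err()); // unknown id is caught, not dangling
  }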

I've experienced the protracted pain of major refactors when assumptions baked into my data model proved false.

In some regards these refactors wore some of the shine off of Rust for me as well. BUT I'm still glad the game is implemented in Rust, exactly because of Rust's dual emphasis on safety and performance.

The AI I'm developing requires generation of massive quantities of self-play data. That the engine is as fast as it is helps greatly.

Rust's strength in ML means my AI training and game code can share important types, ensuring consistency.

The effectiveness of Rust for writing CLI tools (mentioned in the article) has lent itself to a number of game-specific command-line interfaces that are of high quality.

Rust's memory safety became critical once I decided to network the game. I don't want `umpired` to be any more exploitable than it needs to be.

My constraints have been very different than the OP's; obviously it makes sense for their studio given their experience to move away from Rust. But I think Rust still has a place in games.

* https://en.wikipedia.org/wiki/Empire:_Wargame_of_the_Century * https://github.com/joshhansen/Umpire


> Rust's strength in ML

Most of ML frameworks that I know are implemented in Python and C++. I tried looking at ML in Rust a few years ago and didn't find anything useful. Has it changed?


You can use libtorch directly via `tch-rs`, and at present I'm porting over to Burn (see https://burn.dev) which appears incredibly promising. My impression is it's in a good place, if of course not close to the ecosystem of Python/C++. At very least I've gotten my nn models training and running without too much difficulty. (I'm moving to Burn for the thread safety - their `Tensor` impl is `Sync` - libtorch doesn't have such a guarantee.)

Burn has Candle as one of its backends, which I understand is also quite popular.


I agree with a lot here, but I think the author is overplaying "get things done fast" or underplaying "stable, performant code". I like indie games, but I've played enough games that crashed if I look at them wrong or chug despite being low poly early 2000s things that I now hesitate to buy indie games. Some of the examples seemed like maybe rust was preventing a weird unexpected feedback or clobbering iteration state or whatever.

I don't think the author disagrees here and is mostly talking about awful runtime alternatives (refcell, etc) but I just wanted to say it for balance.


> As far as a game is concerned, there is only one audio system, one input system, one physics world, one deltaTime, one renderer, one asset loader.

I thought this way when I was doing Java dev around 10 years ago. I thought it excused the singleton pattern. I was wrong!

You should always be able to construct an object by explicitly passing dependencies to it. Especially for testing.

It really is no fun if your renderer starts talking to your asset loader and timer directly.
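
A sketch of the alternative (made-up types, obviously):

  struct AssetLoader;
  struct Clock;

  struct Renderer<'a> {
      assets: &'a AssetLoader,
      clock: &'a Clock,
  }

  impl<'a> Renderer<'a> {
      // Dependencies arrive through the constructor, so a test can hand in
      // fakes instead of the renderer reaching for a global singleton.
      fn new(assets: &'a AssetLoader, clock: &'a Clock) -> Self {
          Renderer { assets, clock }
      }

      fn draw(&self) {
          // would pull meshes from self.assets and a timestamp from self.clock
          let _ = (self.assets, self.clock);
          println!("drawing a frame");
      }
  }

  fn main() {
      let assets = AssetLoader;
      let clock = Clock;
      let renderer = Renderer::new(&assets, &clock);
      renderer.draw();
  }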


People should get more into integration tests. If you start out thinking you need to separate everything for unit testability, you instantly get architecture astronaut-ism, where your architecture is entirely based on fake testability instead of the thing it's actually meant to do in production.


All praise our almighty lord that is dependency injection


No. You don't need the ability to DI every single piece of functionality. And you can still do DI with global state, just with less granularity.


Unit testing in Game dev is largely useless.


> wait I can't add this new thing because things will no longer compile, and there's no workaround other than code restructuring

I definitely think that's a great feature. I want to learn on day 2 that the design is a dead end, not on day 101 when I ship on day 100 and there was a race condition on day 2 I never noticed.

But the thing about gamedev (I guess - I'm not a game developer) is that the code being great and doing what you hope it will do isn't 100% of the job as it is in other disciplines. In gamedev you may want to run the code, and the way it runs (fun, feel, whatever) might be bad even though it compiles, works according to a spec and so on. So while I'm usually happy to write code for a week and never run it - game development feels like it's all about the iteration.

That said, game development is also game engine development. And Rust seems absolutely perfect for engine development (you need "fearless" concurrency and performance and there are zero mainstream languages that will do that other than rust). For people who feel it's too rigid or hard to iterate with perhaps hybrid could work. Like Rust + Lua or something sounds like it could be worth trying.
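
(Roughly the split I mean, sketched with a plain Rust trait standing in for wherever the Lua binding would sit; all names are made up:)

  // Engine side: stable, compiled once, owns the state.
  struct World {
      player_hp: i32,
  }

  // The boundary the fast-iteration layer plugs into; in a real setup this is
  // roughly where a Lua (or other scripting) binding would sit.
  trait GameScript {
      fn update(&mut self, world: &mut World, dt: f32);
  }

  // Gameplay side: the part you want to change ten times an hour.
  struct Prototype;

  impl GameScript for Prototype {
      fn update(&mut self, world: &mut World, _dt: f32) {
          world.player_hp -= 1; // tweak, run, tweak again
      }
  }

  fn run(mut script: impl GameScript) {
      let mut world = World { player_hp: 100 };
      for _ in 0..3 {
          script.update(&mut world, 0.016);
      }
      println!("hp = {}", world.player_hp);
  }

  fn main() {
      run(Prototype);
  }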


I think the point is that there are few "code related" dead ends in game code with good game play that can't be dealt with using enough effort. There are plenty of "game play related" dead ends that no amount of clean code can help out.

To that end, whatever can help you explore the game play the fastest is what you want.


Quick iteration perhaps isn't the biggest strength of C++ either. Rust does have some friction when it comes to "I'll try this with a dirty impl and if it flies, then I can make a clean one later". I guess whether that friction is worth it will depend on how much faster you get at other things, e.g. refactoring without spending a lot of effort worrying about introducing hard-to-spot bugs like races, or - worse - having to spend valuable time fixing those bugs instead of adding fun to the game.


This is fair. I'm always curious why so much effort goes into keeping a game's codebase in a single language. It seems far more useful to move the core of a game's engine behind an interpreter loop and build on top of that, with all of the affordances one usually gets from that.


> game development is also game engine development

not necessarily. Loads of games and game developers do not engage in any engine development at all, they just use an off the shelf engine and make their game, treating engine developing about as close as web devs treat database development.


I've become wary of commenting on articles that mention the pros and cons of various languages, but I still find it strange that so many people are so strongly focused on what their favourite language can do (usually better than others), instead of the project they're working on. When it should be the other way around.

The joke he mentioned about having 50 engines written but only 5 games certainly rings true and I don't think the language is the main problem preventing people from getting their projects done..


The hardest part of a project is finishing it. I think the main issue is the fun problems to solve happen very early in the project, and once those are done it becomes incredibly tedious and boring and I usually lose focus until the project dies. It's difficult to maintain motivation.


Interesting, I almost find it the opposite right now. Learning the engine is a pain in the ass -- it's not particularly hard, just tedious to learn all the APIs and quirks -- and then when you're initially building the thing, "it's not fun yet" for quite a while. But then once you have the fundamentals down, you can add more abilities and characters and other features, and that's the fun shit.

I was working on the AI last night, and since I already had one functioning AI agent, it was pretty easy to spin up variations that behaved in moderately different ways, which was very fun!

I've only been dabbling though, and still sort of in the prototype stage, not quite a full game yet but getting there. Maybe I'll feel more like you suggest deeper into development.


I think the period between having a playable alpha and a polished release is the part people hate.

Or just grinding out content to make the game longer.

I hope you enjoy the process and succeed as a game dev.


Thanks!

Right now, the idea of creating new content being "grinding" baffles me, but that's as a hobbyist developer of course. I'm sure I'd feel different if I was in a big company doing it.


Right. If someone could come up with a pill or something to maintain motivation and make all the bugs and hairy annoying details feel fresh again, just like the feeling of starting over, I would certainly part with my money. But there's no such thing unfortunately.


Adderall?

I'm only half-joking :)


IMO one language can sidetrack you more than another. It might not be the main problem but a language that gets in the way for your usecase causes you to focus on the wrong problem. Making a good game is really hard and really needs you to focus outside the tech.


Totally agree

My unpopular programming opinion: languages aren't that interesting to me

I'm much more interested in the problem being solved and algorithms in the abstract sense


Yep.

Bikeshedding, how many angels can dance on the head of a pin, and https://xkcd.com/927/ are some ways of loosely describing what you said.

https://en.m.wikipedia.org/wiki/How_many_angels_can_dance_on...

>but I still find it strange that so many people are so strongly focused on what their favourite language can do (usually better than others), instead of the project they're working on.

Yes, if they are so convinced of that, why are they not back in the office or at home, working busy as a beaver on their project, using that great programming language, the benefits of which they extol?

Smells fishy to me.

Oops.

Do people see what I did there? ;)


This is a very brave post to write given how incendiary responses to rust criticism can be, but this matches my experience entirely.


I think I just read about 10 versions of this comment on this page, and definitely not a single response to the criticism that could be described as incendiary. I don't think I even saw a single comment just now that fundamentally pushed back on the premise of this article, let alone in an incendiary way. It's early yet, and maybe this thread will look very different in a few hours though?


The author maybe somewhat hit on the reason for this in the article, where they mentioned that they're already seeing some of the rabid, toxic Rust proponents moving on to the next "hot" thing and doing their thing there. So maybe after a few years of Rust we've arrived at the turning point where enough of those types of people have finally moved on and the Rust community has significantly changed.


I don't think that's the case. N=1, but I'm usually a quite staunch, and occasionally incendiary, proponent of Rust, because the arguments against it / criticisms of it I usually see seem fundamentally misguided or even disingenuous to me — whereas in this thread, I've been only agreeing, because the criticisms are fair (I agree Rust isn't built for, and is quite bad at, prototyping, fast iteration, flexible code, etc), if I think a bit overblown (I think many of the patterns the author complains about being forced to use like command lists and generational arenas are very good). That could be the difference you're seeing, IMO.


Perhaps because the article is about how Rust isn't the magic bullet to everything, and a few people have commented agreeing with the article, others feel more willing to comment their own Rust isn't perfect opinion as well.

If you go into the comment section of a pro-Rust article, where the first few top-level comments are also pro-Rust, the responses to people expressing a negative attitude about Rust tend to (in my experience) be different.

This phenomenon certainly isn't exclusive to Rust (or HN). It happens all the time, especially when a prolific commenter is among the first few comments. It can set the tone for the entire comment section.


Sounds like a forum that scrambles comments could be interesting.


I assure you it happens, but the people targeted this way usually quickly learn what is ok and what isn't to say, especially on Rust's subreddit. If you wanna see examples, look at my Reddit profile (same username). I dared to say Bevy was full of hype and false promises and that the money they get would be better spent elsewhere. And look at the hate I received.

One way I've seen to reduce this is prefixing any posts with "I am not criticizing any engine in particular", even if it's blatantly obvious because the criticism only applies to one.


I guess I interpreted the comment as meaning that the incendiary responses were going to be seen here. I would expect incendiary responses to anything I post on reddit...


> incendiary responses to rust criticism can be

I've not experienced this. Do you have examples of the rust community flaming someone for having negative opinions about the language?


Based on what I've seen, various forms of censorship and suppression are often employed in such cases, rather than outright "flaming" or other discussion-based approaches.

It really depends on where and how the discussion is taking place, and what censorship methods the website/platform/medium involved offers.

Sometimes users are just outright banned or shadow-banned, if those happen to be options.

Sometimes forum threads, bug reports, or comments are deleted.

Sometimes the discussion remains accessible, but is stifled in some way. This includes closing/locking forum threads or bug reports, or otherwise severely limiting participation in such discussions to a very small and isolated group of people. If down-voting/reporting systems are present, sometimes they're used to limit the visibility or prominence of such discussion.


Ok, do you have a concrete example of this?


I haven't rigorously tracked all of the instances I've seen of this happening over the years, but I've tried to quickly find some more prominent examples for you.

This bug report, for example, has various "This comment has been minimized.", "rust-lang deleted a comment from ...", "rust-lang locked and limited conversation to collaborators" interference:

https://github.com/rust-lang/team/pull/671

A Reddit thread discussing the situation from that bug report mentioned above has numerous "[removed]" comments and down-voted comments:

https://old.reddit.com/r/rust/comments/qzme1z/moderation_tea...

When Rust is discussed here, it's common enough for reasonable and relevant Rust-related comments to be voted down, sometimes severely. These threads have some examples I quickly found via a search of high-activity Rust submissions:

https://news.ycombinator.com/item?id=24334731&p=2

https://news.ycombinator.com/item?id=24343867

https://news.ycombinator.com/item?id=23802674

https://news.ycombinator.com/item?id=26812047&p=2

https://news.ycombinator.com/item?id=29488336

https://news.ycombinator.com/item?id=11340100

https://news.ycombinator.com/item?id=24337001

Here's an example of a recent submission on this site for an article very reasonably and thoroughly questioning Rust. It got some attention, and now it's currently marked as "[flagged]":

https://news.ycombinator.com/item?id=40091427

Keep in mind that strict "moderating" (ie, censoring) has been an integral part of the Rust community's identity for many years now via its Code of Conduct and Moderation Team -

https://www.rust-lang.org/policies/code-of-conduct

https://www.rust-lang.org/governance/teams/moderation


Thank you very much for digging these up. I don't have anything more to add these are good examples of bad behavior.


Not parent, but take a look at these:

https://news.ycombinator.com/item?id=32117148

https://news.ycombinator.com/item?id=39641552

These threads are absolutely painful to read. The Rust community/leadership would not do anything about it because Rust thrives on such "devotion".


Damn, that second one is excruciating to read. I wonder if they're aware of how they put people off the Rust ecosystem with their rabid defensiveness.


Check this response to the article within these HN comments: https://news.ycombinator.com/item?id=40177534

Not actually flaming, but quite condescending towards the article's author: not even properly reading the article before jumping to conclusions.

This is on HN which is generally more neutral towards Rust. I imagine in Rust circles these types of responses would come out a lot more.


I'd go read their mailing list and Reddit forums, especially when people run into issues doing stuff that's very simple in other languages. I've never seen a more toxic programming community.

Hopefully they calm down, or really get drowned out, once there are a real number of jobs for people using Rust. Right now the evangelists outnumber the rank and file who are just using the language to get work done.


I'm active on both and have not seen this behavior.

In fact, my experience has been the polar opposite, the rust community has been very friendly and accepting of critique.

So again, I'm going to ask for an example of rust language fanatics frothing at a criticism. If it's such a community problem this should be easy to find correct?

Here's the OP's article on /r/rust: it's got a fair number of upvotes and the top comments are all really positive towards the article. That's what I've seen as typical in the Rust community.

https://www.reddit.com/r/rust/comments/1cdqdsi/lessons_learn...


It may not be flaming, but the author brings up a particular quote repeatedly. "You just don't get it/have enough experience with it yet."

I've seen this everywhere. This is an obnoxious, lazy thing to say to someone. It's a go-to for many "enlightened" languages that have small ecosystems and something to prove. The only response is to ignore it entirely or, like the author did, dedicate years of your life just to see if there's something to it. This is not okay. Life is short, and we lean on other developers' experience to keep us from wasting our time.

If someone posts a topic wondering if X language is bad for something, it's an earnest question. Not a time to flex your dedication to the cause.


If it helps, they can't possibly be as toxic as Lisp programmers used to be, where more or less any online conversation would start with someone new asking a question and Erik Naggum replying that they were a moron who should die.


Why is it that, say, C programmers, as such, don't get painted according to what has historically gone on in the comp.lang.c newsgroup?


Probably because there's so many more of them. Maybe because being called not a real UNIX programmer feels different from being called a Blub programmer.


Maybe I should ask: why should someone interested about Lisp today have to hear stories about some Erik Naggum who posted to a Usenet newsgroup, and died 15 years ago?

Let's assume that the newsgroup is important. Legendary Lisp hacker Alan Bawden posted there just last week or so. Nobody ever mentions him.


Every other one I've met in my life was nearly as unpleasant. Fewer death threats but they clearly all thought they had 200 more IQ points than you because they could write a macro. Thus the term "Lisp weenie".

In this case I think people should learn from history and that specific examples are the best way to do that. It's, like, effective pedagogy or whatever.

Supposedly Clojure people are nice though.


On the flip side, if you reveal to people that you are involved in Lisp, the experiences are varied.

All too often you get ignorant remarks, jokes about parentheses or "contents of address register", or outright ridicule.

If you say anything frank at that point, chances are good you might come off as an asshole.

Just smile, nod your head, change the subject, and never return to it.


"toxic" is also to generalize from one person / one forum, to an extremely diverse group, many which never used Usenet


Lol, I had a similar example with perl as a young teen programmer.

They've gotten way nicer.


That does not mirror my experience.


Rust hasn't had a mailing list for roughly a decade now...


yes, keep reading this section


You were flagged for no such thing.

You were flagged for a pointless quip about "woke"ness. Other people repeated more civil and reasonably argued forms of your same point about the language and its community and received no such downvotes.

No need to play martyr.


[flagged]


If you blame the wrong reason for downvotes, expect to be corrected. Your complaint here is invalid.

Try criticizing Rust next time, if you want to demonstrate that criticizing Rust gets downvotes. You were throwing out insults in your very first comment.


This is a textbook example of poisoning the well. [0] We see it used in every discussion about pros and cons of a language on HN.

It's some variation of "People who like this language can't handle criticism/are part of a cult/etc." The idea being that this will preclude anyone from responding to a criticism, because that would confirm the comment.

[0] https://www.logicallyfallacious.com/logicalfallacies/Poisoni...


I found Jonathan Blow's rant about Rust game development interesting: https://www.youtube.com/watch?v=4t1K66dMhWk. He adds interesting perspectives to the discussion, such as how the language makes Rust game developers resort to arrays, and their so-called Rust point of view.


The blog post reminded me of a quote from Jonathan Blow as well. I forget the exact wording, but basically he said that Rust makes you treat every state of the project as production ready (e.g. free of memory safety bugs), but in game development, most of the time the project does not need to be production ready, and for a good reason (rapid prototyping). You just have to fix the really bad things (crashing bugs) before shipping.


There is a better, more recent clip of him where he explains 'had we chosen Rust, The Witness would never have been finished'.


Well the problem with Jonathan's argument here is that he's spent the past decade mostly ranting about Rust and working to make a perfect game programming language, instead of making games.

So it turns out that even if his opinion on Rust is correct, he would still have been much more productive using it than spending a decade trying to build his own language…

(But he already shipped his masterpiece and he's a millionaire, so he gets to choose his full-time hobby as he wishes)


He is actively developing his new game in parallel to creating the language.

Not to mention smaller projects like 'Braid- anniversary edition'.


Isn't the traditional advice that if you try to write both a game engine and a game that you'll get neither?


Braid and The Witness, his last two games, both used custom engines written in C++.


Then it's a good thing not everyone listens to traditional advice.


I mean, it's not like there are no games in existence that shipped with a custom engine.

Even in hindsight it's hard to judge whether building your own engine was good or bad decision, and we are nowhere near "the hindsight" level of knowledge.


Yeah, and it's taking him YEARS to implement a simple grid-walking Sokoban clone, and Braid anniversary edition had to be delayed.


Last I heard, the Sokoban game has a ridiculous number of puzzles on it. Can't find a source but I seem to recall hearing that it would take 400+ hours to finish it all. So.. I don't think it's entirely unreasonable it's been taking this long.


>spent the past decade mostly ranting about Rust and working to make a perfect game programming language, instead of making games.

I wish. As far as I can tell he made a single hour-long video shitting on Rust and now he's the enemy of the cult. That's hardly spending the last decade being mad about Rust.


Why should he be making more games, instead of building his vision of a perfect language?

A machinist who retires from running lathes and goes into the lathe-making business has reached the pinnacle of that profession.


Writing games in C++ feels horrible, which is a large part of why he wrote the language.


I mean, he does like a good rant lol. But this seems like a bad take. The Witness came out ~8 years ago, and Braid came out ~8 years before that. Braid Anniversary is launching next week, and he's actively developing his language and next game (occasionally streaming). "He's just resting on his laurels now" is, I think, clearly wrong.


Interesting read! If I had not picked Elixir + Godot for the multiplayer game I'm making, then I would've gone with Rust for the whole thing. The old naive version of me would've tried doing it in C++ + Unreal but I knew better this time around.

I think multiplayer game devs are sleeping on Elixir! It has made the network side of things so much easier and more intuitive, with fast prototyping and built-in monitoring - so many lifetime issues are easily found and addressed. I'm pairing Elixir with Godot; Godot is used for the frontend game client. And it's crazy, because I thought the game client part would be the "hard" part, as it would be a new skillset to learn, but Godot makes the actual game part very easy. GDScript is easy to learn, and the way Godot uses "signals" is very similar to the backend code in Elixir with message passing, so it's not a huge cognitive shift to switch between server and client code.

I get that BEAM doesn't lend itself well to highly computational tasks, but I just don't see how that's an issue for many types of multiplayer games. If you aren't making some crazy simulation game, then most of the backend computation is around managing client state and doing "accounting" every tick as inputs are processed. The most computational task I've had is server-side NPC pathfinding, which I was able to quickly offload onto separate processes that chug along at their own rhythm.


I would love to read more about these Godot/Elixir adventures. Do you have a blog or a repo I could look through?


No blog unfortunately, the notes are all on paper. I have a github page for the game where I ramble a bit: https://github.com/mikhmha/SWARMMO

But I'm planning to release the game for testing next month! It's a browser "MMO" game too, so it's going to be easy to try out. And then I'll have time to write some more detailed technical notes online.


> The most fundamental issue is that the borrow checker forces a refactor at the most inconvenient times. Rust users consider this to be a positive, because it makes them "write good code", but the more time I spend with the language the more I doubt how much of this is true. Good code is written by iterating on an idea and trying things out, and while the borrow checker can force more iterations, that does not mean that this is a desirable way to write code. I've often found that being unable to just move on for now and solve my problem and fix it later was what was truly hurting my ability to write good code.

The latter part of this is true for any strongly statically typed language (with Rust expanding this to lifetimes), which negates the beginning of this paragraph -- once you get things compiled, you won't need to refactor, unless you are changing major parts of your interfaces. There are plenty of languages that do not have this problem because it is a design choice, hardly something Rust can "fix", it's what makes it Rust.


This is the opposite of my experience with other strongly typed languages. They're easier to refactor, because when you change the types, say you delete a field, everywhere that field was used is a compile error. Clean them up and on your way.

The borrow checker is an entirely different beast. People forget that safe Rust allows a subset of programs. Finding the subset which does what you want can range from easy, to hair-pullingly gnarly, to provably impossible.


The author is asking for a "give me a break" feature. I would say that in a strongly typed functional language, this is akin to mutating objects. The author seems to wish for an unsafe option to locally turn off the borrow checker. Is that something Rust could not offer?


You can always dip into raw pointers and come back up for a reference, or e.g. do a transmute to a static lifetime. Absolutely not okay according to the language rules, but it will compile and will probably also run without issue if you're not doing something wrong in your code (e.g. keeping a reference to a string while you also mutate it).

I'm actually surprised that the author didn't seem to consider this much of an option.
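
For what it's worth, here's a minimal sketch of what that escape hatch looks like (purely illustrative, not from the article; it becomes undefined behavior the moment the original value is mutated or dropped while the "extended" reference is still alive):

    use std::mem;

    fn main() {
        let s = String::from("hello");
        let short: &str = &s;

        // Tell the borrow checker this reference lives for 'static so it stops
        // tracking it. This compiles, and only behaves because `s` is neither
        // mutated nor dropped while `long` is still in use.
        let long: &'static str = unsafe { mem::transmute::<&str, &'static str>(short) };

        println!("{long}");
    }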


My experience is that the ecosystem is a mess, have hit winit, wgpu, and countless bevy bugs, iteration times are abysmal, documentation is nonexistent. In the time it would take me to make a game in popular Rust tooling I could build the game and engine from scratch in C and also have something that would crash less.


> documentation is nonexistent

You know, I think this point is important to get right: there are generally docs, Rust does a very good job of making it easy to write docs.

What doesn't always exist are guides that explain how to piece things together. Sometimes you wind up needing to really know the inner platform to piece together things in Rust, and while I love the language, this is one area where the community could improve.


Yeah, in general with large and powerful libraries or frameworks, I find that pure API documentation, even if very thorough and well explained at the individual function or data structure level, is simply not enough. I also want a reference-manual type of experience, where the API reference is integrated with explanations of the reasoning behind how the framework was designed, how to actually think about using it, and examples of many common things you might want to do that integrate well together. The gold standard for this, in my opinion, is the OpenGL Red Book.


This, and the fact that correctness, safety, and stability aren't quite as important in game development (or even game engine development) as they are in other fields where Rust is applicable, is why I purposefully and happily use the powerful, featureful, well-established, copiously documented C or C++ libraries I need, instead of the Rust alternatives, for almost everything. It works extremely well for me because I get to leverage the power and amazing ecosystem around things like Dear ImGui or SDL2 or OpenGL or PhysX, while still using Rust, which gives me essentially a cleaner, more modern version of C++ with all of the features I love from OCaml, in a way that restricts any weird crash or memory safety errors to the places where I interface with the lower-level libraries, and sometimes not even there, depending on how high-level the bindings are. It's honestly pretty nice for me.


> By the time the Rust developer is finished with their refactoring, the C++/C#/Java/JavaScript developer has implemented many different gameplay features, played the game a bunch and tried them all out, and has a better understanding of which direction should their game be taking.

Man, slower than C++, that's pretty damning.


In my experience, fundamentally when you're starting a software project, you need to make a strong up-front decision between two things:

1. I am using technology in order to build this thing.

2. I am building this thing in order to use this technology.

Developers often fall in the (2) camp but don't admit it. There's an allure to using the new, sexy tech that will solve all their problems, whether that's Rust, Kubernetes, LLMs, etc.

If you're in the (1) camp, you should stick with what you know; and if you know that what you know isn't enough to build the thing, you should use whatever is most common and straightforward, not something off the beaten path.

Games seem to be the biggest trap, because solo devs often end up building a game engine when they set out to build a game. If you really want to build a game, just use Unity/Unreal/Godot, I promise it'll go better for you.


I know rust, I don't know game development (I've dabbled slightly). If I choose to build a game I either need to make it work in rust* or I need to learn a new language (Unity -> C#, Unreal -> blueprints, Godot -> gdscript).

So your advice to "just use Unity/Unreal/Godot" is the opposite of your advice "you should stick with what you know" in my case. I suspect the former is good advice, and the latter is therefore wrong.

* For the sake of argument, we can pretend I only know rust. In reality I know a fair number of other languages as well, but the list doesn't happen to include C# or "random game engine specific scripting language", which seems to be the options if we're going with an established engine for big 3d games.


This falls in the "you know that what you know isn't enough to build the thing" bucket, presumably. Even if you're a Rust expert, do you know how to manage game asset content pipelines? Sound and music? Have you done graphics programming at all in Rust? How are you going to store levels in your game, and how are you going to make them? How are you doing multiplayer? etc...

You're going to have to learn something new, and it's a bit of a judgment call, but picking up C# or gdscript given that you already know programming should be straightforward compared to re-implementing all of those things yourself in Rust.

Unless, of course, you do know a bunch of great Rust game development libraries that solve all those problems--in which case yeah, building a game in Rust might be the best choice. It's not impossible!


> but picking up C# or gdscript given that you already know programming should be straightforward compared to re-implementing all of those things yourself in Rust.

Right, this is practically my point. I suspect that the tools available from those languages mean that learning one of them would more than pay for itself in the course of developing a (single) game. Many many times over really.

Like, yes, I've dealt with both sound, basic graphics programming (though I'd need to learn a bit more to make a modern looking 3d game), networking, ... in rust. If I had to program my sound system and graphics engine from scratch myself I'd do it in rust (and I believe I'd be more productive in rust than I'd be in <other language> while doing so). But I don't have to do everything from scratch, and the best not-from-scratch versions aren't in rust, and the cost of switching to something I don't know just isn't that high.

Also OP is definitely right that rust has some anti-features that would be pain points for game development.


Perhaps the more appropriate advice is : Use the right tool for the job.

Use C++ for writing a high performance library or a database engine.

Use Go or Java for writing a server.

Use C for writing a kernel module.

Use shell scripts for automation.

Use python for trying out ML ideas or heavier duty scripts.

Use Rust for ... I'm not quite sure what it's the right tool for yet. I suspect it's trying to become the right tool for all of the above and not succeeding much in practice.


> a high performance library or a database engine

> a server

Rust is a great tool for these. The focus on performance and reliability (versus fast iteration) is a perfect fit for these domains specifically.


Server, I think you could be right.

Database engine, no way. Look at the internals of modern ones. Very very intricate pointer based data structures that you'll pull your hair out replicating with Rust.

Again, it's not like it's impossible. You can certainly accomplish it by treating it like a puzzle but using the right tool will have better results.


Use Rust if crashes or memory bugs are not an option. For everything wasm, Rust is much more pleasant with good libraries than the competition.


Wasm seems more convincing to me than the magical 'no crashes or bugs' promise.

Here's my wasm use case: tell me how I can use Rust.

I have a command line tool written in C that ..say.. takes strings and outputs strings.

How would I go about making a usable REPL out of this in Rust and wasm without rewriting the tool?


But look at the disaster that C++ was for Cloudflare, and their switch to Rust.

This is precisely the argument given against Rust for video games: too much typing induced by memory safety, which is too restrictive.

If your C code already works, there may be little reason to rewrite it. The advantage of Rust for wasm is the easy-to-use packages (a real pain in C++), and the ease with which you can make a wasm project with wasm-pack, which generates the wasm along with the JS and TS interfaces.

There really are a lot of libraries that support wasm; it's even raised as a problematic point in the article regarding Bevy, where wasm support (and therefore WebGL) limits the API.
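
As a minimal sketch of the wasm-pack workflow mentioned above (assuming a library crate with wasm-bindgen as a dependency; the function is just an illustrative stand-in for a "strings in, strings out" tool):

    // lib.rs: running `wasm-pack build --target web` produces the .wasm binary
    // plus generated JS and TypeScript bindings for this function.
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub fn shout(input: &str) -> String {
        input.to_uppercase()
    }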


It's a C++ replacement, just not yet there for game dev (and maybe it never will be).


A C++ replacement must have really strong and seamless C++ interop to be considered by anyone currently using C++. You can't have a C++ replacement that ignores existing C++ users and libraries, no matter how good the language is.

Swift from Apple and Carbon from Google are stronger contenders at this point.


No language from Apple / Google / Microsoft / whatever can ever be a serious replacement for C++. When the development of the language is dominated by a single entity, the risk that the interests of that entity override those of other users is simply too high.

Vendor-specific languages are fine, if you are developing something for that vendor's ecosystem. But if you don't want to lock your code to a specific ecosystem, an independent language such as C++ or Rust is a better choice.


C++ is only independent on the surface, as all the big players sit at WG21, and it goes where their votes say it goes, plus what actually gets implemented in compilers (none of them is 100% ISO compliant; each has minor compliance issues).

Same applies to C, stuff like C23 is decided by who gets to join WG14.

In both cases, someone has to buy the final standard from ISO.


I don't disagree.

Pragmatic considerations tend to win over ideological ones though.

Consider Java's success.


I was thinking more about pragmatic issues, and Java is a good example of them. It's a widely used language, but it's also a huge failure.

25 years ago, Java was supposed to be the new general-purpose language you could use for everything. Universities rushed to teach it to everyone. There was a lot of initial success, but then Java started losing ground. The direction the language was going was not good for many applications. And then lawyers got involved, which didn't help.

C++ is a general-purpose language. It's widely used, because it's widely used. The language is good enough for many tasks, and you can probably find the libraries you need and people familiar with the language. If you work in a niche with no specific reasons to use a particular language, C++ is often a good choice.

Rust is not there yet, because it's a new language with limited library support. But it does have momentum. The biggest threat to Rust as a general-purpose language is probably async. When there are strong interests to develop the language and the ecosystem for specific applications, other niches often suffer.


Powering 80% of the mobile phone market, Amazon infrastructure, the IDEs everyone around here likes to rave about, is good enough success.

Additionally, everyone and their dog seems keen in replicating Java application servers with Kubernetes and WASM.


Learning a new language is basically trivial relative to the effort of bootstrapping everything yourself to compensate for a lacking ecosystem, or the effort of banging your head against the fundamental unsuitability of a tool for a job.

Anyone who's learned one or two languages should be able to pick up the basics of any of the standard ones pretty much instantaneously.


Exactly


When all you have is a hammer, everything looks like a nail. When all you have is programming expertise, all your game production obstacles look like programming problems.

I think everyone in games has met an “engine person” who spends a lot of time iterating on tech, but never quite getting to the creative expression that got them in the game. I think part of it comes a bit from mythologizing breakthrough games like DOOM, where cool technology made something completely fresh. We begin to think that emulating id Software is how you make compelling art, ignoring the latter half of Kushner’s novel.


Well said, I am weary of all the 'game' programmers that just fetishize working on the tech and particularly rendering.


People like to program on tasks that aren't given to them, to practice their craft in a less restricted form than professional life usually allows. So they make things for themselves, and it's natural that those things are what they are familiar with and enjoy. So you often meet the programmer, almost never from the game industry, who wants to make his or her own game engine. It's about as likely to be a productive endeavor as making your own spreadsheet program.


> you need to make a strong up-front decision

Can one always realistically do so? I suspect the underlying unspoken assumption is that one must be ideally informed about all the possible potential pitfalls and gotchas they may face when using any given technology. Aka having a very good (ideally, perfect) knowledge of the technology and its surrounding ecosystem.

It wasn't just once or twice that I picked some very promising library or tool only to learn that something wasn't exactly as I hoped (or as advertised; docs can lie, too) after I had already spent non-negligible time implementing something with it. Save for some teenage keyboard mashing a few decades ago I'm not a game developer, but I suspect this is a universal experience no matter the niche.

> Developers often fall in the (2) camp but don't admit it.

There's also a mixed approach, where people admit "I want to build this thing and use this technology, and I suspect they're a potential good match so I'm gonna try both at the same time". Any even slightly creative person must have an urge to learn and explore new things, even if they aren't exactly necessary for a task at hand. Checking on the promise, if it holds true - sometimes it does and you get a new tech you love, quite frequently it doesn't and you have a bad day.


> Can one always realistically do so? I suspect the underlying unspoken assumption is that one must be ideally informed about all the possible potential pitfalls and gotchas they may face when using any given technology. Aka having a very good (ideally, perfect) knowledge of the technology and its surrounding ecosystem.

Exactly. You need to make some choices up front about program design despite rarely having enough information to make the correct decision, or enough time to evaluate alternatives in detail. If we had known 15 years ago what we know now, our then-greenfield project wouldn't have been done this way, but we are still discovering things that the decisions we made 15 years ago make hard to do today. That is on top of all the existing things we know are wrong that we often cannot feasibly correct.

You have to make choices. Some of those choices will be impossible to undo without starting over after a while. Some of the negatives will take a decade to figure out. There is no way anyone sane will give you enough time to figure out what those negatives are for each choice, and even if they did, you would be dead before you finished.


I'm in the 1 camp. After several decades of C++ I know it very well. Well enough to find some of the things that those Rust and Ada people are saying about their respective languages intriguing. I'm at the point where I need to spend some time with each to see how/if they work with my problems in the real world. Because eventually you have to admit that while you can drive a screw with a hammer there are other ways and maybe it is time to learn how to use a screwdriver.


This is what I like to call the artist vs engineer dilemma.

1. An engineer solves problems, learning and using tools along the way.

2. An artist learns and uses tools for the sake of it.

Neither is wrong, and sometimes they benefit each other. I believe much of academia and research is heavy on the artist side. You just have to be clear on which one you are at any given point.

I'm not gonna use assembly for my $JOB where we need some basic web backend. That's not gonna stop me from trying on my free time tho.


How about "I'm building this thing and I want it to enjoy the unique combination of performance and memory safety offered by this technology"?

It's certainly close to (1), but it's also a perfectly rational way to be a The-Rust-Way fundamentalist, avoiding refcounting and unsafe, who appears suspiciously like (2).

The "rewrite in Rust" meme surely does not come from thin air and there is certainly some skill honing and challenge seeking involved. But perhaps a rewrite ever couple of decades isn't all that bad? And if it isn't, could there be a better time for it than "in Rust"?


Advice should be: do what you like and try to align it with your client (employer), otherwise you'll burn out rather quickly.


It's not easy to find an employer that'll agree with your preferences. I chose to find fun and value in the way I collaborate and reach end goals, not the tools. If the tooling in my dept sucks, I just treat it like a challenge and deal with it.


Healthy.


I've found Rust very pleasant for building little games. I've mostly been using SDL plus my own little shim so I can target the web through wasm & canvas.

I had professionally worked with C++ for a long time so getting comfortable with Rust wasn't too bad.

https://www.bittwiddlegames.com/ You can see a web build at https://www.bittwiddlegames.com/lambda-spellcrafting-academy...


I think these comments are fair. It's true that Rust is rigid.

I've had great success with Rust, but on projects where I knew exactly what I needed to build. Rust's focus on code correctness is great for maintenance of projects, where the priority is in keeping them stable and not causing regressions.

So while I'd say Rust is pretty quick for refactoring of something like a device driver, it's far away from the hot-reloaded time traveling live tinkering IDE.


You can feel the paranoia of the author, stating 20,000 times that it's his opinion, giving context, etc., just because he knows how the langstans will react.


I respectfully disagree with the author's title choice.

My first impression is, of course, that the issue is there is no production Game or GUI framework around.

The author seems to complain mainly about the choices of frameworks and how bad or opinionated they are. I agree. Even Egui is too opinionated, but it makes sense on some level.

It is no problem to use bindings to some software written in C++. Rust was created to solve this exact problem: rewrite big projects that were written in C++, by slow mutation in Rust.

Honestly, I would add further that until the Unreal Engine uses Rust, we should not expect widespread Rust adoption. It will likely start with a company creating its own really custom game engine, the game becoming a bestseller, and it will spread iteratively over the years from there. Or maybe there will be a better option beyond Rust at that point.

This is the status quo: https://www.youtube.com/shorts/_zwKHgtQpc8 Let us be realistic.

Beyond that: one should see Rust as writing C with someone watching over you to remind you that you need to know who writes each memory value. It takes work off you. Or it should. If it doesn't, then yes, that is a problem, and we/you are doing it wrong.

But yes, if you are doing something that the borrow checker complains about, in other languages, either that semantic difference would have been hidden, or you would be paying for it later.

There, the author makes a point that he wants the code to work now. That is possible, and you can hotwire bad code in Rust, too. But I am sure that code is why we end up with games like Jedi Survivor.

There is no fundamental inability of Rust to do the things the author demands. If you want dynamic loading, use https://crates.io/crates/libloading (and you don't even have to use that particular library). Do you want global state? I will disagree with you, but take a look at, e.g., how the Dioxus project does it. Again, I think that is always a terrible mistake, and people should really be thinking of an arena or a registry instead.
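
As an illustration, here's a minimal sketch of the libloading route (the library path and the `update` symbol are hypothetical; loading symbols this way is inherently unsafe, and the signature must match what the loaded library actually exports):

    use libloading::{Library, Symbol};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        unsafe {
            // Load a freshly rebuilt "game logic" shared object...
            let lib = Library::new("./target/debug/libgame_logic.so")?;
            // ...look up a function it exports...
            let update: Symbol<unsafe extern "C" fn(f32)> = lib.get(b"update")?;
            // ...and call it. Repeating this on file change is the usual basis
            // for hot reloading gameplay code.
            update(0.016);
        }
        Ok(())
    }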


As someone who's become a core contributor to Bevy lately, while also doing contract work in Unity on the side, I obviously disagree with the idea that Rust isn't up to the task of game dev. The grass isn't greener on the Unity side, with a mountain of technical debt holding the engine back. (They're still using Boehm GC in 2024!) Bevy is a breath of fresh air just because it's relatively new and free of legacy. Using Rust instead of C++ is just one part of that. Bevy has a more modern design throughout: for instance, it has a relatively straightforward path to GPU-driven rendering in an integrated system, without having to deal with three incompatible render pipelines (BiRP, HDRP, URP).

What I find more interesting is the parts of the article that boil down to "Rust isn't the best language for rapid development and iteration speed". And that may well be true! I've long thought that the future of Bevy is an integrated Lua scripting layer [1]. You don't even need to get into arguments about the suitability of the borrow checker: it's clear that artists and designers aren't going to be learning Rust anytime soon. I'd like to see a world in which Rust is there for the low-to-mid-level parts that need performance and reliability, and Lua is there for the high-level logic that needs fast iteration, and it's all a nicely integrated whole.

Long-term, I think this world would actually put Bevy in a better place than the existing engines. Unity forces you into C# for basically everything, which is both too low-level for non-programmers to use and too high-level for performance-critical code (unless you have a source license, which no indie developer has). Unreal requires C++, which is even more difficult than Rust (IMO), or Blueprints, which as a visual programming language is way too high-level for anything but the simplest logic. Godot favors GDScript, which is idiosyncratic for questionable gain. I think Rust and Lua (or something similar) would put Bevy in a Goldilocks spot of having two languages that cover all the low-, mid-, and high-level needs well.

As for the other parts of the article, I disagree with the ECS criticism; ECS has some downsides, but the upsides outweigh the downsides in my view. I do agree that Bevy not having an official editor is an ongoing problem that desperately needs fixing. Personally, I would have prioritized the editor way higher earlier in Bevy's development. There is space_editor [2] now, which is something.

[1]: https://github.com/makspll/bevy_mod_scripting

[2]: https://github.com/rewin123/space_editor


As for ECS, it's not really about upsides or downsides. It's about the fact that Rust effectively forces you to use ECS everywhere, because "normal" interacting game objects won't fly under the borrow checker (see the sketch below).

And no matter how many upsides ECS has, being forced to use it everywhere, rather than only when you want to, is the painful part.
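
A minimal, hypothetical sketch of that friction: the "natural" object-graph version needs two simultaneous mutable borrows into the same collection, which safe Rust rejects, so you end up restructuring around indices, which is already halfway to an ECS.

    struct Entity {
        hp: i32,
        target: Option<usize>, // index of another entity, not a reference to it
    }

    fn combat_tick(entities: &mut [Entity]) {
        for i in 0..entities.len() {
            // Holding `&mut entities[i]` while also taking `&mut entities[t]`
            // would be two mutable borrows of `entities`, so the index is copied
            // out first and each borrow is kept short-lived.
            if let Some(t) = entities[i].target {
                if t != i {
                    entities[t].hp -= 1;
                }
            }
        }
    }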


When will an editor be added?

That’s what’s holding me back from jumping into Bevy.

I actually think Rust is really hard, but I also think it would be beneficial to my career.


Right now it's blocked on cart rewriting the scene format.


Would it be possible to create a generic way to script Bevy? I'm sure there are a lot of people who are going to want to use C#, C++, or something else. I could imagine running a runtime in-process and Bevy communicating with it over a socket.


I mean, if you agree that the ideal for Bevy would be Lua integration, you kinda agree with the author that Rust itself is suboptimal (at least at the game scripting layer), don't you?


I think whether you prefer Rust or a scripting language like Lua for high-level game logic comes down to what your needs are and personal preference. There are reasonable arguments on both sides.


> There are reasonable arguments on both sides.

What are the reasonable arguments for using Rust[1] for game logic instead of a scripting language like Lua?

[1] Or C++, etc.


Rust, in my view, is easy to justify over C++: the Cargo ecosystem makes high-quality libraries accessible, you'll spend less time debugging crashes, the language is more modern so you don't have to deal with stuff like header files, etc.

Compared to a scripting language like Lua, the benefits of Rust are more situational. Rust code runs a lot faster, and it takes better advantage of parallelism. It also has no garbage collection overhead. Does that outweigh the downsides? It's entirely dependent on your game and which logic in particular you're talking about.
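
To make the parallelism point concrete, here's a minimal sketch assuming the rayon crate (the `Boid` type is illustrative): the per-entity update is spread across cores with a one-word change from the serial `iter_mut` version, which is hard to match from a single-threaded scripting layer.

    use rayon::prelude::*;

    struct Boid {
        x: f32,
        y: f32,
        vx: f32,
        vy: f32,
    }

    fn integrate(boids: &mut [Boid], dt: f32) {
        // `par_iter_mut` instead of `iter_mut`: rayon splits the slice across a
        // thread pool, and the borrow checker guarantees the chunks don't alias.
        boids.par_iter_mut().for_each(|b| {
            b.x += b.vx * dt;
            b.y += b.vy * dt;
        });
    }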


Gaming is C++ first and foremost. All other languages suck, except when used to script game engines (C# in Unity, etc.). There's no practical reason to choose Rust or anything else. I don't think Rust is particularly bad or good here. There's decades of work to catch up on. I don't see Rust becoming a truly great language for games unless it's blessed by Epic or Unity.


> There's decades of work to catch up on

First you gotta get OpenGL going, with its horrible stateful API, give up on it and go to DirectX9. Do a complete rewrite when DirectX10 comes out. Get your real-time lighting happening with shadow volumes, run into patent issues and get strong-armed into putting Creative sound into your game. Cycle between GLSL, HLSL, and Cg. End up switching to shadow mapping anyway. Drop Linux and Mac support. Start over with Vulkan/Metal.

I don't think Rust needs to relive most of that.


Exactly. Bevy has the advantage of being built using the "right way" from the start. This makes an enormous difference in the ease of hacking on rendering code.

Ironically, the main thing holding Bevy back is the bickering at the W3C. WebGPU is still not widely supported, so we have to support WebGL 2 (with reduced functionality in some cases), and that adds a lot of complexity.


> Bevy has the advantage of being built using the "right way" from the start

I think the article is precisely criticizing this type of comment... You make people believe that Bevy is some kind of safe bet for the future, that it took inspiration from the greats to build even better foundations for a game engine... And it seems common in the Rust community: make audacious, unverifiable claims to enroll other "believers". But it's easy to make claims when the tool itself has a fraction of the functionality of Unreal, Unity or even Godot. Heck, the last time I used it (about 2 years ago) there wasn't even any built-in physics. You have to install plugins from every corner; some require old Bevy versions, others require newer versions... It's seriously unusable to just "get things done".


I don't think anyone ever claimed that Bevy is finished.

As for audacious unverifiable claims: it's quite verifiable that Bevy never used DirectX 9, DirectX 10, shadow volumes, HLSL, Cg, Vulkan, or Metal (I'm not sure about OpenGL or GLSL; those were before I got involved with the project). It chose the better path (wgpu) early on. That's all I was saying.


It can't be "built the right way" if it isn't finished. We actually don't know if it can be built the right way until it is finished.


I have about a half dozen PRs open, plus more on the way, that will get Bevy's renderer up to rough feature parity with Godot (Godot will have some things Bevy doesn't, like SDFGI, and Bevy will have some things Godot doesn't, like deferred and GPU occlusion culling). I have no doubt in my mind that Bevy's design is solid, but if you're skeptical, we will find out very soon :)


I'm not a pro engine dev, but from what I've heard, the graphics backend is a minor part of what makes a good, fully-fledged game engine. I see it more as a strength that other engines can ship on older, more legacy hardware. Moreover, wgpu is still in its infancy but evolving rapidly. We'll see in the future how Bevy develops, and I really hope it works out, because I liked some of the ergonomics of the ECS when I used it, but for now I am very doubtful.


Well, there are two types of game engines from what I've seen:

- One where most things are done out of the box, and you can mostly visual-code your way to success in a GUI - these are your Unreals, your Unitys and Godots.

- A second, where most things need to be built piecemeal and many things are missing - Bevy falls into this category, along with stuff like PyGame and whatnot.

I mean, Bevy is a fine engine for small things, but I can't say I've seen any indie game of much renown succeed with it. OTOH, that doesn't mean you can't make it, but it's definitely more effort.


Well, I don't call the second ones "game engines", I call them game libraries, or game frameworks. RayLib also is a nice one. But they don't claim to be "engines" (or at least, they are mostly described with the aforementioned terms)


> and that adds a lot of complexity.

Well, the example code for a simple button referenced in the article has insane complexity that has nothing to do with WebGL though. It's precisely "the right way" that adds a lot of complexity.


> * I don't see Rust becoming a truly great language for games unless it's blessed by Epic or Unity.

Which will never happen


Has there been any progress towards shipping Rust on consoles? I know the specifics are all under NDA, but to my knowledge nobody has even hinted that they've done it yet, even among the studios which are openly using Rust for backend or tooling stuff (e.g. Embark and Treyarch).

OP only appears to release their games on PC so it's not a concern for them, but for the majority of developers not being able to fit into console toolchains would be an immediate dealbreaker. I have no first hand information but what I've heard from hanging around people who would know is that Sony insists that developers only use their official LLVM/Clang fork which is customised for their weird ABI.


Rust has had tier 3 support for the Switch since 1.64, I believe. Someone had actually done it even before that but IIRC NDAs mucked up any movement on making that public.


Yeah, that Switch support was added by the homebrew side of the fence though so I don't know if it's something that developers would be able to use in an officially licensed game. Nintendo might not care so much as long as the game works, as mentioned it's Sony in particular that I've heard is picky about which compilers their developers use.


Ah, good point - was context I didn't have.


I've been working in gamedev since 2007 and have toyed with Rust since 2014. I simply don't think that the majority of games, especially indie games, have performance requirements high enough to justify using anything other than a high-level, garbage collected language.

Of course, some titles like Factorio are outliers. But for the majority of games, the time you would spend on manual memory management in C or the borrow checker in Rust would be better spent on other things.


I'm a Rust enthusiast and really surprised to discover so many people are apparently trying make games in pure Rust.

I think Rust is an amazing language for building a generic game engine, but a pretty crappy one for actually implementing a finished game.

Isn't it basically standard practice to have an engine written in a systems-level language with generality, reliability and performance as the top objectives, and the game itself written in a scripting/interpreted language that allows very quick iteration?

And in many cases it can even be a pretty horrendous home-brewed language, made by people without the computer-science knowledge to create a good one (and yes, inventing a new language is one of those things that leads to horrible results if you just learn on the job). This structure will still get you there more easily and faster than trying to implement the whole game in a systems language.


I've experienced a lot of these concerns while building https://github.com/MeoMix/symbiants

I have a simple question that maybe someone smarter than me can answer confidently:

If I want to build something akin to Dwarf Fortress (in terms of simulation complexity) as a browser-first experience - what stack should I choose?

Originally, I prototyped something out using React, PixiJS, and ReactPixi (https://github.com/MeoMix/antfarm). The two main issues I ran into were the performance of React's reconciler processing tens of thousands of entities when most weren't changing (despite heavy memoization) and GC lurching due to excess object allocations. My takeaway was that if I wanted to continue writing in JS/TS that I would need to write non-idiomatic code to avoid excess allocations and abandon React. This approach would result in me effectively creating my own engine to manage state.

I decided to not go that direction. I chose Rust because no GC is a language feature (especially good since GCs in WASM are heavy) and I chose Bevy because it seemed like a fairly structured way to mutate a large amount of code.

Progress has been slow for a lot of the reasons listed in this article. I've written a lot of this off to WASM being a new frontier for game dev, and I'm new to Rust/Bevy/ECS/gamedev, and rationalized my effort by noting there's not a lot of complex simulation games running in browser (that I'm aware of).

It's not clear to me that I've made the right decision, and just need to take the good with the bad, in order to develop the type of game I want in the type of environment I want.


I personally would use C++ with SDL2, but I use c++ a lot already so I'm biased. Emscripten would allow you to target WASM.


If you're targeting the browser first why not use a browser first library like PhaserJS [0]?. I don't see a reason to work around with WASM; HTML5 canvas might be everything that you need.

[0] https://phaser.io/


I'm aware of Phaser and evaluated it, but didn't try prototyping something out using it.

My primary concerns were: lack of any coherent plan towards supporting WebGPU, TS bindings being best effort rather than being written natively in TS, and, crucially, Phaser4 being stuck in development hell.

Phaser 4 was announced in 2019, https://www.patreon.com/posts/28467752, and hasn't shipped. The current version on GitHub is v3.8. It made me deeply uncomfortable to plan to build on top of an engine that has stalled out for 5 years. I would not reasonably expect to ever get support for WebGPU, and I strongly feel that WebGPU is going to be the defining way web games are written in the coming year(s).

I also wasn't able to find any super compelling games written using Phaser. Since I evaluated it, it appears that Vampire Survivors was written using it, but then they abandoned Phaser and adopted Unity in v1.6.


>If I want to build something akin to Dwarf Fortress (in terms of simulation complexity) as a browser-first experience - what stack should I choose?

I'd suggest Haxe with OpenFL or HaxeFlixel.


> because the thing you might need to do is not available in the place where you're doing the thing

I noticed this with Rust: sometimes Rust forces you to pull some things up the call stack in order to access them. Even if the semantics of what you do are the same, Rust doesn't let you have things in arbitrary places.

It's super weird and possibly annoying when you hit it for the first time, but if you stop and think about it, the place where Rust forces you to put it is a really good place from an architectural standpoint.

It basically prevents you from taking parts of a thing and delegating responsibility for them to some children, which seems restrictive, but it provides you with consistent structure of where to look for things that are responsible for something.

Rust is restrictive in so many subtle ways (and some obvious ones) but I haven't seen one where it leads to worse outcomes. Maybe I have too little experience.


There are vanishingly few reasons why anyone wouldn't use a garbage collected language in modern software. I think a lot of the people using rust haven't realised how much this limits its utility. As a result of this, the language has been presented as much more widely applicable than it actually is.


100% agree. I am amazed how Rust is constantly touted as a general purpose programming language. In theory that's true, but in practice it only makes sense for projects where you would otherwise reach for C++.


Tried Rust for a very simple game; it just didn't feel right. It's like the language itself is begging you not to use it for game dev. Lol


Do they say what they replaced Rust with? I scanned through a few times and couldn't figure it out.


I assume from comments like "This is actually the #1 reason we're moving back to Unity" that they're back in C#, using Unity.


They mention that they're moving (back) to Unity/C#.


Ah, missed that. Thanks!


I'd be interested to hear the author's take on Nim [1], which seems to be better suited for game development than Rust by staying out of the dev's way [2], and supports hot-reloading (at least in Unreal Engine 5) [3]?

[1] https://nim-lang.org/

[2] https://youtu.be/d2VRuZo2pdA?si=E3N62oUJ-clXozCg

[3] https://www.youtube.com/watch?v=Cdr4-cOsAWA


I decided that Rust wasn't for me after a week long side project. But I doubted myself for a long time, as Rust seemed like such a great idea for so many reasons and it seemed like a bunch of other people were using it successfully. So I'm glad to see this article and know that it wasn't just me.


> ECS in Rust has this tendency of turning from something that is considered a tool in other language to become almost a religious belief.

I think Bevy UI is the best example to give; it's like nobody ever did a UI framework with a pure ECS before. You can conclude either that it's because doing so makes no sense, or that it's because nobody has ever come up with the right way to do it. The Bevy community thinks it's the latter.

It's especially concerning because they constantly talk about the editor, even though they don't even have the fundamental pieces for a GUI framework in place. In Bevy UI there is no way to create a reusable UI component; there are ways to do it, but they all suck. So it's not even a matter of a lack of widgets or something: the problem is you can't even write a reusable widget, there is no foundation for a UI framework, and there isn't even a real plan to change that, because nobody knows how to write a GUI framework with a pure ECS.

But even if they figured that out, things like text input fields can't be properly implemented because there is no proper text rendering engine in Bevy, so they have to rewrite that first. Except that all the text render/layout solutions in Rust (it has to be pure Rust, because of wasm and because it's Rust) are still very experimental and immature.

It's a huge pain in the ass, people write games in rust because they want to write rust, not because they want to write games.


The Rust community is one of the top arguments against rust.

I've never before been so condescended to as when attempting to ask questions there. Their lack of care for perf also drives me up a wall. Anytime they propose adding an extra layer of indirection to get around the borrow checker, I have to explain yet again that with the way modern CPUs work, extra layers of indirection have serious cache-related perf costs. Then I get told that I am yet again doing it wrong, computers are fast enough, and I am worrying about the wrong thing.
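
For context, here's a minimal sketch (with illustrative names) of the trade-off in question: the pointer-heavy style adds a heap hop and refcount traffic per element, while the flat layout keeps the data contiguous and cache-friendly.

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Enemy {
        hp: i32,
    }

    // The "add another layer of indirection" style: every access chases a pointer.
    fn total_hp_indirect(enemies: &[Rc<RefCell<Enemy>>]) -> i32 {
        enemies.iter().map(|e| e.borrow().hp).sum()
    }

    // The flat style: entities sit contiguously in memory and stream through cache.
    fn total_hp_flat(enemies: &[Enemy]) -> i32 {
        enemies.iter().map(|e| e.hp).sum()
    }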


> when attempting to ask questions there

Could you please give an indication of which venues you've encountered this kind of condescension in? I don't tend to see this in the spaces I frequent, but I know it is happening, and I wish it weren't. We try, sometimes, to provide some official messaging discouraging this kind of condescension, but perhaps there's something more we can do.


I don't usually do this because it's not constructive and incites flamewars, but since it was asked, this is what 5 minutes of browsing comments got me.

1) https://news.ycombinator.com/item?id=40175427

Implies OP doesn't care about programming:

> The vibe that I'm getting is that it's filled with people that don't particularly care about programming, they just want to get stuff done(TM), this is also highlighted by the fact that they are willing to write completely inadequate code just to see things working. Rust is not that, and that's a good thing.

2) https://news.ycombinator.com/item?id=40173609

Handwaves Rust's complicated and ongoing async history and pins the problem on "this guy":

> Why is async such a dealbreaker for this guy? Especially for web dev.

3) https://news.ycombinator.com/item?id=40172636

Another "you're holding it wrong":

> That has got to be the most "I didn't think this through" take ever.

4) https://news.ycombinator.com/item?id=40172605

As a newcomer, not thinking the Rust way is a sin, and if you ask dumb questions you deserve retaliation, apparently:

> "On the other hand, Rust communities are inundated with people trying to write Rust as if it was their old favorite language..."

> "..In my experience, Rust community members who arrive with well thought out complaints or suggestions are welcomed by the people who like working on programming language fundamentals."

5) https://news.ycombinator.com/item?id=40172883

More handwaving of Rust's shortcomings, plus a contradiction within the same sentence:

> "Rust is not antithetical to iteration-based programming, it just makes you write a lot of heavy boilerplate to explicitly support that kind of style."


In my view, here's an example from this HN comment section: https://news.ycombinator.com/item?id=40177534

If this is the kind of attitude I would get from using Rust and having to be part of the Rust community, then I think I will pass.


I think I found it:

>> Their lack of care for perf

>> I have to explain yet again


My understanding is that he was asking for specific examples (maybe posts on reddit or the rustlang forum); maybe I am wrong. I think giving actual examples would be more beneficial so we can see the context clearly.


Many of us have probably heard that all the dumb programmers use Javascript, but even if that's true, that's a reason for me to use it. Even though I'm capable of using galaxy-brained language ecosystems, I don't need that mental overhead. And if it'll be cheaper to hire JS programmers onto the team, great.


Just waiting for someone to write a 5000 word essay on why they are moving from Rust to do data science stuff. Totally puzzled by everyone trying to get on to the Rust bandwagon on DS/DE, when being able to iterate and make changes fast is why Python rules even though it is dog slow.


I'd expect a move from Rust to Julia over a move to Python.


Why do you think it is dog slow? Most of the DS/DE libraries are in C/C++, with Python just calling into them. I personally checked JSON parsing with Go/Rust/Python. Guess who won.


So, python is not slow if you don't use python and depend on C/C++. Got it. LOL.


I would love to be able to bypass the orphan rule for internal crates.


I would love to make it possible to bypass the orphan rule in general, including for crates published on crates.io. This is an important issue for ecosystem scaling, in many different ways.

It means that if you have a library A providing a trait and a library B providing a type, either A has to add optional support for B or B has to add optional support for A, or someone has to hack around that with a newtype wrapper. Usually, whichever library is less popular ends up adding optional support for the more popular library. This is, for instance, one reason why it's really really hard to write a replacement for serde: you'd have to get every crate currently providing optional serde support to provide optional support for your library as well.

In other ecosystems, you'd either add quick-and-dirty support in your application, or you'd write (and perhaps publish) an A-B crate that implements support for using A and B together. This should be possible in Rust.


Oh, 100%, I'd be happy with that too.

Is the orphan rule a result of some technical limitation? Or just the idea that it's "unclean" to implement someone else's traits for someone else's types?


Hi from Haskell land!

Haskell went through this as well. Orphans used to be allowed and I certainly saw their appeal.

The problem is that the compiler might see two different implementations of ToString for MyType in different source files. The compiler could probably make a check for that if it were compiling both files at once, but if you want to be able to compile source files separately and only recompile files which have changed, etc., I think it gets harder to spot.

> someone has to hack around that with a newtype wrapper

Don't think of it as hacking around it. It's the blessed approach. Newtype wrapping is giving proper names to the behaviours, so that they don't get mixed up.
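Roughly, the shape in Rust looks like this (a minimal sketch, with `upstream_a` and `upstream_b` as made-up stand-ins for two crates you don't own; modules are used here only so the snippet compiles on its own):

```rust
// Pretend these two modules are third-party crates you can't touch.
mod upstream_a {
    pub trait Pretty { fn pretty(&self) -> String; }
}
mod upstream_b {
    pub struct Color { pub r: u8, pub g: u8, pub b: u8 }
}

// In your own crate: wrap the foreign type so the impl is no longer an orphan.
struct MyColor(upstream_b::Color);

impl upstream_a::Pretty for MyColor {
    fn pretty(&self) -> String {
        format!("#{:02x}{:02x}{:02x}", self.0.r, self.0.g, self.0.b)
    }
}

fn main() {
    let c = MyColor(upstream_b::Color { r: 255, g: 0, b: 0 });
    println!("{}", upstream_a::Pretty::pretty(&c));
}
```

The newtype gets a name of its own, so there is never any ambiguity about which implementation applies.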


Unity is so good and quite affordable, basically there’s zero upfront risk of using it. Similar for Unreal Engine. And then there are tons of other open-source engines like Godot that are also quite good.

Rust is great from lots of stuff but game development or building UIs isn’t among that (yet).


> Unity is so good and quite affordable, basically there’s zero upfront risk of using it.

Other than the absurd license changing shenanigans they tried to shove through recently. Hopefully they learned their lesson.


Unity learned that they have to turn the temperature up slower. They kept the new license that everyone was mad about but just made it so you can keep your current license if it's cheaper for you. No doubt they'll be tweaking these values over time.


It was changed to being able to keep the old license if you don't upgrade to Unity 6 or beyond. They added 2.5% revenue share as an option to the flat runtime fee to make sure you can't end up in a situation where you are losing money per user just from the runtime fee alone. Unity by default charges for whichever ends up cheaper.

https://unity.com/pricing-updates


Yeah, that's definitely a concern I have. Well, projected concern I guess, I'm using Godot myself rather than Unity, mostly because I found Unity way more confusing when I tried to learn it. But not needing to worry about licensing with Godot is certainly a nice bonus.


To be fair, building UIs with iced-rs is getting better by the minute. My favorite showcase for using that library is this IRC client called halloy: https://github.com/squidowl/halloy


I wonder if Rust is just much better suited for making game engines than it is for making games.

The relative abundance of game engines made in Rust compared to actual games is a bit of a meme, but I think there's something to it. Maybe Rust's feature set is just not the best fit for gamedev, for reasons outlined in TFA. Maybe it means that game engines built in Rust (which I do feel Rust is well suited for) should try to integrate an interpreter for some higher level language, IDK.


Absolutely agree with the comments on ECS and Bevy in particular. I tried getting to grips with it for some time, doing things the Bevy way, and it just felt like a big step backwards because it’s not suitable for most things. The renderer was really slow at the time too, although I imagine that has improved. Switched to plain rust + vulkan (via ash) + dear imgui and haven’t looked back.


Some of those issues with Bevy might have more to do with its immaturity. It still needs at least a few years to be a solid choice for all sorts of games, in my opinion. I do think the hype should be toned down; people shouldn't feel pressured to worship Bevy or Rust or whatever is the hot new thing.


> Making a fun & interesting games is about rapid prototyping and iteration, Rust's values are everything but that

I feel like this is the core of the author's frustration.

Rust is a systems language. It's for writing tight fast C-like code but safely and with a much more powerful type system.

The facilities you need to do this are somewhat at odds with what you want for rapid iteration.

Seems like Rust was the wrong tool for the job.


> Seems like Rust was the wrong tool for the job.

Sure, but the problem is when you have a community around that tool that insists otherwise.


Can you please give an example of that? Rust is very much advertised as a systems programming language. Can't really blame people for using it and going "oh it's harder than go and python"...



From the website:

> From startups to large corporations, from embedded devices to scalable web services, Rust is a great fit.

Are these "systems"? FWIW "systems" does not appear on the Rust home page.

From wikipedia:

> Rust is a multi-paradigm, general-purpose programming language that emphasizes performance, type safety, and concurrency.


>performance, type safety, and concurrency

So... a systems programming language? x)


If all software programs are _systems_, yes :)


I would think a modern systems language such as Rust wouldn't require a systems development methodology from the 1970's. There's no inherent reason you shouldn't be able to rapidly prototype a system and then refactor the prototype into your actual implementation.


Exactly!


C/C++ -> move fast and break things

Rust -> move slowly and don't break anything


If they tried to use C++ it would have ended absolutely the same way.

As someone who works in gamedev, I can assure you C++ is the same bad choice for a 2-man indie project. In search of fast iteration times, games have moved away from writing gameplay code in low-level languages. The hardware casual games run on is far more powerful than anything a small team is able to make will ever need.


Yes, and if you want to move fast and not break things you need a higher level language like Go, Java, C#, etc.

I'm pretty fast in Rust but not as fast as Go, mostly because Rust's type system and borrow semantics come with a higher cognitive load.

I find both to be faster than C++. Rust is faster because I have to worry less about blowing my feet off with memory errors. I can't think of anything to recommend C++ now that Rust exists.


In my experience, Rust -> move faster than C/C++ because you don't have to keep fixing things.


Yeah, I rather like the pattern of embedding a more rapid-iteration scripting language like Lua or a stripped-down Python variant on top for the game stuff, using Rust for the core and hot-loop code. That said, there are a lot of folks wanting to do it all in Rust, and the article touches pretty well on why that isn't the pleasant case for a lot of valid gamedev approaches.


Lua is often used for this in gaming.


Yup! I found Lua pretty lovely to use with Rust in my case.
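For anyone curious, the pattern is roughly this (a sketch from memory assuming the mlua crate; check the crate docs for the exact API):

```rust
// Rough sketch of keeping the core in Rust while driving game logic from Lua.
// Assumes the `mlua` crate; details may differ between versions.
use mlua::{Lua, Result};

fn main() -> Result<()> {
    let lua = Lua::new();

    // Expose a Rust helper to scripts (the "hot loop" side stays in Rust).
    let damage = lua.create_function(|_, (hp, dmg): (i64, i64)| Ok(hp - dmg))?;
    lua.globals().set("apply_damage", damage)?;

    // Game logic that can be tweaked/reloaded without recompiling Rust.
    lua.load(r#"
        hp = apply_damage(100, 30)
        print("hp is now " .. hp)
    "#).exec()?;

    Ok(())
}
```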


This article describes almost exactly why I think gradual typing is actually a good thing. Type checkers shouldn't get in the way of your code compiling. Yes, the language has to be designed with this property from the beginning. Yes, you should always enforce complete checking in CI. But you should also be able to try half-baked ideas.


There are at least a few nascent statically typed languages (as in, full static typing rather than gradual) which nevertheless let code with type errors compile for the sake of testing.

The two that I know of are Darklang [0] and Roc [1] which aim to let you compile code with type errors for the same reason you suggest.

[0] "Dark is designed for continuous delivery. As such, we don’t like requiring you to make large scale changes across your program, like changing a type everywhere. Instead, we want you to quickly discover that bad ideas won’t work, without first requiring you to propagate the type changes throughout your program."

https://blog.darklang.com/real-problems-with-functional-lang...

[1] "If you like, you can run a program that has compile-time errors like this. (If the program reaches the error at runtime, it will crash.)"

https://www.roc-lang.org/friendly


Let me introduce you to `-fdefer-type-errors` in GHC Haskell:

https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/defe...


That's pretty cool! Wouldn't have guessed that Haskell has had this feature since 2015.


> Type checkers shouldn't get in the way of your code compiling.

I don't get it, what's the point of type checking if not to reject invalid programs? The point of a type system isn't simply to add annotations for the programmer (and some type systems can omit them entirely) but to define the subset of programs that are correct within the set of all the programs that can be expressed.

I understand (and have used in production) optionally/gradually typed languages, and without fail codebases will opt for using types up front and not ignoring type check failure because they are always incorrect.

A type error is the compiler/run time telling you "I don't understand what you told me" so why do you want to ignore that?

And if the point is that you want to be able to change the type signature of something without having to refactor giant chunks of code, then that suggests your code is structured poorly to begin with. It should be easy to pull in what you need and play with it without affecting everything else if you haven't done a bad job of architecting the codebase.


Because often, and especially with heavily interactive programs like games or UIs (any web page), you don't know if something will be good or not until you build some working version of it. The more barriers there are (type checkers, compiler errors, etc), the longer it will take you to prototype something usable to check if what you're building is good.

Sometimes, it's useful to bypass these things for a prototype and you don't care if it crashes on any edge case. This is why typescript is so popular on the web - you can quickly prototype something with JS, then once you find the right solution, add types and productionize the code


I agree with this, but I don't think dynamic types is the only solution. Something like Roc[0] strikes a better balance: it gives you a flag for development, and when enabled, all compilation errors become warnings. The compiler substitutes every function it couldn't compile with one that panics at runtime.

[0] https://www.roc-lang.org/


But a type error isn't an edge case! It means you've written something the compiler can't understand.

> This is why typescript is so popular on the web - you can quickly prototype something with JS, then once you find the right solution, add types and productionize the code

I think you've got it backwards - TS is popular because people want to use types up front, but it has to work with JS, which is so dynamic that it's impossible to write a sound type system that can even touch it.

I don't want to reply "git gud" but I really am struggling to understand how people are writing code where it is so difficult to use type information or where changing it is so cumbersome that you think it's a barrier. And I don't see many games or performant GUIs written in dynamically typed languages, particularly outside the web. Even in very dynamic languages like Objective C, things are still well typed.


> But a type error isn't an edge case! It means you've written something the compiler can't understand.

I get it! But the compiler here is getting in the way, so Rust is the wrong tool choice here, or for anything that requires quick prototyping (like the OP said)

> writing code that where it is so difficult to use type information

It's not difficult, it just takes longer. Typing up front works well when you know exactly what you're building. When you don't, or are doing experimental design, they just get in the way until you've settled on a direction. This direction is not due to technical constraints or language choice, it's simply that designing complex user interactions is hard and you don't know it's correct until you have users use it

> dynamically typed languages

Because the cost of prototyping in a dynamic lang and then fully rewriting in a performant language is higher than having slower iteration speeds in typed languages. But also, this is why a large number of new non-web GUIs use electron (or are mac exclusive, which offers other benefits for GUI development)

> I think you've got it backwards - TS is popular because people want to use types up front,

Personally, I mix it up. There are things I know what their types will be no matter what so i add them up front. Then there are other things which I do not know, and I add those types at the end once i'm performing the final cleanup before code review

The nicest part is, TS runs regardless. I often refactor my types frequently since I get them wrong a lot (especially the up front ones) and have TS in a failing state while developing, but the TS compiler doesn't get in my way, and still works for the types that I expect to have working


A type error means a type error, and nothing more. Type errors do not mean "incorrectness." Here is an example of something you can't do in rust, but can do in other statically typed languages.

```rust
struct Type1 { id: u32 }

struct Type2 { id: u32 }

fn main() {
    let obj1 = Type1 {
        id: 1
    };
    let obj2: Type2 = obj1; // compile error, Type1 is not Type2
}
```

This code works perfectly fine. These two structs have the same signature. The only thing that doesn't work is the names. There are dozens of things like this where your code works but the compiler is too strict. This is terrible for rapid iteration. It's good for other things like modeling your domain with types.
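To show the cost concretely, here's a hedged sketch of the boilerplate Rust wants before that assignment is accepted (same toy types as above):

```rust
struct Type1 { id: u32 }
struct Type2 { id: u32 }

// The conversion has to be spelled out, even though the shapes already match.
impl From<Type1> for Type2 {
    fn from(t: Type1) -> Self {
        Type2 { id: t.id }
    }
}

fn main() {
    let obj1 = Type1 { id: 1 };
    let obj2: Type2 = obj1.into(); // now the compiler is happy
    assert_eq!(obj2.id, 1);
}
```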


[text deleted because it was based on an incorrect recollection of the details of Rust features.]


This was just one example of type error != logic error.

> one is for things that are structurally identitical but semantically distinct where confusing them creates a semantic error, the other is for alternate (usually, more concise) names for an existing type

This is not always true. Sometimes there are cases where you have different type aliases and want to use them interchangeably. Now you have to refactor them to inherit from some general type that didn't exist before. I also could count on one hand the number of times this has caught a genuine bug, even with diligent domain modeling (which I would argue is overhyped).


Exactly this. Typing in the final version is great. Typing up front is almost always getting in the way.

Most interesting programs involve a lot of figuring things out as you go, and so any "tax" on that process is one you hope to avoid. The last thing you want is for the language itself to be the source of that tax any more than it has to be.


>I don't get it, what's the point of type checking if not to reject invalid programs

It can take longer to think about how to properly type things than to just write some code, test it out, and immediately see if something is wrong. You also often get into situations where linters that like to act like type systems give you arcane errors (looking at you, TypeScript).

In the moment I just want to move some data around and quickly try out an idea, I don't need it to be resilient or perfect.


>I don't get it, what's the point of type checking if not to reject invalid programs?

Because not every program that doesn't compile is necessarily invalid.


> Because not every program that doesn't compile is necessarily invalid.

I think that more programmers should be aware of this point. Rust doesn't reject code that will crash, it rejects code that it cannot prove won't crash.

The code being rejected might be just fine (as numerous examples in the article showed).


>And if the point is that you want to be able to change the type signature of something without having to refactor giant chunks of code, then that suggests your code is structured poorly to begin with.

The article addresses this, multiple times. In brief, the "poor structure" isn't the problem.


It's also pointless to care about types in fine-scoped code. Like if your language forces me to assert that i is an integer like this, something is wrong...

  for (int i = 0; i < n; i++) 
And when I'm writing backends, usually there will be no types anywhere except at the API layer, i.e. OpenAPI spec or protocol buffers. Not once have I encountered a bug related to accidental wrong types that wasn't caught at this outermost layer.


> being unable to just move on for now and solve my problem and fix it later

Same thing about Golang's "unused variable" and "unused import" errors. So many times I'm just exploring a lib and trying things out with no intention of leaving it as is, but no, Go forces me to "write good code".


I mean, I'm no fan of Golang (I actually kinda hate it), but this is easily solved with a blank identifier (underscore `_`) or by commenting the line. Both of which makes it blindingly obvious when doing code review.

Golang falls in the camp of enabling fast iteration while also enforcing some sane basics. Letting off an unused variable/import with a warning is a recipe for insanity, anyone who has opened a badly maintained Java codebase will tell you this.


Great article, but on ECS I thought the primary point of using it was locality of memory so that you don't get cache misses during iteration of elements. Yes you are preferring composition over inheritance but I thought that was more of a side-benefit to the main thing ECS was trying to solve.


I use Nim at work. It is a joy. I replaced a prototype Rust application which was confirmedly not a joy. None of Rust's highly opinionated safety semantics necessarily imply a better end product, and often make delivering an end product much more difficult. Rust has use cases, but it is a specialized hammer for a specific domain of nails. It is not the cure-all everyone wants it to be.

If you're asking if you should use Rust and you don't have a highly specific embedded use case, you should probably just use a language with decent RC or GC. As an additional bonus, 99.9% of the code you write in a GC language never has to be about memory at all. Business logic, clean and bare.


> As an additional bonus, 99.9% of the code you write in a GC language never has to be about memory at all.

This is a bit of an oversimplification. If you are sloppy with "memory management" (unnecessary object lifetimes, unnecessary duplication, etc) even in a GC language, it is possible to have noticeable performance impacts. I'm not saying these languages aren't the right tool for the job, but it is not a free pass to ignore hardware limitations.


Yes, but that's only true for long-running applications. Most gigantic monolithic web server applications, for example, do not need to be gigantic monolithic web server applications.

My Nim CLI application does a subset of common tasks, outputs to stdout, and is invoked by our PHP server for each task. It (and the Rust and C programs it replaced) can barely blip a fraction of a percent of system memory in the worst case. Why did it need to be in C or Rust? It was never necessary, and I've been able to load it with features they never had but always wanted, in astoundingly short order.

Architecting these sorts of services properly can assist in making tool choices that drastically boost maintainability and feature delivery, especially for small teams; meanwhile, entire classes of problems melt away.


> Architecting these sorts of services properly can assist in making tool choices that drastically boost maintainability and feature delivery, especially for small teams

I hate hearing this argument, because it glosses over the hard realities of building products, even if it sounds good on paper. Companies don't often have the resources to hire people that can do this early on, or the time to do this early on even if they do have the talent. And it might not be correct to do so when you have uncertain future requirements.

If that talent is you, you might not even be trusted to make those kinds of decisions until a year or two of tenure. And you might have junior (or even 'senior') devs on the side making changes that make this harder to do, without anyone realizing it.

So the reality is that you can't expect systems to be "architected properly" and you get what you get. It's an incremental process.


I have a personal experience that is literally the polar opposite of everything you just described. Atypical, yes, but on the other hand, testimonials about life in most other software teams describe an unnecessary mess of middle-management and bureaucratic cruft. I'm sure at many companies there are many roadblocks in the way between their developers and a better product, but my company doesn't have them. Not a startup, either. A 50-year-old engineering company that learned hard lessons after 20 years of stagnation.

I'm not going to be very sympathetic to counter-arguments about what "proper" practices we "should" follow, because quite frankly we're delivering features and value for our clients and catapulting back to the leadership position in our industry. Programming as a studied profession is barely half a century old, and the current zeitgeist of "best practice" is younger than some peoples' stints at the company they still work at. I'm very inclined to believe none of it is correct. No structural or architectural approach is one-size-fits-all and everything will be contingent on who you have and what you're trying to deliver.


Sure, its possible to be at one of those few great companies that are set apart from the others. If your company has a blog or videos that can share any of those insights I'd definitely be interested in learning more about how it was done.


Oh we’re at that point of the hype cycle are we?


We've always been here. People just didn't listen.


894 comments here. similar in several posts on reddit. and only 1 review of his game on steam. shit's brutal.


Thank you for this. I have been having some engine indecision. I am more of a programmer, so I was thinking that learning rust and bevy would be superior to learning godot, but it seemed like a really deep rabbit hole to go down, and I was concerned from the lack of actual games that were coming out of it.

Your blog post really explained the reason why quite clearly, thus eliminating a journey that would be fraught with difficulty and seemingly with little return.

I am going to go with godot and hopefully that works out for me. Thank you again for sharing your experience.


I don't use Rust for game dev but I do for low level libraries and find it easier than C++ to get started. I have enjoyed it more than Java and like it for different reasons than Go, but it feels good to program in.

As for the design patterns that a complex game requires, if you are considering Rust for game dev and ecs design patterns it might be useful to check out projects that are Rust centric like https://spacetimedb.com/.


Clearly he needs a framework for making games in Rust instead of iterating through rust primitives.


> games are single threaded.

Huh. Not games like Cyberpunk 2077 and it's good that they are not.


I had this same pain point though when using Rust that was meant to run in the browser (compiles to WASM), i.e. guaranteed to be single-threaded. Still had to pay the very high price of accessing global state in Rust. Ended up moving that code to JS instead just to avoid that!


Did you read the part where they said that they're a two-person indie game studio with a development cycle of 3-12 months max?


That's not an argument against parallelism in game design in general.


Almost all parallelism in game engines is for very specific parts of the engine, and almost none of the gameplay stuff is parallelizable. What people who haven't actually had to solve the problems presented in game engines often misunderstand is that when your game is running poorly because everything is happening on a single thread, almost all of that speed issue comes from rendering. Then physics. These are very hard problems to solve, and it's more complicated than "use an ECS" to solve them.


> almost none of the gameplay stuff is parallelizable

Define gameplay. If you have some simulation happening as part of the gameplay, parallelizing it can be quite useful vs killing one CPU core on it. Physics is just one common example, but not everything is about physics. You can simulate whatever.


No, but in practice.

Because the type of game you produce in that time frame isn't typically the one that needs to worry about parallelizing multiple compute units.


Thinking parallel is a heavy mental load to carry, even if Rust gives you the tools to make the load lighter.

If one or two threads (game loop & rendering) are enough, why increase the difficulty by going even more concurrent for little to no extra payoff?


As someone in a similar position (also ~3 years doing gamedev in Rust but only free time + open source), I feel very similarly.

There's low hanging fruit i've been complaining about for years where Rust is protecting us from ourselves - orphan rules, global state, ... Look, we're adults, we can make decisions with tradeoffs.

Compile times are a tougher one, I understand that Rust does analysis that is more complex than many langs and i feel ungrateful to people who spend their free time improving Rust. But also i don't think the complexity justifies all of it. Make dynamic linking easier, reduce how much needs to be recompiled, compile generics to dynamic dispatch in debug builds, etc. - there's gotta be a ton of stuff that can be investigated.

ECS just plain sucks. People use it because what they want at first is some way to have relationships between entities. References/pointers are the obvious solution in most langs but in rust, they're obviously out. The second option is Vec and indices but that falls apart as soon as you start removing entities. The next step up the ladder of complexity should be generational arenas but for some reason people immediately reach for the big guns - ECS. And it infests their game with two things that make gamedev a slog - dynamic typing and boilerplate.
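For reference, the generational index idea is basically this (a minimal hand-rolled sketch just for illustration; crates like slotmap or generational-arena are the production versions):

```rust
// Handles stay cheap to copy, and a stale handle simply stops resolving
// once its slot has been reused for a new entity.
#[derive(Clone, Copy, PartialEq)]
struct Handle { index: usize, generation: u32 }

struct Arena<T> {
    slots: Vec<(u32, Option<T>)>, // (generation, value)
}

impl<T> Arena<T> {
    fn new() -> Self { Self { slots: Vec::new() } }

    fn insert(&mut self, value: T) -> Handle {
        // Reuse a free slot if one exists, otherwise push a new one.
        if let Some(i) = self.slots.iter().position(|(_, v)| v.is_none()) {
            self.slots[i].1 = Some(value);
            Handle { index: i, generation: self.slots[i].0 }
        } else {
            self.slots.push((0, Some(value)));
            Handle { index: self.slots.len() - 1, generation: 0 }
        }
    }

    fn remove(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index) {
            if slot.0 == h.generation {
                slot.1 = None;
                slot.0 += 1; // bump generation so old handles stop matching
            }
        }
    }

    fn get_mut(&mut self, h: Handle) -> Option<&mut T> {
        self.slots
            .get_mut(h.index)
            .filter(|(g, _)| *g == h.generation)
            .and_then(|(_, v)| v.as_mut())
    }
}
```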

Boilerplate is obvious to anyone who has done gamedev the "obvious" way before. What could be projectile.shooter.score += 1 is multiple lines which (depending on your particular choice of ECS) usually involve generics. You shift your focus from tweaking your game logic and tuning the experience to typing out boilerplate.
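To make that concrete, the one-liner tends to turn into something like this (a Bevy-flavoured sketch; component and field names are made up and exact derives/signatures vary by version):

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Projectile { shooter: Entity } // entity id instead of a direct reference

#[derive(Component)]
struct Score(u32);

#[derive(Component)]
struct JustHit; // hypothetical marker added by a collision system

// The ECS version of `projectile.shooter.score += 1`.
fn award_score(
    projectiles: Query<&Projectile, With<JustHit>>,
    mut scores: Query<&mut Score>,
) {
    for projectile in projectiles.iter() {
        if let Ok(mut score) = scores.get_mut(projectile.shooter) {
            score.0 += 1;
        }
    }
}
```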

Dynamic typing means entities are no longer structs with fields where you can understand how they relate to each other at a glance but instead any component can be anywhere, entities are no longer real, refactoring always causes silent bugs.

However, by far the biggest issue is the community's handling of criticism.

There are practically no experienced gamedevs coming to Rust from other langs so there's nobody to give Rustaceans a reality check. Rust gamedevs are almost always writing their first game (or, yes, engine). And there's nothing wrong with that, i was writing my first game at one point too. But their attitude is that they chose Rust because they heard it's the best and they got invested in the language because it's hard(er) to learn and now with all this investment if they hear rust or their particular favorite engine might not be that great, it feels like wasted effort so they get emotional and defensive.

I've personally chatted with over half a dozen other gamedevs who came to rust with years of experience under their belt and a common pattern is that they avoid the rust (gamedev) community because they're beat down by the negativity heaped upon them every time they try to discuss the negatives. It doesn't matter that they take every effort to offer constructive criticism; it becomes a social instead of a technical topic.

I came to Rust because i care about code correctness and, well, quality (of tooling, docs, testing). And Rust delivers on a lot of that. But i also wanted to write games of a larger scale than can be done by one person. My hope was that there'd be other people with the same values who wanna build cool games together. Instead there's a low single digit number of serious open source projects and a bunch of small one man games and a whole lot of loud people who seem to think gamedev is about hyping up an engine like it's a sports team.

Myself, I apparently chose the wrong engines for my games in both cases. Not because they're bad technically. In fact, having 5 years of gamedev experience before Rust, i think my choices are better from a technical perspective but there's just not the critical mass to build a serious open source game around them.


This article hits every note of frustration I've gotten with rust.

It honestly feels like if you want a somewhat memory-safe language for more general-purpose use cases, good ol' fashioned Ada or maybe Zig (or Carbon, if it ever becomes a thing) seem 100% more approachable than Rust for gamedev or GUI work.

The only way I see rust becoming dominant in gamedev/ui is by sheer brute force.


I've always been fascinated by games and I've always loved programming but except for the beginning I've never combined the two. And the issue with game dev and functional languages not seeming to jive with each other always seemed (from the outside) to mostly be one of established norms conflicting with each other on both sides. I'd love to have time to explore this; for example one complaint the author makes is about passing around the game state but then also needing to pass around sub-parts of it and Rust complaining; this would be trivial or a non-problem in Elixir, but I know that's because there's no mutable state and in Rust's case it must deal with mutable state regions because all of game dev assumes that's available (or it must be by necessity for performance reasons).


I am happy to be honest that I have not spent much time on Rust, other than browsing sample code and reading about the user experience, the borrow checker, etc.

For me, I could not see myself using rust especially for game development. Some of the points raised are my concerns, especially wanting to chuck something together (to improve later) only to fight the compiler, etc.

I would be interested to know what language they choose moving forward. It seems the contenders are likely to be:-

C - because, you can,

C++ - ditto,

D - I think it is largely ignored, and there is the betterC flag

Zig - Seems interesting,

Odin - Also interesting

Anyway... going to enjoy reading the comments, now.


I don't have nearly the same amount of experience with Rust (just a few months of hobby coding), but whenever people looked at me, all surprised, that I don't like Rust that much I always just said "Safety is not the most important thing for everything. For the stuff I am doing I'd rather be quick", but this article is the most thorough way of explaining that I have seen with lots of extra stuff I had no idea about before. Also the extra random gamedev links in the middle were great, but it took me well over a full, focused hour to read. It is thorough, but some more brevity might have helped, I think.


I see some of those things could perhaps be solved by implementing parts of the game in a sandbox. I know that's work: I'm doing it myself. But all the work with the host-guest boundary (let's not call it bindings, please), is worth it in the end.

I have a C++ game client, a C++ game server, and a shared C++ game script that is transferred to all clients, running in a RISC-V emulator. That means the script will fundamentally execute the same way on all clients, and the server. I have no idea what everyone else is doing. This is what I'm doing now, and the more fleshed out it's becoming, the more I actually like it this way. I don't think I could easily "go back" to other solutions.


Rust being the best alternative to C++ is why I'm wildly rooting for Mojo. This language sacrifices a lot in ergonomics and UX on the altar of safety. And the Rust community never fails to interject with "Well, Akshually..." whenever you complain.


Rust is good for software following a strict spec and design doc. A lot of what I do is exploratory in nature. Rust sucks there, fighting me. Best kind of languages in those cases are Lisp, Python and (unironically) C with Visual Studio debugger.


I'm amazed that we still willingly put heavy compute into compile steps

The trade-off is always reloadability

We have CI and we have LSPs: places we can put heavy checks without sacrificing our ability to hot reload fast

In some checkers you can put in your own custom checks too


Like the author states, the "written in Rust" bonus many projects get does not apply to games. Most of the games I consider to be the most fun are built on an absolute rats nest dumpster fire of code (see: https://www.youtube.com/watch?v=k238XpMMn38). Sometimes bugs even expand game mechanics and make them more fun and expressive. That being said I'll definitely check out Unrelaxing Quacks, it looks great.


I tried to get Jonathan Blow engaged in the Rust RFC process to improve productivity for gamedevs. However, he thought it was a better idea to start working on his own language (Jai).

When I did some research for the Piston project, I learned that there was a productivity technique called "meta parsing" which was used in the late 60s to develop the first modern computer. This was before C. The language was Tree-Meta. Viewpoints Research Institute later upgraded it to OMeta.

I thought OMeta was too complex, so I developed Piston-Meta, an alternative for Rust using a simple data structure: Start node, end node, text, bool and f64.


My favorite thing about rust is when rust devs say that the slow compile times aren't a big deal and then show how, just by dropping a few dependencies, you can get hello world down from 90 seconds to 30 seconds.


This author has done more serious Rust work than I have, but I wonder: why not just abuse `clone`, `unwrap`, `Arc`, or even `transmute`?

Rust does try to force you to refactor sometimes, but you have the option to fight back.


It's possible to abuse 'transmute', but .clone(), .unwrap(), and Arc<> are first-class parts of the language and standard library, so using them is not 'abuse' of any kind. They're part of how Rust supports quick iterative development, along with Any (for dynamic objects with downcast) and still others.
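A hedged illustration of that quick-and-dirty style (all names made up):

```rust
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct Enemy { hp: i32 }

fn main() {
    // Shared, mutable state without fighting the borrow checker:
    // Arc + Mutex, clone() where convenient, unwrap() instead of error plumbing.
    let enemies = Arc::new(Mutex::new(vec![Enemy { hp: 10 }, Enemy { hp: 20 }]));

    let enemies_for_damage = Arc::clone(&enemies);
    for enemy in enemies_for_damage.lock().unwrap().iter_mut() {
        enemy.hp -= 5;
    }

    // Cloning out a snapshot is often "good enough" while prototyping.
    let snapshot: Vec<Enemy> = enemies.lock().unwrap().clone();
    println!("first enemy hp: {}", snapshot[0].hp);
}
```

It's not the code you'd ship as-is, but it keeps you moving while the design is still in flux.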


They should open source their game engine. That way we can learn from their mistakes. How can we be sure this is a shortcoming of the language and not the APIs the author is working with?

Anyway - no one is going to ship a game written in Rust with this attitude. To the other folks out there happily writing their games in Rust - don't be distracted! All it takes is one success story to prove the concept :)


Interesting to me to have [iteration speed] <--> [maintainability] spelled out as opposite ends of a spectrum... and that sometimes [iteration speed] is the right thing to optimize for.


It's not news though? Favoring iteration speed at the cost of future maintainability is a common argument for dynamically typed languages.


Certainly--just having it spelled out like this was new to me?


Other languages allow much easier workarounds for immediate problems without necessarily sacrificing code quality.

I really wish the author had followed this with a list and an explanation.


All low-level languages (by which I mean languages that offer control over all/most memory allocation) inherently suffer from low abstraction, i.e. there are fewer implementations of a particular interface or, conversely, more changes to the implementation require changes to the interface itself or to its client [1]. This is why even though writing a program in many low-level languages can be not much more expensive than writing the program in a high-level language (one where memory management is entirely or largely automatic), costs accrue in maintenance.

Now, a language like Rust makes some aspects better because it ensures that the maintenance (refactoring) is done correctly -- reducing the risk of mistakes -- but it comes at a cost: you must explain your handling of memory (before and after the refactor) to the compiler (plus, the compiler doesn't understand all patterns). I think it's too soon to empirically compare this cost to the gain in reduced risk and determine when each option is more or less advantageous (and perhaps it is also a matter of personal programmer preference), but the fact remains that maintenance of programs in all low-level languages is always more costly than maintenance of programs written in high-level languages because of the low abstraction inherent to all low-level languages.

When writing in a low-level language some may prefer the Rust approach while others may prefer less restrictive ones [2], but people choosing any low-level language should be aware of the added maintenance cost they're invariably signing up for. Sadly, this cost only becomes apparent at later stages of the project.

[1]: Some people claim that memory is just like any other resource (e.g. file descriptors), but this is incorrect. Memory and processing are fundamental to the very nature of abstract algorithms, and differences in how memory is handled change the available range of algorithms. E.g. finite state machines, queue automata, and Turing machines differ only in how memory is handled and accessed. In short -- memory and processing are special resources and are not the same as IO resources.

[2]: I'm personally not a big fan of Rust's approach -- and I particularly dislike C++'s and Rust's "zero-cost abstraction", which is the attempt to make the low abstraction ability invisible in the final code without changing its fundamental aspects -- but I recognise that people's opinions differ on this matter. I also reject the claim that there's no middle ground between Rust and C that offers an intermediate tradeoff between them, i.e. that there is no safety premium to a language that offers some of Rust's safety guarantees but not all of them, such as Zig, or offers better and effective assurance of some properties without a sound guarantee.


This is a fantastic article.

Personally, I found using Godot with some parts in Rust via gdext quite enjoyable.

You can avoid dealing with GDScript for important parts of the code and have access to OS threads if you want them, etc. - but can also prototype features in GDScript and write the UI, etc. there for fast testing, and keep a good separation of UI and graphics presentation vs. the actual game logic.


OP should give D a try; my game fully recompiles in less than 1 sec, and I can consume most of the C gamedev ecosystem seamlessly and without effort.


Is rust becoming like react where there was a ton of hype around it and then years later it's falling by the wayside?


The thing that the Rust community thinks sets them apart (their community) is really the thing holding them back.


I don't think it holds them back. Rust is just not an amazing special tool for game dev it seems, doesn't matter how the community behaves. It's still a solid choice for systems.


Reading the part about global state reminds me of some of my thoughts building backend web apps. Not having global namespace pollution is fine, but not having any global registry of some sort can make things far harder at the size most of these projects are actually going to be.


I've been leaning into Rust almost purely to escape from the mess that is C/C++ tooling which always makes considering a new dependency a time sink.

Can someone explain the obsession with combining ECS with generational arenas?


Seems like a few contradictory ideas here.

Rust is supposed to be a better safer C/C++.

Then lot of comments here that games are best done in C++.

So why can't Rust be used for games?

What is really missing beyond an improved ecosystem of tools, all also built on Rust?


This is not a matter of tools though, did you read the article? The main pain point is that Rust semantically makes fast iteration impossible


Did I read the article? Did you?

But the conversation in the thread is making the point that Rust also lacks the tooling and libraries that would make the loss of fast iteration a valid trade-off.

And a common theme is that C++ is better, and last I checked it is also not great at fast iteration.


That logo is huge on mobile so I can't read the first few bullet points.


Author here, sorry about that, I just deployed a fix, should be readable now. If it's not, here's the first few points

- Once you get good at Rust all of these problems will go away

- Rust being great at big refactorings solves a largely self-inflicted issues with the borrow checker

- Indirection only solves some problems, and always at the cost of dev ergonomics

- ECS solves the wrong kind problem

- Generalized systems don't lead to fun gameplay

- Making a fun & interesting games is about rapid prototyping and iteration, Rust's values are everything but that

- Procedural macros are not even "we have reflection at home"

- ...

the list corresponds to the titles of sections in the article.


Can you not curate an opinionated subset of C that enforces all of Rust's rules so that you can have a safe variant of C, just by removing flexibility and enforcing some programming patterns?


Rust is a terrible language for everything except a few niche tools.


I also had the same bad experience with Rust outside of Gamedev. Probably a lot of other people too, but people don't talk about it much, because the Rust community is the most religious programming language community I've ever seen in my life. Before Rust, the Scala community was also pretty bizarre (Java too for a while), but nothing was on Rust's level. The worst part about Rust isn't technical, it's the crazy community. You can see in the article that the author tries to explain everything at every point, trying to escape the problem "If you don't understand something in Rust, you're holding Rust wrong."

Of course, there are many people in the Rust community who are not religious and try to improve the language, but my general feeling after reading a lot about Rust is to stay very far from the church of Rust.

The best response to this type of community is humor, like this video about a Rust Senior developer https://www.youtube.com/watch?v=TGfQu0bQTKc


I mean, your comment is actually contributing to the problem. You can 100% criticize the community - and thus push them to clean up that shit - without straying into characterizing it like that.

It's fanning the flames and just doesn't really help.


> You can 100% criticize the community - and thus push them to clean up that shit - without straying into characterizing it like that

Like what? To criticize a community, you have to characterize it. And of course they didn't say everyone was like this.

> Of course, there are many people in the Rust community who are not religious and try to improve the language


Ending with "The church of Rust" alone is farther than is necessary and not a helpful characterization. Programming/tech religious wars are a two way street, we don't need to push them along. ;P


I've been writing Rust professionally (and predominantly) for ~5+ years now, with brief detours in game dev. I'll defer to others who are dedicated game devs, but overall I think this article is well written and a healthy thing for Rust overall. We need this kind of breakdown if things are going to continue to improve.

I'd say I only have two somewhat arbitrary comments on this piece:

> Rust on the other hand often feels like when you talk to a teenager about their preference about anything. What comes out are often very strong opinions and not a lot of nuance.

This is a rabbit hole of a topic so I don't want to go too deep into it, but this isn't a Rust-specific issue (though it may be a current Rust issue). I've seen this same pattern play out across so many languages over the years, from Lisp to Rust to everything in between.

A very unscientific and definitely not charitable way I've thought about this over the years is that programming, by nature, is an OCD person's dream. We wind up with a pretty large chunk of people who seemingly move language to language in the hype cycles in search of some weird nirvana level that is likely just unobtainable. I feel like Rust has slowly started shedding this as the community has grown/matured and some people have moved on to the next hype cycle but I often find myself wishing it'd happen faster.

I write Rust because when I sleep at night, I just don't get woken up by being paged for nearly as many weird edge cases. The Rust I write often has a litany of compromises because I want to just get shit done and move on with my life, and the remaining guarantees are still good enough. The number of times I've had to tell people to leave it be is definitely higher than it should be.

> I know that just by saying "global state" I'm immediately triggering many people who have strong opinions on this being wrong. I feel this is one of those things where the Rust community has created a really harmful and unpractical rules to put on projects/people.

This isn't really a Rust-specific thing, though I can't fault the author for including it. People have been beating the drum of "no globals" for as long as I can remember... and simultaneously, as long as I can remember, game devs have come out of the woodwork to politely explain that the programming they do is often under very different constraints.

I still periodically use global state for things because it's just faster at points, and no, I've never cared if people get annoyed by it.
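For what it's worth, the kind of global state I mean is usually just something like this (a minimal sketch using std's OnceLock; a static Mutex or once_cell work fine too):

```rust
use std::sync::{Mutex, OnceLock};

// Global, lazily-initialized state. Whether this is "good design" is beside
// the point; it is fast to write and reachable from anywhere.
static HIGH_SCORES: OnceLock<Mutex<Vec<u32>>> = OnceLock::new();

fn high_scores() -> &'static Mutex<Vec<u32>> {
    HIGH_SCORES.get_or_init(|| Mutex::new(Vec::new()))
}

fn main() {
    high_scores().lock().unwrap().push(9001);
    println!("{:?}", high_scores().lock().unwrap());
}
```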

Anyway, here's to hoping this leads to positives for the community.


That's Rust for you - as much language-level derangement as people hated about 90s Java, but without the memory safety benefits of Java.


Wonder if Mojo will become a good option as it matures


> Secondly, procedural macros are incredibly difficult to write, and most people end up using very heavy helper crates

lisp lisp lisp lisp lisp lisp lisp lisp


Very interesting article but I feel the need to question some things.

- The very first thing that comes to mind is why actually use Rust for gamedev. From the article it seems like the author got into Rust through Godot, but that does not explain why he committed to using Rust for a full game. What was the reasoning behind picking Rust?

- It feels to me that there's a mix of criticism of Rust as a language, Rust as a community, and libraries/frameworks written in Rust (in particular Bevy). Personally I think these are completely separate matters, so I would like to know why the author treats them all as a unit.

- I've always got the impression that gamedevs try to have their cake and eat it too which is almost always impossible: they want to have quick iterations and write "simple code" while having low level control of everything (ex. manual memory management, usage of pointers, etc.). For example, the author mentions wanting access to methods like "play_sound()" but at the same time mentions that some patterns are unacceptable given the "overhead [...] due to memory locality". I've never heard of an ecosystem where you can have everything without any compromises.

- In particular, I get the impression that the author has a lot of trouble with ECS and instead tries to bend it to work in an OOP fashion (for example, through the usage of "fat components" as he calls them, or preferring virtual dispatch over `match` statements). He claims that he has put in a "lot of time" trying to make it work, but I get the impression that this effort was mostly wasted trying to bend the language and libraries into something that just won't work out. It's like trying to use a circular saw to polish mechanical watch pieces: an exercise in frustration. At some point I'm sure he asked himself why keep pushing on, and I would like to know why he continued to be committed to such a process.

- The author claims multiple times that they work in a single threaded environment where they should not care about concurrent access so they should not pay the price in the type system. I agree that this should be the case but then it proceeds to list examples that show a different situation. One of them is the claim that they cannot use a "god" context to pass down every dependency due to the borrow checker, listing code that tries to hold a reference to a "camera" system while passing the context to the "player" system. In particular this does not make sense because: 1) If you're in a single threaded environment you don't have two systems using the same context at the same time (because there is no "at the same time"); 2) if the "player" system does not need the camera then it won't change it, and if it does not change it then there is no need to take a reference to it earlier, you can just take it after the "player" system has finished. I know that coming up with brief examples is extremely difficult but either the example does not properly represent the reality, or the author is actually working in a multi threaded environment (maybe without actually knowing about it).

As an observation, the author mentions multiple times that Rust pushes you to write "good code" and I fundamentally disagree. "Good code" is very contextual, just like the idea of "simple code" where he checks for all collisions and plays a sound in 3 lines (is this actually "simple"?), so instead I would say that Rust forces you to write "correct code", that is, code that won't (or is unlikely to) fail at runtime. I believe this is a very important distinction that you always need to keep in mind when evaluating a tool such as a programming language.

Finally, I do believe though that Rust is a bad choice if what you're interested in is to build games quickly without consideration for performance (and you most likely don't need to care in 2d games) and their decision is very reasonable: C# and Unity are just aligned better with what the author is actually interested in doing.


I’ve only done a few toy projects using ECS, but the author seems to be struggling with ECS on a basic conceptual level. E.g.,

---quote---

For example, modelling Health as a general purpose mechanism might be useful in a simple simulation, but in every game I end up wanting very different logic for player health and enemy health. I also often end up wanting different logic for different types of non-player entities, e.g. wall health and mob health. If anything I've found that trying to generalize this as "one health" leads to unclear code full of if player { ... } else if wall { ... } inside my health system, instead of having those just be part of the big fat player or wall systems.

---end quote---

The solution here is to have a Health component but not a generic Health system (actually, a generic Health system sounds like a code smell for another reason, because systems map to actions, while components map to attributes; systems that interact with a Health component would be things like a damage or healing system) -- but if you need something to work different for player health and wall health and enemy health, you can just have three systems, which, respectively, do Query<Player, Health>, Query<Enemy, Health>, Query<Wall, Health>.
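Roughly, in Bevy-flavoured code (derives and exact signatures vary by version, and wiring the systems into an App/schedule is omitted):

```rust
use bevy::prelude::*;

#[derive(Component)] struct Health(f32);
#[derive(Component)] struct Player;
#[derive(Component)] struct Wall;

// Separate systems, each with its own rules, all sharing the same component.
fn player_damage(mut q: Query<&mut Health, With<Player>>) {
    for mut hp in q.iter_mut() {
        hp.0 -= 1.0; // player-specific logic (regen, i-frames, ...) goes here
    }
}

fn wall_damage(mut q: Query<&mut Health, With<Wall>>) {
    for mut hp in q.iter_mut() {
        hp.0 -= 0.1; // walls take chip damage, say
    }
}
```

Each system keeps its own logic while Health stays a single shared attribute, so there's no `if player { ... } else if wall { ... }` branching inside one mega-system.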


Excellent article. Rust is my favorite language for several uses, but I'm becoming less optimistic about it as a versatile language long-term. An important point regarding the article is that it mixes rust shortfalls, and shortfalls of any language other than C++ for games. I've personally used Bevy for a 3D wave function renderer, but moved away from it in favor of a custom WGPU-engine due to Bevy's complicated ECS DSL.

I am worried because:

  - Games seem like a no-go, as articulated in the article
  - Web usage has been dominated by Async, with no signs of reversing. I have no interest in this. And, there is no promising Django analog or ORM.
  - Embedded support on Cortex-M, and to a lesser extent RISC-V, is good at its core, but the supporting libraries have a mix of A: the same failure modes listed in the article for games (driven by hype, mostly makers; serious companies are not using it), and B: also being taken over by async.
This is disappointing to me, because IMO the syntax, tooling, and general language experience of Rust is far better than C and C++.


Why is async such a dealbreaker for this guy? Especially for web dev.


I think the async Rust shortcomings are largely solvable, but I sympathize with everyone running into the numerous problems right now.


What's bad about it? Last time I used Rust was before that feature existed.


High cognitive complexity, incongruity with other language features, interface virality (the bemoaned "function coloring problem"). I think much of the downsides are more a result of slow moving language design than intractable fixtures.


This is a fantastic article. Thorough, nuanced, well-articulated, and rooted in lots of real experience


> Orphan rule should be optional

That has got to be the most "I didn't think this through" take ever.

While it's a known pain in the ass, not having it is a bigger pain.

The moment you allow this, you have to find a way to pick between several implementations - and they don't always have sane names.

Orphan rules prevent this from happening.


Someone who has experienced real problems as a result of a specific mechanism is not required to solve every single problem with alternatives to that mechanism before saying "this mechanism has caused me real problems and it'd be nice if there were a better alternative that didn't cause those problems".

> The moment you allow this, you have to find a way to pick between several implementation - and they don't always have sane names.

There are other possible solutions that don't involve that.

For instance, many applications would be quite happy with "There can be only one implementation", giving a compiler error if there's more than one.

A slightly more sophisticated rule would be "Identical implementations are allowed and treated as a single implementation". This would be really convenient in combination with some kind of "standalone deriving" mechanism, which would generate identical implementations wherever it was used.


Disclaimer: I'm aware you guys are working on relaxing orphan rules, and I wish you the best of luck. But as an outsider, orphan rule doesn't seem to be going anywhere soon.

And if the original poster had said that I would be ok. Instead what they said is:

> It's a great example of something I'd call "muh safety", desire for perfection and complete avoidance of all problems at all costs, even if it means significantly worse developer ergonomics.

This implies the writer didn't consider what happens if you "turn off" orphan rules. I.e. you don't trade perfection for developer ergonomics; you trade one set of developer ergonomics (the ability to implement any trait for any type) for another (having to battle two incompatible trait implementations from crates you don't own).

Either you have to manually specify how nearly every implementation is pulled (horrible developer ergonomics) or, even worse, you go into monkey patching territory.

> For instance, many applications would be quite happy with "There can be only one implementation", giving a compiler error if there's more than one.

Ok. But you still need a resolution mechanism to fix the error. Which implies manually solving ambiguity. And how do you solve it for blanket implementations?


> But you still need a resolution mechanism to fix the error

At least initially, the resolution mechanism could be "don't include more than one implementation".


Assuming you have no control over trait implementation, how would that work?

To reuse the canonical example:

    // crate "types"

    pub struct Thing;

    // crate "traits"

    pub trait Action {}

    // crate "alpha_v0" 

    impl traits::Action for types::Thing {}


    // crate "beta_v1" 

    impl traits::Action for types::Thing {} // Doesn't exist for beta_v0

Say that, via transitive dependencies, alpha_v0 and beta_v0 are imported, and due to a vulnerability you upgrade beta_v0 -> beta_v1 (let's assume that trait impl is essential to the fix).

Now what? You either have to skip future updates, or keep the vulnerable beta_v0 crate.


Yes, that's the downside of relaxing the orphan rule. That doesn't mean there has to be a way for the top-level crate to work around it beyond avoiding having both of those in its dependency tree.

Ideally, people would tend to put the trait implementations for a given pair of crates in a single dedicated crate, implement them in the obvious way, and provide that as a library for people to use.


You are understating it.

That's not a downside, that's dependency hell with trait coherence landmines.


> This implies the writer didn't assume what happens if you "turn-off" orphan rules

It implies that the writer would prefer using a different language, not that Rust would be better if it were all the same but with the parts he doesn't like taken out.


> It implies that the writer would prefer using a different language

That's not what they wrote.

> There are mostly valid reasons for wanting the orphan rule for things such as libraries uploaded to crates.io, and I am willing to concede that crates published there should obey this.

> But I have a very hard time caring about this rule for applications and libraries developed in end products. I'm explicitly not saying binary crates, because most bigger projects will be composed of more than one crate, and many will be more than one workspace.

It talks explicitly about Rust. In hindsight, the author might have been better suited by another language, but expecting Rust to suddenly become another language is weird.


C minus minus is needed to strip away as much as possible from C++


Yup. Rust is too complicated. You don't think about getting the job done, you think about language-specific crap that is just overhead. And it's ugly to look at.

I tried to like it, but I can't. It doesn't align with my way of thinking.


I've been thinking of embarking on a Rust GUI/game project. Guess this will save some grief. Is Rust the future? Will this situation improve? I've been wanting to like Rust, but this seems to be an indictment.


It’s unlikely to improve anytime soon, and there are no indications that Rust will be the future of game programming.


> Is Rust the future? Will this situation improve?

I don't follow this too closely, but I have not seen any indications that any of the frustrations expressed in the article are on a Rust roadmap.

IOW, just about everything in that article is a `WONTFIX` or `NOTABUG` for the Rust maintainers.


Really well written, and well explained without any bait


Jesus, now that's what I call comprehensive.

On the "hot reloading" remark: I believe that, to some extent, compiled languages that lean into metaprogramming are innately at odds with the concept of hot reloading. You're spitting out a (mostly) monolithic binary - rewriting that on the fly just isn't going to be reliable beyond an extremely basic level, and shoving it all into some kind of VM for the purposes of hot reloading introduces variance and general performance overhead that both mean that the "hot reload" environment is no longer an accurate depiction of the real application's behavior.


the hype is starting to come down


You might not need memory safety.


You do need memory safety for multiplayer games, though. Either that or you should use a GC language, which frankly is good enough for most indie games.


I think there's a certain type of mind that can truly appreciate Rust as a language and enjoy developing with it. When I see quotes like,

> I wasn't thinking "what's the right way to get a random generator in here" or "can I assume this being single threaded" or "am I in a nested query and what if my archetypes overlap", and I also didn't get a compiler error afterwards, and I also didn't get a runtime borrow checker crash. I used a dumb language in a dumb engine and just thought about the game the whole time I was writing the code.

and,

> The prevalence of perfectionism and obsession with "the correct way" in the Rust ecosystem often makes me feel that the language attracts people who are newer to programming, and are easily impressionable.

I see someone who simply does not think about those kinds of things when writing code. And that's completely, entirely fair. Rust is not for them, then.

But they seem to act like it's an issue with the language that it does not serve them specifically, and their way of thinking in particular, not even pertaining to game development.

Because the thing is, I don't suffer from the issue they're describing. I don't find it difficult or distracting to think of edge cases and implementation details when I am writing out a solution. In fact, I can't help it. I love Rust, because its strong typing and strict static analysis actually support and justify my way of thinking. They're not obstacles for me to overcome, certainly not like how it's described here.

When I use a language like JavaScript, people tell me that I care too much about details or that I don't need static type information because I can just assume. JavaScript is not for me, because it doesn't support my way of thinking. It is terrible and sloppy and completely unchecked. It's absolutely great for banging things out without giving a care in the world about a single implementation detail that isn't relevant to the actual problem at hand. It's terrible if you actually do care about those implementation details, because nobody else who writes JavaScript does. Everything you interact with is going to be just as shoddy as the language itself.

(I have a job writing JavaScript, so this doesn't mean that I can't use the language. It just means I do not like it. I do like TypeScript.)

So this might just be a fit issue. I haven't read the rest of the article yet, because this stuck out to me. I see some other HN comments talking about async code and GUI libraries, and those are all completely valid concerns, but in the article these valid shortcomings are seemingly mixed with what I'm going to call "neurotype issues". In other words, I suppose the author just isn't autistic enough. It has nothing to do with being new to programming or not.

And that's fine. It's just not an issue with the language that it doesn't serve you as well as it serves others. After all, Haskell is the same way. I'd say most programming languages are the same way.


&str vs String. Oh boy.



This is missing a few useful ones, like Box<str>, Arc<str>, Cow<'a, str>, SmallVec<u8>, transmuted newtype references like &UserId, and of course the string type you implemented yourself because the previous ones were not good enough.
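
A small illustration of why that zoo exists: Cow<str> is the usual middle ground when a function sometimes borrows and sometimes has to allocate (just a sketch, not from the article):

    use std::borrow::Cow;

    // Borrow when no change is needed, allocate only when it is; this is
    // the kind of distinction the &str/String split forces you to spell out.
    fn normalize(input: &str) -> Cow<'_, str> {
        if input.contains(' ') {
            Cow::Owned(input.replace(' ', "_"))
        } else {
            Cow::Borrowed(input)
        }
    }

    fn main() {
        println!("{}", normalize("hello world")); // owned: "hello_world"
        println!("{}", normalize("hello"));       // borrowed, no allocation
    }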


Yeah, if std::string vs std::string_view is "a major hurdle"... I don't know what to say.


Where did you see "a major hurdle" written?


You are doing exactly what is being criticized in the Rust community. Bad faith reads of someone who spent significant time getting into the language.


tl;dr -- Rust is not the language for startups that want fast iteration and are still finding their product-market fit. It is perfect for rewrites of popular system software that already has millions of users and is running in production, because once software becomes mature and reaches million-user scale, safety and security become paramount.


> for rewrites of popular system software which already has millions of users and running in production

Hopefully, it's perfect for more than that.

Rewriting a system that's already achieved that scale is a contentious decision, to say the least. Many of our colleagues have been lost to and scarred by well-meaning attempts to rewrite those systems in the past, including many who were sure it was the right choice for "safety and security" themselves. For every conceptual safeguard new tooling might give you, you invite countless and sometimes catastrophic regressions in actual application logic when writing new implementations. It can work out, but no tool can expect to live only on that kind of work.

I'm sure Rust has much broader applicability than that. Or I hope so at least.


At million-user scale, feature parity and compatibility are truly paramount and, `rg` aside, Rust utils don't have a great story to tell there. Such software has evolved over the course of decades and its (usually despair-inducing) code is tailored for dealing with a tremendous number of edge cases, platforms, and architectures. Rewrites in Rust tend to cover "the cool parts", and indeed write them better and faster, but that is just not good enough to replace the standard tools.


Honestly, I have not read the whole article. What language are they going to use instead of Rust?


C# with Unity.


The TLDR I got from that: Normal coding has two concerns:

1. What behavior do I want?

2. How am I implementing that behavior?

Experimenting with #1 is slowed down by the current end point of #2. Sometimes not at all, sometimes a lot, depending on luck and how well #2 anticipated the class of experiment I am trying.

Rust adds:

3. How can the implementation code be organized so that Rust's stringent safety checks are satisfied?

Now experimenting with #1 is complicated by two dimensions of design history instead of one. And the latter dimension, being two steps of abstraction/design dependency away from concern #1, is going to be very brittle.

Never used Rust, but that sounds painful.


Well, I needed something to do while I wait for the next Vampire Survivors addon. Purchased the full bundle. Good luck, game devs.


I forgot, when you say anything remotely negative about Rust or Ruby on HN, the idiots get butthurt


> The problem you're having is only a problem because you haven't tried hard enough.

You just need to read another 50,000 word fasterthanlime essay. Then you'll not have problems any more.

> That being said, there is an overwhelming force in the Rust community that when anyone mentions they're having problems with Rust the language on a fundamental level, the answer is "you just don't get it yet, I promise once you get good enough things will make sense".

Not only this, but they openly mock other language communities for not drinking the koolaid.

I like Rust, and the Rust community, and fasterthanlime, for what it's worth. But I think these points raised in the article are very much valid.


Parts of the Rust community are toxic indeed, but I've been around long enough to recognize the same pattern in communities of other hot programming languages or frameworks in their up-and-coming phase.

There's something about the new hot language/framework/paradigm that always attracts the type of people who wrap their identity up in the hot new thing. They take any criticisms of their new favorite thing as criticisms against themselves, and respond defensively.

What I see now in certain Rust communities feels a lot like what I saw in Golang communities in the early days. I had similar experiences when React was the new kid on the block, too.

Some of these communities/forums are just inhabited by people who thrive on dunking on noobs all day long. The good news is that they tend to get bored and move on when the next new thing comes out.

I'm already seeing this in my local Rust community as the chronically online people are moving on to Zig now. They even use one of the Rust discords to advertise their Zig meetups and proselytize about Zig at every opportunity. Eventually they'll get bored and move on.

> > That being said, there is an overwhelming force in the Rust community that when anyone mentions they're having problems with Rust the language on a fundamental level, the answer is "you just don't get it yet, I promise once you get good enough things will make sense".

I don't fully agree with this assessment. In my experience, Rust community members who arrive with well thought out complaints or suggestions are welcomed by the people who like working on programming language fundamentals. You can nerd snipe a lot of Rust enthusiasts by showing up with a difficult problem involving tricky parts of Rust, which will often result in some creative solutions and discussions.

On the other hand, Rust communities are inundated with people trying to write Rust as if it was their old favorite language. I've spent a lot of time trying to get people to unlearn habits from other languages and try to adopt the Rust way of doing things. Some people refuse to let go of paradigms once they've used them for years, so doing things the Rust way isn't going to work for them. I've worked with enough people who spent more time fighting against Rust than trying to adopt Rust ways of doing things that I've also reached the point where I don't engage once I see the conversation going that way. Can't please everyone.


"On the other hand, Rust communities are inundated with people trying to write Rust as if it was their old favorite language."

This is perfectly understandable if you view languages as tools. If the new tool can't do something the old one did, then you'll have questions about using it. A great example is inheritance - if inheritance is missing, that's a negative for me. I don't care about the philosophy, I just want to use the tool to produce programs better and faster. If it's missing features, that's a negative point.


> What I see now in certain Rust communities feels a lot like what I saw in Golang communities in the early days. I had similar experiences when React was the new kid on the block, too.

Go was released in 2012, Rust in 2015. Are you saying we are still in the early days of Rust?


> Go was released in 2012, Rust in 2015. Are you saying we are still in the early days of Rust?

Release dates mean very little. Golang had rapid adoption early on. Rust only recently became one of the fastest growing languages. Golang stabilized a lot of things about the language and library early on. Rust still has a lot of common features gated behind nightly.

There is a fun tool for comparing the number of PRs on Github by language that shows the difference: https://madnight.github.io/githut/#/pull_requests/2024/1

So yes, I think we're in the early days of Rust.


Ruby was released in 1995 and didn't really have its heyday of developer hype until Rails came along a decade later. Python didn't start gaining serious traction for about a decade after its 0.9.0 release either. Go's immediate uptake seems like an outlier in programming language lifecycles.


Yes.

C was invented in the '70s and only got standardized 20 years later.

And Rust's ~20 years is young in systems lang terms (the alternatives, C and C++, are 50 and 40 years old).

And nobody had TikTok back then.


> And Rust's ~20 years

huh? It's only been about 9


It was started in 2006 and sponsored by Mozilla from 2009 on.


You’re moving the goalposts though; the question was about Go, which 3 years ago was nothing like Rust is today, because it’s a much more pragmatic “here are the trade-offs we had to make and why” language. Growing into and out of a cult isn’t a natural part of a language’s evolution the way the GP comment suggested.


This is definitely untrue. I was bullied out of the Golang community for asking about generics when I first started learning it. Obviously, I don't think my experience is indicative of the entire community, but it left a bad taste in my mouth.


> Parts of the Rust community are toxic indeed, but I've been around long enough to recognize the same pattern in communities of other hot programming languages or frameworks in their up-and-coming phase.

Yeah. I think there’s also a weird way that newcomers get treated when they join a community. When you’re a newcomer to Rust, you probably have some preconceptions about how Rust should work based on the other languages you know, and you’re probably running into a lot of the same problems with Rust that everyone else has (e.g. borrow checker).

Most of the community is just kinda tired of the discussion and tired of answering the same questions, so they don’t interact with the noobs at all. That means that the people who, as you say, “thrive on dunking on noobs all day long”, are the primary point of contact for noobs.

Finding a decent programming community these days is a pain in the ass. The cool people, i.e., the people working on cool projects and getting shit done, are mostly busy and not hanging out with anybody.


> Parts of the Rust community are toxic indeed, but I've been around long enough to recognize the same pattern in communities of other hot programming languages or frameworks in their up-and-coming phase.

I would say that in general, this type of attitude permeates a lot of software engineering, and even engineering and science as a whole. When I speak with people in other fields, particularly more creative ones, the discussions are so much more improvisational and flowing. Discussions with software developers, engineers, and scientists have this very jerky, start and stop "flow" to them as you meet resistance at each step. I honest to god have had people telling me no or shaking their head no before I ever finished my question or statement, much less before they even understood it.

> There's something about the new hot language/framework/paradigm that always attracts the type of people who wrap their identity up in the hot new thing. They take any criticisms of their new favorite thing as criticisms against themselves, and respond defensively.

You're spot on about the coupling of someone's identity with it. Rust especially seems to also have this never-ending gold rush to be the next framework and library everyone uses, which creates a very competitive landscape. And it seems most frameworks and libraries get 80% of the way and then fizzle out once people realize Rust isn't a magical language that solves all your problems.


Conversations that people have in science and engineering are more analytical and pessimistic, where they try to clarify and set the boundaries.

This is the opposite of creative and optimistic conversation, where you break boundaries between things to create something new.

This analytical attitude in conversations comes from the fact that software engineers usually do the creative part of their work alone and use communication for the analytical part (code review, requirement clarification, etc.). For some creative people it's natural to do creative work together, especially in music, so it's easier for them to adopt a creative attitude in conversations.


There are many different ways to solve problems. Having tunnel vision will exclude most of them. Critical thought has its place in any field, but many scientists and engineers will hide behind so-called analytical thought when, in reality, their ideas are more biased than they'd like to admit.


I think problem solving has analytical and creative parts too. Like in Polya's 'How to Solve It' you have clear analytical steps in the beginning (what's known, what's unknown, etc), then 'Boom! Heuristic!', then again analytical steps for reflecting back on the solution (corner cases, did you use all the inputs, etc).


Unrelated: the word "toxic" is nearly meaningless nowadays; e.g., "dunking on noobs" is more specific.


Rust is a language with a very high mental cost to use effectively. It requires some fundamental paradigm shifts in the way you write software, hence it's a hard language to be productive in.

For some tasks, the tradeoff is worth it. Some individuals are naturally very inclined to use it.

But for most people, for most tasks, it's simply not the right tool. It's not a general purpose language by any stretch of the imagination.


If there were only two programming languages in the whole world, just two, no more; one better fitted for OS kernels and one better fitted for game programming, which one would be considered a general purpose language?

I would argue OS programming is definitely more general than game programming. Rust is great for OS-related tasks, math, deep learning and other stuff, and maybe not so great at GUIs, or even bad at that, or just inconvenient, as the article points out.

That means it should be used for general stuff, and a niche language, like Lua, Python, etc., should be used for programming games. Especially for fast iteration and experimentation, where you break things and move lightning fast, typed languages are at a disadvantage for sure compared to untyped/weakly typed ones.


I would argue that OS kernel programming is actually a very specialized niche of software engineering; it demands near-the-iron performance, fine control down to bit-level granularity, perfectly deterministic memory manipulation, and flawless security and maintainability, all at the same time. These are conflicting requirements, and meeting them naturally spills over into decreased productivity, which is acceptable because OSes are well engineered and evolve over decades.

If C or Rust would not be available, you could still write your kernel in a well documented assembly dialect - some people still do - and then complete your OS userland in some garbage collected/interpreted/WASM abomination. But it's unlikely you could write Microsoft Word or a modern browser in assembly.

And this I think is true for games too, since, other than the performance requirement, they don't need the other features of Rust and are quite tolerant of failure. They also tend to be unwieldy beasts that are hard to design cleanly, because the internal world simulation becomes ever more complex, imposing weird data access patterns, etc. If your online play code suddenly requires access to some internal "tank fuel" state, in C you would just pass the pointer and pray to the gods of rock and roll that it's still valid at access time; a garbage-collected language would at least guarantee you that; but in Rust, you are looking at a complete refactoring of the entire "tank" module to make the new access pattern fit the ownership model.

As the post explains, this will kill quick iteration, which I think is a very general need of most programmers.


>> The problem you're having is only a problem because you haven't tried hard enough.

And it actually does work that way with Haskell, in my experience. There's a big hill to get over where you flail against the type system, IO monad and all, and then you realize that, while Haskell's type system isn't perfect, being able to say

    Num t => (t -> b) -> [t] -> [b]
is really pretty powerful, and being able to search for functions by type signature is just plain convenient.

But Haskell isn't for every problem in the way Rust is apparently made out to be. For example, I've seen some posts about developing games in Haskell, but it isn't common, and nobody would try to push a game developer into using Haskell.


> being able to search for functions by type signature is just plain convenient.

It's certainly not all the way there, but now that we are past the mostly empty promises of encapsulation and (worse) of trying to model the world with classes, grouping by that implicit first-argument type for discoverability is exactly what OOP is, in this age of post-OOP.

The "you're not trying hard enough" position reminds me a lot of the Scala community's disdain for java-in-scala-syntax: if you don't use it like an almost-Haskell you're on your own (1). Here, I think Rust is actually more open: I don't read too much of the community, but most of what I saw seemed to be quite welcoming to the idea of placating the borrow checker with refcounting when "the rust way" fails to sufficiently click.

((1) but with Kotlin having taken over the entire not-almost-Haskell part of Scala it's not an issue anymore: it has its niche and it fills it well, no more taking over the entire JVM while also magically converting everybody into an almost-Haskellian)


Maybe https://dev.epicgames.com/documentation/en-us/uefn/verse-lan... is as close as we're going to get to game dev Haskell.


FWIW, I believe Else Heart.Break()[0] was written entirely in Haskell.

[0]: https://en.wikipedia.org/wiki/Else_Heart.Break()



The link works, but the link parser on HN (of all places) is confused by the parens.


Agree. Some languages, like Haskell, have a steep learning curve until you grok them. Then you get a big productivity gain.

But this article seems to be about someone who has put in the time, has written a lot of Rust, and now says it wasn't really worth it.


What does this have to do with the topic?


I'm comparing one unusual language (Rust) with another language (Haskell) and examining why the first gets push-back whereas the second gets more positive mentions.

They both have a single odd idea that's hard to wrap your head around at first (Rust's borrow checker, Haskell's type system with its enforced immutability by default), and that idea intentionally drives program design, in that both languages demand you write your program such that the static analysis can prove it's valid. But Rust's borrow checker seems to get more people throwing their hands up and leaving the language than Haskell's type system does.

Haskell's type system can be a bit inexpressive sometimes, too, and there are language extensions which address some of this, but it seems to hit more of a sweet spot where, in most programs, if the type system is forbidding something, you really do have some kind of error in your thinking leading to inconsistent code. Plus, it enables design instead of just forbidding, because you know that any function which is valid in the type system is at least sensible to use in a given context.


This reminds me of the Lispen of ye olde... wait until it clicks, man!


He mentions Common Lisp as something he did a game in, and you can see the link he praises https://www.youtube.com/watch?v=72y2EC5fkcE as basically a very well-developed Lisp image approach with time-travel debugging etc. aimed at game dev. So the implicit comparison to Lisps is sort of a big undercurrent to this whole article.


A lot of the stuff he mentions he wants I kept thinking "Oh, you want common lisp!"

I dabbled in some game dev in CL and loved it. I think the ideal would be low level engine, lisp on top driving it. If you could do it without the FFI barrier slowing things down a crazy amount, it would be the killer combo.


Lisp is pretty normal if you're accustomed to, like, Python: It's a GC'd procedural language.


Haha, nice one! :-)

(can only assume you're joking)


Since every other language now has higher-order functions and lexical closures and things like that, Lisp isn't as special as it used to be. Now it's mostly just a really ugly version of those languages, but with a weird emphasis on recursion and a very fancy macro facility.

I find I can move between dynamic languages and Scheme pretty easily.


As someone who writes Elixir all day, I really love what Lisp is, but I feel Lisp's major weakness is that it makes everything unergonomic in order to make macros very ergonomic. You will never find another language in which writing macros is easier. The problem is that it comes at the cost of making everything else a bit of a pain in the ass.

I love Elixir because it feels like the syntax Clojure would have if it had a syntax, IMHO.


> major weakness is that it makes everythign unergonomic in order to make macros very ergonomic.

Generally Lisp has two main goals:

* allow code to be data and data to be code

* make coding and coding code (let code manipulate code) interactive

The s-expression based syntax was found to be useful for both. For the latter it means that code can be manipulated interactively for example by structure editors or in read-eval-print-loops which work with data. That makes interactive code writing very ergonomic.


> it makes everythign unergonomic in order to make macros very ergonomic

I think 'everything is unergonomic' is too strong of a statement to be true.

I'd argue that lisp is very ergonomic for those who work with it regularly (and are into using Emacs). It sounds like you are beefing with the syntax (or lack thereof) and homoiconicity of Lisp. I can understand that. It is a very different language than others currently in the mainstream for that reason.

As far as overall ergonomics, I'd say the REPL/image-based development style and the macros that are enabled due to homoiconicity actually make it one of the most ergonomic languages in existence.

The biggest thing that prevents Lisp-likes from going mainstream is that it is too ergonomic: specifically, when you start reading a Lisp codebase, you are essentially signing up to learn a new project-specific dialect. Very ergonomic for writing code, but at the cost of understanding how that code operates.


> lisp's major weakness is that it makes everythign unergonomic in order to make macros very ergonomic.

That is simply false. The ergonomics of editing Lisp is also superb.

There is a consistent, logical way to break any Lisp expression into any number of lines of text. The more you break Lisp into multiple lines, the more clearly the tree structure of the code is revealed.

The absence of ambiguity helps readability: not having to guess which expressions are children of what operator.

Lisp code sometimes makes up for the parentheses by omitting superfluous punctuation like commas and semicolons. To add two terms, we need ( ) and +. But that's all we need to add 17 terms also.

Imagine if the Unix shell required commas between arguments:

  ls, -l, *.c
that's how Lisp programmers feel when back in a non-Lisp.


Lisp is homoiconic which most other languages are not. This makes it useful for certain applications over others.


Yep, but that and the closely related macro system are about the only unique features left. I’ll add “has a spec” and “is reasonably fast” too.


Julia has convinced me that the value of this is limited. It turns out to be enough to have the AST as a first-class value in the language, which can be manipulated and emitted via macros at compile time, or fed to eval at run time.

The more usual Lisps (I consider Julia, like Dylan before it, to be a Lisp) give up most of the advantages of syntax in exchange for an admittedly eloquent macro system. Julia gets the advantage of syntax, and a macro system which is of equal expressive power but less easy to use. Also, in Julia, a macro leads with `@`, the way a macro in Rust ends with `!`, meaning you can read the difference between a macro and a function at the call site. I consider that a good thing.
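
A tiny illustration of that call-site distinction on the Rust side:

    fn log(msg: &str) {
        println!("{msg}");
    }

    fn main() {
        log("plain function call");        // no sigil: an ordinary function
        println!("macro call: {}", 1 + 1); // the trailing ! marks a macro at the call site
    }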


Some jokes are serious, and some serious statements are jokes. Perlis's epigrams are one example.


I had to look up what Perlis's epigrams are. Worth a read!

https://iiif.library.cmu.edu/file/Simon_box00075_fld05959_bd...


Yeah, this hits hard. Despite being proficient in a bunch of languages in college, I could just never wrap my poor brain around Lisp.


That’s sort of different, right? Rust is just another systems language. Lisp is the language that nobody knows but everybody secretly believes is better than their language of choice.


This was my biggest problem with Rust: the community is friendly at a surface level, but the moment you try to say something against the grain you get met with some of the most insufferable people ever. I tried to mention a problem with a lint on the official forums and the replies were so condescending (from regulars, at that). And at no point did they try to understand the issue; they just assumed I was dumb/new/doing something wrong.


The attitude is perhaps not surprising from a community whose unofficial tagline is "Rewrite it in Rust".


Sounds like a no-true-Scotsman fallacy.


>> The problem you're having is only a problem because you haven't tried hard enough.

>You just need to read another 50,000 word fasterthanlime essay. Then you'll not have problems any more.

I'm reminded of a line from Hachiman in Oregairu: "Something is only a problem when someone makes it a problem."


> I'd argue as far as maintainability being the wrong value for indie games, as what we should strive for is iteration speed.

That seems to be the crucial point. Rust is optimized for writing complex systems software in large teams. That’s not a great fit for a small team hacking on something that is at least in part an art project. You wouldn’t choose something like Ada for that either.


I've ended up with similar thoughts about automated testing in games too.

I really enjoy having automated tests, and automated tests solve problems. Like, I absolutely love our Molecule test suite for our infrastructure code, and it gives us huge value. The smoke tests we run against the infrastructure after deployments are amazing. It's liberating in an environment of rapid and chaotic changes everywhere, complex interdependencies, and pressure from the top.

However, if I try to transfer that kinda approach to a game it just... fails?

But the realization is: in games, many behaviors and systems are actually far simpler and less intertwined in arcane ways, and the code actually changes less frequently and dramatically than at work.

I could see structured testing in e.g. turn-based strategy games and such, so you can test that the culture per turn is correctly calculated across many different situations.
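
As a concrete sketch of that kind of test (the per-turn rule here is entirely made up):

    // Hypothetical rule: each city yields 2 culture per turn, plus 1 per monument.
    fn culture_per_turn(cities: u32, monuments: u32) -> u32 {
        cities * 2 + monuments
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn culture_accounts_for_cities_and_monuments() {
            assert_eq!(culture_per_turn(0, 0), 0);
            assert_eq!(culture_per_turn(3, 0), 6);
            assert_eq!(culture_per_turn(3, 2), 8);
        }
    }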

But in many smaller projects I've had, you write some janky code, make sure the enemy behaves as expected... and never touch or change that piece of code ever again. And it just doesn't break, because no fundamental part below it ever changes, because then the entire house of cards would fall apart.

It feels dang weird, but it works very well for smaller projects.


I copied this exact same snippet and was going to comment the exact same thing.

Why choose Rust if you don’t care about maintainability and long term stability? Those are core values of the language!

The language choice was wrong from the start. C++ is king for games so if you care more about delivering features and fast prototyping why not go with that?

Maybe Rust is not a good language for rapid iteration in the game industry. And that’s ok I think.


C++ is king for game engines, but many games opt for languages like Lua or Unreal Blueprints because C++ is to those as Rust is to C++

Personally I like Javascript


Right. Yes, you are correct. I assumed C++ was the language they replaced with Rust because they wanted to write lower level stuff, like their own engines (and they mention doing that).

It makes even less sense to use Rust to replace one of the higher level languages like C#.


Rust is not antithetical to iteration-based programming; it just makes you write a lot of heavy boilerplate to explicitly support that kind of style. The flip side is that once the 'iteration'/'prototyping' phase is over, you can actually refactor the prototype into high-quality production code, instead of either throwing it away altogether and rewriting it from scratch (spoiler alert: this doesn't really happen most of the time, because it's viewed as pointless waste) or just putting it in production as-is (which is what people actually do, even though it's obviously a disaster in the longer run).


No, Rust is pretty antithetical to iteration-based programming. The language basically requires you to plan for the ownership model from the beginning, and it can be quite difficult to retrofit a changed ownership model into an existing program.

I've run into this in a side project I'm working on, where my indecision over which ownership models are actually workable in the API to satisfy my needs means almost all of the coding is spent just proving I can get an ownership model working, rather than on skeleton code that can do something. And still, even as lacking in functionality as this codebase is, swapping to a new ownership model takes several minutes of coding. Trying to do this kind of exploration on a codebase that has real functionality would mean spending hours of changes just to move some property from here to there.


If you're genuinely unsure about the ownership model (this is pretty rare in practice though) you can just use Rc<>/Arc<> (which allow for shared ownership via refcounting) and be no worse off than if you were coding in Swift.
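
A minimal sketch of what that looks like in single-threaded code (Arc<Mutex<...>> is the threaded equivalent):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Tank {
        fuel: f32,
    }

    fn main() {
        // Shared, refcounted ownership with interior mutability: both the
        // world and, say, the networking code can hold a handle without
        // deciding up front who owns the tank, much like in a GC'd language.
        let tank = Rc::new(RefCell::new(Tank { fuel: 100.0 }));
        let network_view = Rc::clone(&tank);

        tank.borrow_mut().fuel -= 12.5;
        println!("fuel seen elsewhere: {}", network_view.borrow().fuel);
    }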


Surely there are better choices if the primary desire is iteration speed, and a secondary or maybe even tertiary desire is maintainability/refactorability.


Rust gets in the way a lot, as it's supposed to for safety. Maybe it'd be a lot faster to iterate on if some AI could auto fix your code to make the compiler happy.


[flagged]


I think most people here understand that a language that is great for kernel development isn't necessarily great for everything else.


I work full-time in Rust (embedded stuff) but actually think the "everything else" influence on Rust is stronger than the systems influence, and it's harming the ecosystem.

Just try finding e.g. an MQTT or WebSocket or etc library that doesn't drag the whole mammoth tokio ecosystem (which is really geared for Web Scale! projects) in with it.

Rust is becoming the language that tokio ate, and Cargo/Crates.io the new NPM.

That is, Rust is the systems language that a wave of non-systems developers insist on using, leaving behind a trail of non-systems-appropriate crates and projects.

Parent commenter's comment was crap by HN's commenting standards... but the underlying point about trendiness I think is in fact accurate.


I used Rust professionally up until this year (since before async landed) and IMO you've hit the nail on the head. The shoehorning of async into the language and ecosystem seemed to be driven by the belief that if Rust can't attract Go or Python developers then it will fail. In fact, Rust was doing fine in its niche attracting C++ developers, and trying to make it everything to everyone has diluted what was great about it.

(That said, your example might be a bit exaggerated as I found mqtt libraries that don't require tokio).


Just like C++, teams will end up having to carve out a sane subset comfortable for their domain.

For myself that's looking more and more like:

Ditch crates.io (and maybe even Cargo), carefully curate and vendor all dependencies.

Probably avoid async, but definitely avoid tokio.

Don't get excessively clever. (Here I think Rust does a better job than C++ of having good "community standards" already.)


I might be inclined to buy this complaint if tokio were part of the standard library. But it isn't, and I just can't see how its mere existence and tree of dependents has a negative effect on the "ecosystem"; it's not preventing anybody interested in "embedded stuff" from producing high-quality libraries that are not dependent on tokio, and I don't buy that there is some fungible mindshare being taken away. This to me seems more a gripe that Rust is not being adopted quickly enough in certain domains, which I believe has far more to do with both reasonable and unreasonable inertia.

(How is tokio fundamentally different than boost.asio and beast in this regard?)

> That is, Rust is the systems language that a wave of non-systems developers insist on using, leaving behind a trail of non-systems-appropriate crates and projects.

I have a hard time understanding how this "trail" of stuff (evoking imagery of pollution), which no one is forced to depend on and which is not promoted in any special way in the core library or language, is any meaningful impediment to developing and distributing libraries more appropriate to the embedded domain, but perhaps one can explain how.


It can be though. You should be able to express low-level details but also implement high-level constructs effortlessly should you so choose.

Look at C++, which can both dick with move semantics and offer multiple inheritance. Even better, look at C#, which has pointer fiddling but also offers the best reflection of any language today.

You're right that Rust is only really good for kernel development but it didn't have to be that way.


Oh the language is good, it's the people. Cult-like.


> Oh the language is good, it's the people. Cult-like.

I used to read something similar all the time around 20 years ago: I don't have anything against Jesus. It's his fan club that's a pain in the ass.


Would that make Rust a cargo cult?


... complains about people using tropes and tired cliches in posts while using the worst of them, themself


I do not complain. I've seen this trend multiple times by now. The first time was with Ruby on Rails, I think [1], then the cancer that led us to write JS to output HTML and all the good stuff that came, and still manages to pour, from npm. Every now and then you can spot the "fashion" in these things.

Rust is good and has earned its place. I just despise cult-like followings for these languages and technologies.

[1] https://www.youtube.com/watch?v=YZeZsZEEpno&themeRefresh=1


There's no need for you to blather about "woke" and drag in US culture wars nonsense. It's distasteful, glib, and frankly not very intelligent, and just subtracts from whatever your point was.


As I said above, cult-like. They will swiftly call you names if you even dare to question their beliefs. Thank you for proving me right kind sir.


Your post is "flagged" to oblivion, just like my earlier post questioning the "inclusive" culture of the Rust community.

What is it called when a person or group's actions don't match their words?


> Your post is "flagged" to oblivion

Personally, I think downvoting insults is more inclusive than not downvoting insults.

> just like my earlier post

You made an entire submission asking users of a specific programming language to defend themselves.

It's a bad HN submission.

Flagging it is neither inclusive nor uninclusive. It's just marking it as a bad submission.

How do those actions fail to match words? What words do you have in mind?

Also the HN users that flagged it probably don't even use Rust.


So your magic ball tells you why people hit the "flag" button on my post. Thank you for sharing that.

For a moment I thought every hint of criticism towards Rust is suppressed or censored by the Rust zealots.


Now they need to try blockchain :)


hah Rust didn't get gamedev (success at all costs) and it's a disaster


Zig zig zig zig zig :D


Congrats on the game launch. Funny way to promote it. Good luck, hope you get some sales!


This is a decent article, but although the points themselves are valid, I think there's a core "issue" with (indie) gamedev itself.

The vibe that I'm getting is that it's filled with people who don't particularly care about programming; they just want to get stuff done(TM). This is also highlighted by the fact that they are willing to write completely inadequate code just to see things working. Rust is not that, and that's a good thing.

More generally, I'd say that in gamedev anything goes, as long as it's fun and isn't too buggy. Rust is not, and never will be, able to accommodate that mindset, which, again, is a good thing if you think for 2 seconds and consider what Rust is actually aimed at, which is safe systems programming.

You can have the core engine written in Rust and a scripting language on top of it; there aren't any major pain points in this regard. The scripting language will be able to provide all of this hot-reloading-anything-goes-yes-sir bullshit that we all know and love.
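
A minimal sketch of that split, assuming the rhai crate for the scripting side (any embeddable language would do):

    use rhai::Engine;

    fn main() {
        let mut engine = Engine::new();

        // The Rust "engine" exposes a few fast, safe primitives...
        engine.register_fn("spawn_enemy", |x: i64, y: i64| {
            println!("spawning enemy at ({x}, {y})");
        });

        // ...while gameplay logic lives in a script that can be edited and
        // re-evaluated without recompiling the Rust side at all.
        engine.eval::<()>(r#"
            spawn_enemy(3, 4);
            spawn_enemy(10, 2);
        "#).unwrap();
    }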

tl;dr: Use the right tool for the job. A language designed for safe systems programming can't do non-safe non-systems programming very well. Who knew!

Virtually all of the points outlined in the article stem from the above.


It's a weird concept to "care about programming". Same as "inadequate code". I find this statement really condescending, and it completely confirms some of the cliché quotes that the author wrote in this article.


By "care about programming" I basically mean writing maintainable code.

By "inadequate code" I mean code that does what the author wants at this point in time, but is completely unmaintainable and just bad. Sloppy practices, etc


Some people "care about shipping a product" others "care about programming".

That is why virtually no videogame is made with Rust.


In other words, "shipping a product" is incompatible with decent code.

This is why virtually all proprietary codebases aimed at "shipping a product" are a clusterfuck.


This infantilizing "Everyone that cares about development speed is making shitty garbage" is so obnoxious. You're not omniscient. You don't know any of these codebases.

There's so much software out there, and lots of it works. Focusing on the small amount that could be messy and insisting you're not like the "sheeple" coding it is so lame.


Well sorry, but it wasn't obvious

I still think it's a bit condescending because, at the end of the day, especially in indie games, there are fewer maintainability constraints; you just want to ship something that works, fast. On top of that, the performance constraints force you to do things perceived as heresy in other areas (global singletons, functions you can call from anywhere, not catching any exceptions...).

And finally, since when you're making a game you're often trying to do something innovative and very specific, you can't really just pick up a book about design and implement it.

So, I think it's not about "caring" or not; it's just that it's useless overhead.


It's said that software is never finished, only abandoned, but games actually do get finished. And at that point any maintainability is absolutely irrelevant. It's just very different from web programming.


Some software (other than games) can actually be fully finished, provided that its scope is narrow enough.

It's really a question of differentiating between "finished" and "abandoned". Finished usually means that no new features are added, the only changes are bugfixes or minor improvements. Abandoned means no changes at all.

> And at that point any maintainability is absolutely irrelevant

This goes in line with my point, though. The fact that Rust encourages writing maintainable code makes it unsuitable for gamedev, where speed of development and flexibility (even where it's bad) trump everything else. That's why you need to have a scripting layer on top of the core engine anyway.


Rust... what is it good for? "Systems programming" ...

Rust is not good for raw performance. Nor for prototyping and iteration.

Personally I think operating systems (kernels) should be as performant as possible, and C/C++ has been good enough for decades.

Anyone really unhappy with Linux/BSD/Windows/macOS performance?

What systems are we talking about that benefits from Rust? Advanced weapon systems that should absolutely not fail? Controllers for air planes? Traffic controllers? Radar? Power grid?

Google, FB, Amazon, etc. use C/C++ to squeeze the most performance out of anything I/O-heavy, and security is not an issue that deep in the stack; that's not the exploitation layer.


Ok, we're going from a bunch of complaints about Rust being bad for fast prototypes for indie development to the idea that Rust is bad at everything. This is silly.

"Syzbot and the Tale of a Thousand Kernel Bugs" [1] is my favorite talk on this subject. The Linux kernel is adding security bugs faster than they can be fixed. This is an unsustainable situation--it is not "good enough"--and nobody really has any ideas to significantly improve things beyond adoption of memory safety.

[1]: https://www.youtube.com/watch?v=qrBVXxZDVQY


> Ok, we're going from a bunch of complaints about Rust being bad for fast prototypes for indie development to the idea that Rust is bad at everything.

No, not at all; that's not what I meant. I think its forte for now is better security at the cost of some performance and productivity (iteration speed).


> Anyone really unhappy with Linux/BSD/Windows/macOS performance?

I wasn't unhappy with Windows' performance in the 90s until I saw BeOS. This is the kind of thing where we're in the dark about what could be, because the faster, rewritten-from-scratch, free-from-API-baggage systems are nowhere to be found.


Haven't used BeOS, but to clarify, I do have issues with Windows and performance, especially general input latency and responsiveness compared to Linux, as I use both daily. But overall I think it's decent, and at the very least it's not the fault of the language used, but more about legacy issues and technical debt. Anyone who has worked with WinAPI and MFC knows the codebase is a mess.


Well, as far as I can tell, in the big language shootout (which is still the only decent language benchmark I know, but if you have others, I'd be happy to read them), in all the individual benchmarks I've looked at, Rust is either first or within 3% of first place. So this suggests that Rust is actually pretty good for raw performance.

In addition, I know that I'm way more productive for prototyping and iteration in Rust than in C, C++ (and I used to write C++ code professionally for a while), JavaScript, Go, Python (and I've been writing professionally in these three languages for a while now) etc. I _might_ be more productive in Java or OCaml. Yes, this is entirely anecdotal and it depends heavily on the kind of problems you're dealing with. I tend to focus on problems that have complicated invariants, and for which reproducing/pinpointing the issue in a debugger is a big annoyance.

> Google, fb, amazon, etc. use C/C++ to squeeze the most performance out of anything I/O heavy, and security is not an issue that deep in the stack, that's not the exploitation layer.

Interestingly, all these companies are migrating some of their systems to Rust. This suggests that they find the language convincing enough.


> Interestingly, all these companies are migrating some of their systems to Rust. This suggests that they find the language convincing enough.

I feel most large companies and some smaller ones are interested in trying out Rust; it's trendy. But time will show for which parts the switch was advantageous, and I'm very interested in the findings. The premise of the language is indeed convincing a lot of people. People do, however, choose the wrong tool for the job all the time, the OP's article being a case in point.


Absolutely.


> Personally I think operating systems (kernels) should be as performant as possible, and C/C++ has been good enough for decades.

If that's how you want your OS, that's fair enough. But I think a lot of people are happy to trade (to some degree, at least) performance for security, and would prefer that their OSes are as secure as possible first, and as performant as they can be second.


No, I'm not concerned with Linux/BSD/macOS performance; it's pretty good (I took Windows out because it does have performance issues [0]!). What I am concerned about is security bugs, memory errors, buffer overflows, etc., all of the things Rust attempts to solve [1].

I think for weapons and flight control, and ATC systems, we already have Ada, which is designed for "can't fail" systems.

[0] https://www.blobstreaming.org/former-microsoft-developer-say...

[1] https://www.zdnet.com/article/microsoft-70-percent-of-all-se...
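
As a tiny illustration of the buffer-overflow point (not tied to either link):

    fn read(buf: &[u8], i: usize) -> u8 {
        // An out-of-bounds index is caught at runtime: this panics instead
        // of silently reading adjacent memory, which is exactly the class
        // of bug (buffer overflow) referred to above.
        buf[i]
    }

    fn main() {
        let buf = [1u8, 2, 3, 4];
        println!("{}", read(&buf, 10)); // panics: index out of bounds
    }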


I agree, operating systems should have fewer out-of-bounds memory issues. Is that Rust's place? Replacing the parts of OS code where it's OK to sacrifice some performance?

Ada has a foothold for sure, but there's a surprising amount of C++ in military planes and weapon systems.

I didn't read [0], but the parts of Windows that perform poorly are not the fault of the language.



Well aware, and it affects none of my points.


You asked:

> What systems are we talking about that benefits from Rust? Advanced weapon systems that should absolutely not fail? Controllers for air planes? Traffic controllers? Radar? Power grid?

You seem to be listing niche, specialised systems where a failure would be critical. But a simpler answer is missing: mainstream operating systems. As I said, both Windows and Linux have been investing in Rust, mainly as a way to increase memory safety.


And only time will tell us how that plays out.



