Rust std fs slower than Python? No, it's hardware (xuanwo.io)
687 points by Pop_- 11 months ago | 240 comments



There are two dedicated CPU feature flags (ERMS and FSRM) to indicate that REP STOS/MOV are fast and usable as a short instruction sequence for memset/memcpy. Having to hand-roll optimized routines for each new CPU generation has been an ongoing pain for decades.

And yet here we are again. Shouldn't this be part of some timing testsuite of CPU vendors by now?


I'm completely making stuff up here, but I wonder if this is the effect of some last minute (or even post-release, via ucode update) bug fix, where page aligned fast rep movs had issues or were subject to some attack and got disabled.


Then fast rep movs should have been disabled in cpuid altogether


They might have benchmarked a set of loads they care about and seen that, on average, leaving that bit set was better. But this is all conjecture.


So correct me if I am wrong, but does this mean you need to compile two different executables at build time? Or do you just need to compile it on the specific hardware? Wondering what the fix would be, some sort of runtime check?


The exact nature of the fix is unclear at present.

During dynamic linking, glibc picks a memcpy implementation which seems most appropriate for the current machine. We have about 13 different implementations just for x86-64. We could add another one for current(ish) AMD CPUs, select a different existing implementation for them, or change the default for a configurable cutover point in a parameterized implementation.
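
To make that concrete: glibc's actual mechanism is an IFUNC resolver run by the dynamic linker, but the general idea (pick one of several routines once, at startup, based on detected CPU features, then reuse that choice) can be sketched in plain Rust with a cached function pointer. The two routines and the avx2 check below are placeholders, not glibc's real selection logic.

  use std::sync::OnceLock;

  type CopyFn = fn(&mut [u8], &[u8]);

  fn copy_generic(dst: &mut [u8], src: &[u8]) {
      dst.copy_from_slice(src);
  }

  #[allow(dead_code)]
  fn copy_avx2(dst: &mut [u8], src: &[u8]) {
      // Placeholder body; a real AVX2 path would use 32-byte vector moves.
      dst.copy_from_slice(src);
  }

  // Choose an implementation once, based on runtime CPU feature detection,
  // and hand back the cached choice on every later call.
  fn copy_impl() -> CopyFn {
      static CHOSEN: OnceLock<CopyFn> = OnceLock::new();
      *CHOSEN.get_or_init(|| {
          #[cfg(target_arch = "x86_64")]
          {
              if std::arch::is_x86_feature_detected!("avx2") {
                  return copy_avx2 as CopyFn;
              }
          }
          copy_generic as CopyFn
      })
  }

  fn main() {
      let src = vec![7u8; 1024];
      let mut dst = vec![0u8; 1024];
      copy_impl()(&mut dst, &src);
      assert_eq!(dst, src);
  }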


This code is in the kernel, so dynamic linking and glibc is not really relevant.


It's the same design space. If glibc has ended up with 13 versions for x64, the kernel probably has a similar number. It's an argument by analogy for how much of an annoyance this is.


No, because the kernel has particular code size and ISA restrictions.


Admittedly I had missed that.

The kernel has self-patching mechanisms for doing effectively the same thing (although I don't know if it has ever been applied to memcpy before).


The sibling comments mention the hardware specific dynamic linking in glibc that's used for function calls. But if your compiler inlines memcpy (usually for short, fixed-sized copies) into the binary then yes you'll have to compile it for a specific CPU to get optimal performance. But that's true for all target-dependent optimizations.

More broadly compatible routines will still work on newer CPUs, they just won't yield the best performance.

It still would be nice if such central routines could just be compiled to the REP-prefixed instructions and would deliver (near-)optimal performance so we could stop worrying about that particular part.
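
For reference, the REP-prefixed copy the parent is wishing for really is just one instruction plus register setup. A minimal x86-64 sketch in Rust inline asm (illustration only; it relies on the direction flag being clear, which the ABI guarantees at function entry):

  #[cfg(target_arch = "x86_64")]
  unsafe fn copy_rep_movsb(dst: *mut u8, src: *const u8, len: usize) {
      // REP MOVSB copies RCX bytes from [RSI] to [RDI]; it uses no stack
      // and writes no flags, hence the options below.
      std::arch::asm!(
          "rep movsb",
          inout("rdi") dst => _,
          inout("rsi") src => _,
          inout("rcx") len => _,
          options(nostack, preserves_flags),
      );
  }

  fn main() {
      let src = b"hello FSRM".to_vec();
      let mut dst = vec![0u8; src.len()];
      #[cfg(target_arch = "x86_64")]
      unsafe {
          copy_rep_movsb(dst.as_mut_ptr(), src.as_ptr(), src.len());
          assert_eq!(dst, src);
      }
  }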


Some quick searching suggests that FSRM is used for at least 128 bytes or so (ERMS for ≥~2048 bytes, for reference); in base x86 (i.e. SSE2) that's 8 loads & 8 stores, ~62 bytes of code. At that point calling into a library function isn't too unreasonable (at the very least it could utilize AVX and cut that in half to 4 loads + 4 stores, though at the cost of function call overhead & some (likely correctly-predicted) branches).
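
Roughly what that arithmetic looks like in code: copying 128 bytes with baseline SSE2 as 8 loads and 8 stores of 16 bytes each (an illustration of the size estimate above, not a real memcpy):

  #[cfg(target_arch = "x86_64")]
  fn main() {
      use std::arch::x86_64::{__m128i, _mm_loadu_si128, _mm_storeu_si128};
      let src = [0xAAu8; 128];
      let mut dst = [0u8; 128];
      // 8 x 16-byte loads and 8 x 16-byte stores = 128 bytes.
      unsafe {
          for i in 0..8 {
              let v = _mm_loadu_si128(src.as_ptr().add(i * 16) as *const __m128i);
              _mm_storeu_si128(dst.as_mut_ptr().add(i * 16) as *mut __m128i, v);
          }
      }
      assert_eq!(src, dst);
  }

  #[cfg(not(target_arch = "x86_64"))]
  fn main() {}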


https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/...

Suggests that it should be usable for even shorter copies. And that's really my point. We should have One True memcpy instruction sequence that we use everywhere and stop worrying. And yet...


Unfortunately, we can't change the past, and seemingly in the past it wasn't worth it to have a fast One True memcpy (and perhaps to a decent extent still isn't). I'm still typing this on a Haswell CPU, which doesn't have FSRM (rep movsb of 16 bytes in a loop takes ~10ns = 36 cycles per iteration on average).

But, yeah, it does seem that my quick-search figure of 128 bytes was wrong. (Though gcc & clang for '-march=alderlake' both never generate 'rep movsb' on '-O3'; on `-Os` gcc starts emitting a rep movsb for ≥65B, clang still never does.)


They are, glibc already has an ERMS code path for memcpy.


Did you miss the part about inlined calls? I was saying that if it weren't for these issues we could have them turn into a single instruction without worrying about perf.


In theory the compiler could always statically inline a memcpy to a (possibly nop-padded) rep mov. Then the dynamic linker could dynamically patch the instructions into a call to an out-of-line function if rep mov is known not to be optimal on the actual CPU the code is running on. The reverse (patching a call to memmove with a rep mov) is also possible.


Generally the compiler will not inline a memcpy if it doesn't know the size it is dealing with.


Glibc supports runtime selection of different optimized paths, yes. There was a recent discussion about a security vulnerability in that feature (discussion https://news.ycombinator.com/item?id=37756357), but in essence this is exactly the kind of thing it's useful for.


Since the CPU instructions are the same, instruction patching at startup or install time can be used. Just patch in the correct instructions for the respective hardware.


This is generally a bad idea because it requires code modification, which has security implications. Most implementations will bring in multiple implementations and select the right one at startup (amortizing the indirect call into something like the GOT which already exists).


There are security hazards around writable + executable code. They don't apply to patching before execution (e.g. the install step) since nothing needs to be executed at that point. I don't think the security concerns apply during load time either - what does it matter if the text section is edited before it gets marked read-only&executable? It just means you're running a slightly different program, exactly as if it was edited during install.

In the memcpy case, where the library call is probably in a dynamically linked library anyway, it's particularly trivial to bind to one of N implementations of memcpy at load time. That only patches code if library calls are usually implemented that way.

Patching .text does tend to mess up using the same shared pages across multiple executables though which is a shame, and somewhat argues for install time specialisation.


On certain platforms, it would break code signatures if they are tied to the pages the code is on.


  > On certain platforms, it would break code signatures
macos?


Yeah, or iOS. Or other platforms that adopt a similar model


glibc has the ability to dynamically link a different version of a function based on the CPU.


You'd think the CPU vendor knows their CPU best. If there's a faster "software" implementation, why doesn't REP MOVS at least do the same thing in microcode?


Associated glibc bug (Zen 4 though): https://sourceware.org/bugzilla/show_bug.cgi?id=30994



The bug is also about Zen 3, and even mentions the 5900X (the article author's CPU).


If you read the bug tracker, a comment mentions this affects Zen 3 and Zen 4


I was prepared to read the article and scoff at the author's misuse of std::fs. However, the article is a delightful succession of rabbit holes and mysteries. Well written and very interesting!


This was such a good article! The debugging was smart (writing test programs to peel each layer off), the conclusion was fascinating and unexpected, and the writing was clear and easy to follow.


[flagged]


It's about making misuse difficult.

Rust doesn't actually restrict much. It would take looking at lots of details, but my impression is that it's less restrictive than C.


The Rust compiler refuses to compile code which doesn't adhere to a strict set of rules guaranteeing memory safety. Unless you intentionally use an unsafe block, misuse in this sense is impossible, not just difficult.


Just stating the obvious: breaking memory safety is only one subset of all the possible misuse available in the space of manners of using an API, data structure, etc.


But other types of misuse are not. For example, a naive program might do many reads into a small buffer instead of one read into a large buffer.
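
A hypothetical illustration of that kind of misuse, which the compiler happily accepts (the path is a placeholder): many tiny reads versus one bulk read.

  use std::fs::File;
  use std::io::Read;

  fn main() -> std::io::Result<()> {
      let path = "/tmp/data.bin"; // placeholder input file

      // Naive: one syscall per 4 KiB chunk.
      let mut f = File::open(path)?;
      let mut chunk = [0u8; 4096];
      let mut total = 0usize;
      loop {
          let n = f.read(&mut chunk)?;
          if n == 0 {
              break;
          }
          total += n;
      }

      // Better for bulk reads: a single growing buffer, far fewer syscalls.
      let mut all = Vec::new();
      File::open(path)?.read_to_end(&mut all)?;
      assert_eq!(total, all.len());
      Ok(())
  }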


I'm a bit confused about the premise. This is not comparing pure Python code against some native (C or Rust) code. It's comparing one Python wrapper around native code (Python's file read method) against another Python wrapper around some native code (OpenDAL). OK it's still interesting that there's a difference in performance, but it's very odd to describe it as "slower than Python". Did they expect that the Python standard library is all written in pure Python? On the contrary, I would expect the implementations of functions in Python's standard library to be native and, individually, highly optimised.

I'm not surprised the conclusion had something to do with the way that native code works. Admittedly I was surprised at the specific answer - still a very interesting article despite the confusing start.

Edit: The conclusion also took me a couple of attempts to parse. There's a heading "C is slower than Python with specified offset". To me, as a native English speaker, this reads as "C is slower (than Python) with specified offset" i.e. it sounds like they took the C code, specified the same offset as Python, and then it's still slower than Python. But it's the opposite: once the offset from Python was also specified in the C code, the C code was then faster. Still very interesting once I got what they were saying though.


I'm a bit confused by why you are confused.

It's surprising that something as simple as reading a file is slower in the Rust standard library than in the Python standard library. Even knowing that a Python standard library call like this is written in C, you'd still expect the Rust standard library call to be of a similar speed; so you'd expect either that you're using it wrong, or that the Rust standard library has some weird behavior.

In this case, it turns out that neither were the case; there's just a weird hardware performance cliff based on the exact alignment of an allocation on particular hardware.

So, yeah, I'd expect a filesystem read to be pretty well optimized in Python, but I'd expect the same in Rust, so it's surprising that the latter was so much slower, and especially surprising that it turned out to be hardware and allocator dependent.
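
For anyone who wants to poke at the cliff themselves, here is a minimal sketch (not the article's exact benchmark): read the same file into a page-aligned destination and into one shifted by a small offset, and compare the timings. The path and sizes are placeholders, and the gap is only expected on the affected CPUs.

  use std::fs::File;
  use std::io::Read;
  use std::time::{Duration, Instant};

  const SIZE: usize = 64 * 1024 * 1024; // assumes the test file is at least this big

  fn read_at_offset(path: &str, offset: usize) -> std::io::Result<Duration> {
      // Over-allocate so the destination can be placed at a chosen offset
      // relative to a page boundary.
      let mut backing = vec![0u8; SIZE + 2 * 4096];
      let align_pad = (4096 - backing.as_ptr() as usize % 4096) % 4096;
      let start = align_pad + offset;
      let buf = &mut backing[start..start + SIZE];

      let mut f = File::open(path)?;
      let t = Instant::now();
      f.read_exact(buf)?;
      Ok(t.elapsed())
  }

  fn main() -> std::io::Result<()> {
      let path = "/tmp/testfile"; // placeholder: any file of at least 64 MiB
      println!("page-aligned dest: {:?}", read_at_offset(path, 0)?);
      println!("dest at +0x20:     {:?}", read_at_offset(path, 0x20)?);
      Ok(())
  }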


It's just the spin of it that threw me off. You're right: "why is a C implementation so much faster than a widely used Rust implementation" is a valid and interesting question. But phrasing it as "why is a Python function faster than a Rust function", when it's clearly not the comparison at all, distracts from the real question.


I don't understand why Python gets shit for being a slow language when it's slow but no credit for being fast when it's fast just because "it's not really Python".

If I write Python and my code is fast, to me that sounds like Python is fast, I couldn't care less whether it's because the implementation is in another language or for some other reason.


Because for any nontrivial case you would expect Python + compiled library and the associated marshaling of data to be slower than that library in its native implementation without any interop/marshaling required.

When you see an interpreted language faster than a compiled one, it's worth looking at why, because most of the time it's because there's some hidden issue causing the other to be slow (which could just be a different and much worse implementation).

Put another way, you can do a lot to make a Honda Civic very fast, but when you hear one goes up against a Ferrari and wins your first thoughts should be about what the test was, how the Civic was modified, and if the Ferrari had problems or the test wasn't to its strengths at all. If you just think "yeah, I love Civics, that's awesome" then you're not thinking critically enough about it.


In this case, Python's code (opening and loading the content of a file) operates almost fully within its C runtime.

The C components initiate the system call and manage the file pointer, which loads the data from the disk into a pyobj string.

Therefore, it isn't so much Python itself that is being tested, but rather Python's underlying C runtime.


Yep, and the next logical question when both implementations are for the most part bare metal (compiled and low-level), is why is there a large difference? Is it a matter of implementation/algorithm, inefficiency, or a bug somewhere? In this case, that search turned up a hardware issue that should be addressed, which is why it's so useful to examine these things.


If you're staying within Python and its C-extensions, there is no marshalling, you're dealing with raw PyObjects that are exposed to the interpreter.


> Because for any nontrivial case you would expect Python + compiled library and the associated marshaling of data to be slower than that library in its native implementation without any interop/marshaling required.

> When you see an interpreted language faster than a compiled one, it's worth looking at why, because most of the time it's because there's some hidden issue causing the other to be slow (which could just be a different and much worse implementation).

On the contrary, the compiled languages tend to only be faster in trivial benchmarks. In real-world systems the Python-based systems tend to be faster because they haven't had to spend so long twiddling which integers they're using and debugging crashes and memory leaks, and got to spend more time on the problem.


I don't doubt that can happen, but I'm also highly doubtful that it's the norm for large, established, mature projects with lots of attention, such as popular libraries and the standard library of popular languages. As time spent on the project increases, I suspect that any gain an interpreted language has over an (efficient) compiled one not only gets smaller, but eventually reverses in most cases.

So, like in most things, the details can sometimes matter quite a bit.


> I don't doubt that can happen, but I'm also highly doubtful that it's the norm for large, established, mature projects with lots of attention, such as popular libraries and the standard library of popular languages.

Code that has lots of attention is different, certainly, but it's also the exception rather than the rule; the last figure I saw was that 90% of code is internal business applications that are never even made publicly available in any form, much less subject to outside code review or contributions.

> As time spent on the project increases, I suspect that any gain an interpreted language has over an (efficient) compiled one not only gets smaller, but eventually reverses in most cases.

In terms of the limit of an efficient implementation (which certainly something like Python is nowhere near), I've seen it argued both ways; with something like K the argument is that a tiny interpreter that sits in L1 and takes its instructions in a very compact form ends up saving you more memory bandwidth (compared to what you'd have to compile those tiny interpreter instructions into if you wanted them to execute "directly") than it costs.


> a tiny interpreter that sits in L1 and takes its instructions in a very compact form ends up saving you more memory bandwidth

There's a paper on this you might like. https://www.researchgate.net/publication/2749121_When_are_By...

I think there's something to the idea of keeping the program in the instruction cache by deliberately executing parts of it via interpreted bytecode. There should be an optimum around zero instruction cache misses, either from keeping everything resident, or from deliberately paging instructions in and out as control flow in the program changes which parts are live.

There are complicated tradeoffs between code specialisation and size. Translating some back and forth between machine code and bytecode adds another dimension to that.

I fear it's either the domain of extremely specialised handwritten code - LuaJIT's interpreter is the canonical example - or of the sufficiently smart compiler. In this case a very smart compiler.


> On the contrary, the compiled languages tend to only be faster in trivial benchmarks. In real-world systems the Python-based systems tend to be faster because they haven't had to spend so long twiddling which integers they're using and debugging crashes and memory leaks, and got to spend more time on the problem.

This is an interesting premise.

Python in particular gets an absolute kicking for being slow. Hence all the libraries written in C or C++ then wrapped in a python interface. Also why "python was faster than rust at anything" is headline worthy.

I note your claim is that python systems in general tend to be faster (outside of trivial benchmarks, whatever the scope of that is). Can you cite any single example where this is the case?


> Can you cite any single example where this is the case?

Plenty of line-of-business systems I've seen, but systems big enough to matter tend not to be public. Bitbucket's cloud and on-prem versions are the only case I can think of where you can directly compare something substantial between an implementation known to be written in Python and one known to be written in C/C++ (and even then I'm not 100% sure that's what they use).


I wonder if it's because we're sometimes talking at cross purposes.

For me, coding is almost exclusively using Python libraries like numpy to call out to other languages like C or Fortran. To me it feels silly to say I'm not coding in Python.

On the other hand, if you're writing those libraries, coding to you is mostly writing Fortran and C optimizations. It probably feels silly to say you're coding in Python just because that's where your code is called from.


There is a version of BASIC, a QuickBASIC clone called QB64, that is lightning fast because it transpiles to C++. By your reasoning, a programmer should think that BASIC is fast because he only does BASIC and does not care about the environment details?

It's actually the opposite: a Python programmer should know how to offload most of the work out of Python into C, or use the libraries that do so. He should not be oblivious to the fact that any decent Python performance is due to shrinking the ratio of actual Python instructions vs native instructions.


I think maybe it's just semantics as long as everyone agrees where the speedup is happening (at the low level language calls).

I noticed that you're pretty hard in the "BASIC isn't fast, the thing it transpiles to is fast" camp, but still accidentally said "there is a version of BASIC [...] that is lightning fast", which I'm not sure you actually believe? Highlights just how tricky it is to talk about where speed lives.


I agree with that.

There is a clear distinction between the original language design (an interpreter) and a project aiming to recreate a sub-standard of that language and support its legacy codebase via a transpiler.


But you will care if that "python" breaks - you get to drop down to C/C++ and debugging native code. Likewise for adding features or understanding the implementation. Not to mention having to deal with native build tooling and platform specific stuff.

It's completely fair to say that's not python because it isn't - any language out there can FFI to C and it has the same problems mentioned above.


Because when people talk about Python performance they're talking about the performance of Python code itself, not C/Rust code that it's wrapping.

Pretty much any language can wrap C/Rust code.

Why does it matter?

1. Having to split your code across 2 languages via FFI is a huge pain.

2. You are still writing some Python. There's plenty of code that is pure Python. That code is slow.


Of course in this case there's no FFI involved - the open function is built-in. It's as pure-Python as it can get.


Not sure I agree there, but anyway in this case the performance had nothing to do with Python being a slow or fast language.


How is it pure Python if it delegates all of the actual work to the Kernel?


All I/O delegates to the kernel, eventually.

It's pure Python in that there's no cffi, no ctypes, no Cython, no C extensions of any kind.


It's pretty hard to draw this line in Python because all built-in types and functions are effectively C extensions, just compiled directly into the interpreter.

Conversely, you can have pure C code just using PyObjects (this is effectively what Cython does), with the Python bytecode interpreter completely out of the picture. But the perf improvement is nowhere near what people naively expect from compiled code, usually.


Yes, which is why I would argue that IO is a particularly bad benchmark here, since everything is just a thin layer on top of the actual syscall, and those layers don't do any real work worth comparing.

The only thing that makes sense to compare when talking about Python's performance is how many instructions it needs to compute something, versus the instructions needed to compute the same thing in C. Those are probably a few orders of magnitude apart.


Usually, yes, but when it's a bug in the hardware, it's not really that Python is fast, more like that CPython developers were lucky enough to not have the bug.


How do you know that it's luck?


Because the offset is entirely due to space for the PyObject header.


The PyObject header is a target for optimisation. Performance regressions are likely to be noticed, and if a different header layout is faster, then it's entirely possible that it will be used for purely empirical reasons. Trying different options and picking the best performing one is not luck, even if you can't explain why it's the best performing.


I suspect any size other than 0 would lead to this.

But the Zen3/4 were developed far, far after the PyObject header...


You can expect the Python developers to look very closely at any benchmark that significantly benefits from adding random padding to the object header. Performance isn’t just trying a bunch of random things and picking whatever works the best, it’s critical to understand why so you know that the improvement is not a fluke. Especially since it is very easy to introduce bias and significantly perturb the results if you don’t understand what’s going on.


We're not talking about random changes. We're talking about paying attention to the measured performance of changes made for other reasons.

Just like in this article. The author measured, wondered, investigated, experimented, and finally, after a lot of hard work, made the C/Rust programs faster. You wouldn't call that luck, would you? If there had been a similar performance regression in CPython, then a benchmark could have picked up on it, and the CPython developers would then have done the same.


You can look at the history of PyObject yourself: https://github.com/python/cpython/commits/main/Include/objec.... None of these changes were done because of weird CPU errata that meant that making the header bigger was a performance win. That isn't to say that the developers wouldn't be interested in such effects, or be able to detect them, but the fact that the object header happens to be large enough to avoid the performance bug isn't because of careful testing but because that's what they ended up with for other reasons, long before Zen 3 was ever released. If it so happened that Python was affected because the offset needed to avoid the penalty was 0x50 or something, then I am sure they would take it up with AMD rather than being content to increase the size of their header for no reason.


What you don't see in the logs are the experiments and branches that weren't pursued further because they didn't perform well enough.

Also: If you're going to prove that changes informed by performance measurements are absent from the commit logs, then you'll need to look in the logs for all the relevant places, which means also looking at I/O and bytes and allocator code.


Given that the performance is only affected by the size of that object header, the file I linked is all you'd need to see changes in. Look, the Python project is not picking their object sizes because it performs well on a quirk of Zen 3. End of story. I did performance work professionally in the past and now recreationally, and this specific instance is 100% luck. This is not because I think the runtime people aren't smart or anything, but this would be an insane thing to do on purpose.


Because the offset here is a result of Python's reference counting, which dates ~20 years before Zen 3.


I think the confusion comes from people not having a good understanding of what an interpreted programming language does, and what actual portion of time is spent in high versus low level code. I've always assumed that most of my programs amount to a bit of glue thrown in between system calls.

Also, when we talk about "faster" and "slower," it's not clear the order of magnitude.

Maybe an analysis of actual code execution would shed more light than a simplistic explanation that the Python interpreter is written in C. I don't think the BASIC interpreter in my first computer was written in BASIC.


Agreed. The speed of a language is inversely proportional to the number of CPU instructions emitted to do something meaningful, e.g. solve a problem. Not whether it can target system calls without overhead and move memory around freely. That's a given.


>I don't understand why Python gets shit for being a slow language when it's slow but no credit for being fast when it's fast just because "it's not really Python".

What's there to understand? When it's fast it's not really Python, it's C. C is fast. Python can call out to C. You don't have to care that the implementation is in another language, but it is.


I constantly get low key shade for choosing to build everything in Python. It’s really interesting to me. People can’t break out of thinking, “oh, you wrote a script for that?”. Actually, no, it’s software, not a script.

99% of my use cases are easily, maintainably solved with good, modern Python. The Python execution is almost never the bottleneck in my workflows. It’s disk or network I/O.

I’m not against building better languages and ecosystems, and compiled languages are clearly appropriate/required in many workflows, but the language parochialism gets old. I just want to build shit that works and get stuff done.


Yeah, it's weird.


> individually, highly optimised.

Now why would you expect that?

What happened to the OP is pure chance. CPython's C code doesn't even care about const correctness. It's flush with dynamic memory allocations, a bunch of helper / convenience calls... Even stuff like arithmetic does dynamic memory allocation...

Normally, you don't expect CPython to perform well, not if you have any experience working with it. Whenever you want to improve performance you want to sidestep all the functionality available there.

Also, while Python doesn't have a standard library, since it doesn't have a standard... the library that's distributed with it is mostly written in Python. Of course, some of it comes written in C, but there's also a sizable fraction of that C code that's essentially Python code translated mechanically into C (a good example of this is Python's binary search implementation which was originally written in Python, and later translated into C using Python's C API).

What one would expect is that functionality that is simple to map to operating system functionality has a relatively thin wrapper. I.e. reading files wouldn't require much in terms of binding code because, essentially, it goes straight into the system interface.


Have you ever attempted to write a scripting language that performs better?

I have, several, and it's far from trivial.

The basics are seriously optimized for typical use cases, take a look at the source code for the dict type.


> The basics are seriously optimized for typical use cases, take a look at the source code for the dict type

Python is well micro-optimized, but the broader architecture of the language and especially the CPython implementation did not put much concern into performance, even for a dynamically typed scripting language. For example, in CPython values of built-in types are still allocated as regular objects and passed by reference; this is atrocious for performance and no amount of micro optimization will suffice to completely bridge the performance gap for tasks which stress this aspect of CPython. By contrast, primitive types in Lua (including PUC Lua, the reference, non-JIT implementation) and JavaScript are passed around internally as scalar values, and the languages were designed with this in mind.

Perl is similar to Python in this regard--the language constructs and type systems weren't designed for high primitive operation throughput. Rather, performance considerations were focused on higher level, functional tasks. For example, Perl string objects were designed to support fast concatenation and copy-on-write references, optimizations which pay huge dividends for the tasks for which Perl became popular. Perl can often seem ridiculously fast for naive string munging compared to even compiled languages, yet few people care to defend Perl as a performant language per se.


Raymond Hettinger's talk "Modern Python Dictionaries: A confluence of a dozen great ideas" is an awesome "history of how we got these optimizations" and a walk through why they are so effective - https://www.youtube.com/watch?v=npw4s1QTmPg


Yeah, I had a nice chat with Raymond Hettinger at a Pycon in Birmingham/UK back in the days (had no idea who he was at the time). He seemed like a dedicated and intelligent person, I'm sure we can thank him for some of that.


> Have you ever attempted to write a scripting language that performs better?

No, because "scripting language" is not a thing.

But, if we are talking about implementing languages, then I worked with many language implementations. The most comparable one that I know fairly well, inside-and-out would be the AVM, i.e. the ActionScript Virtual Machine. It's not well-written either unfortunately.

I've looked at implementations of Lua, Emacs Lisp and Erlang at different times and to various degree. I'm also somewhat familiar with SBCL and ECL, the implementation side. There are different things the authors looked for in these implementations. For example, SBCL emphasizes performance, where ECL emphasizes simplicity and interop with C.

If I had to grade the language implementations I've seen, Erlang would absolutely take the cake. It's a very thoughtful and disciplined program where the authors went to great lengths to design and implement it. CPython is on the lower end of such programs. It's anarchic and very unevenly implemented; you run into comments testifying that the author doesn't know what they are doing, what their predecessor did, nor what to do next. Sometimes the code is written from that perspective as well: if the author somehow manages to drive themselves into a corner where they no longer know what the reference count is, they'll just hammer it and hope all references are dead (well, maybe).

It's the code style that, unfortunately, I associate with proprietary projects where deadlines and cost dictate the quality, where concurrency problems are solved with sleeps, and if that doesn't work, then the sleep delay is doubled. It's not because I specifically hate code being proprietary, but because I meet that kind of code in my day job more than I meet it in hobby open-source projects.

> take a look at the source code for the dict type.

I wrote a Protobuf parser in C with the intention of exposing its bindings to Python. Dictionaries were a natural choice for the hash-map Protobuf elements. I benchmarked my implementation against C++ (Google's) implementation only to discover that std::map wins against Python's dictionary by a landslide.

Maybe Python's dict isn't as bad as most of the rest of the interpreter, but being the best of the worst still doesn't make it good.


Except it is, because everyone sort of knows what it means: an interpreted language that prioritizes convenience over performance; Perl/Python/Ruby/Lua/PHP/etc.

SBCL is definitely a different beast.

I would expect Emacs Lisp & Lua to be more similar.

Erlang had plenty more funding and stricter requirements.

C++'s std::map has most likely gotten even more attention than Python's dict, but I'm not sure from your comment if you're including Python's VM dispatch in that comparison.

What are you trying to prove here?


(std::map is famously rubbish, to the extent that a common code review fix is to replace it with std::unordered_map. Map is a tree, unordered map is a linked-list-collision hashtable. Both are something of a performance embarrassment for C++. So std::map outperforming a given hashtable is a strongly negative judgement)


If I may hazard a guess (I don't know why the original code used it), Python dictionaries are also ordered (and there's no option in Python's library to have them not ordered). Maybe they wanted to match Python's behavior.


Python dicts are ordered in a slightly weird way though, and only since 3.6 (before that it leaked the hashtable implementation scheme). Std::map orders things by some boolean comparator (and duly goes wrong if you give it a partial order comparison), binary tree style. Python currently orders by insertion order, very like storing a list of the keys as well as a hash table, so it can iterate over it in insertion order. That's definitely slower than not maintaining that order but it seems popular with python developers.


   ordered in a slightly weird way
Do you mean "insertion ordered"? That means the order of iteration is guaranteed to match insertion order. C++'s std::map is ordered by key (less than comparison) to create a binary search tree. So iteration order will always be ordered by key value. C++'s std::unordered_map has no ordering guarantees (that I know). I don't think the standard C++ template library has the equivalent of a modern Python dict, nor Java LinkedHashMap. Does anyone know if that is incorrect?
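
In Rust terms, as a loose analogy only: BTreeMap iterates in key order like std::map, HashMap in an unspecified order like std::unordered_map, and neither matches Python's insertion-ordered dict (the third-party indexmap crate is the usual stand-in for that).

  use std::collections::{BTreeMap, HashMap};

  fn main() {
      let mut by_key = BTreeMap::new();   // iterates in key order
      let mut unordered = HashMap::new(); // iterates in unspecified order
      for (k, v) in [("b", 2), ("a", 1), ("c", 3)] {
          by_key.insert(k, v);
          unordered.insert(k, v);
      }
      println!("{:?}", by_key.keys().collect::<Vec<_>>());    // ["a", "b", "c"]
      println!("{:?}", unordered.keys().collect::<Vec<_>>()); // some order
      // A Python dict (3.7+) would instead preserve insertion order: b, a, c.
  }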


It's ordered and predictable, which is far from rubbish.

In most cases std::unordered_map will be faster, but hashtables have nasty edge cases and are usually more expensive to create.

I can pretty much guarantee it's been optimized to hell and back.


Unordered map sure hasn't been. There are algorithmic performance guarantees in the standard that force the linked list of buckets implementation despite that being slower than alternatives. Maybe the libc++ map<> is a very perfect rbtree, but I doubt that's the state of the art in ordered containers either.


> interpreted language

There is no such thing as an interpreted language. A language implementation can be called an interpreter to emphasize its reliance on a rich existing library, but there's no real line here that can divide languages into two non-ambiguous categories. So... is C an "interpreted language"? -- well, in a certain light it is, since it calls into libc for a lot of functionality, therefore libc can be thought of as its interpreter. Similarly, machine code is often said to be interpreted by the CPU, when it translates it to microcode and so on.

> prioritizes convenience over performance

This has nothing to do with scripting. When the word "scripting" is used, it's about the ability to automate another program, and record this automation as a "script". Again, this is not an absolute metric that can divide all languages or their implementations into scripting and not-scripting. When the word "scripting" is used properly it is used to emphasize the fact that a particular program is amenable to automation by means of writing other programs, possibly in another language.

Here are some fun examples to consider. For example, MSBuild, a program written in C# AFAIK, can be scripted in C# to compile C# programs! qBittorrent, a program written in Python can be scripted using any language that has Selenium bindings because qBittorrent uses Qt for the GUI stuff and Qt can be automated using Selenium. Adobe Photoshop (used to be, not sure about now) can be scripted in JavaScript.

To give you some examples which make your claim ridiculously wrong: Forth used to be used in the Solaris bootloader to automate the kernel loading process, i.e. it was used as a scripting language for that purpose, yet most mature Forth implementations aim for the same performance bracket as C. You'd also be hard-pressed to find a lot of people who think that Forth is a very convenient language... (I do believe it's fine, but there may be another five or so people who believe it too).

---

Basically, your ideas about programming language taxonomies are all wrong and broken... sorry. Not only did you misapply the labels, you don't even have any good labels to begin with.

Anyways,

> What are you trying to prove here?

Where's here? Do you mean the original comment or the one that mentions std::map?

If the former: I'm trying to prove that CPython is a dumpster fire of a program. That is based on many years of working with it and quite extensive knowledge of its internals, of which I already provided examples.

If it is the latter: the parent claimed something about how optimized Python's dictionary is; I showed that it has a very long way to go to be in the category of good performers. I.e. optimizing something, no matter how much, doesn't mean that it works well.

I don't know what you mean by Python's VM dispatch in this context. I already explained that I used the Python C API for dictionaries, namely this: https://docs.python.org/3/c-api/dict.html . It's very easy to find equivalent functionality in std::map.


For starters, since everything in Python is a pass-by-ref object, dicts store pointers to values, which then have to be allocated on the heap and refcounted, whereas std::map can store values directly. But this is a consequence of the very-high-level object model used by CPython, not of its dict implementation, which has to adapt to that.
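
A loose Rust analogy for that indirection cost (purely illustrative, the types and counts are made up): values stored inline in the table versus each value behind its own heap allocation, which is roughly the extra pointer chase a CPython dict of boxed objects pays.

  use std::collections::HashMap;

  fn main() {
      // Values stored directly in the table, like std::map<int, int>.
      let mut inline: HashMap<u64, u64> = HashMap::new();
      // Each value behind its own heap allocation, closer to what a CPython
      // dict of int objects has to do (minus the refcounting).
      let mut boxed: HashMap<u64, Box<u64>> = HashMap::new();

      for i in 0..1_000u64 {
          inline.insert(i, i * 2);
          boxed.insert(i, Box::new(i * 2));
      }
      assert_eq!(inline[&10], *boxed[&10]);
  }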


You are confusing how something works right now with how it can work in principle. In this you very much resemble the CPython developers: they never attempt optimizations that go beyond what the Python C API can offer. This is very limiting (and this is why all sorts of Python JIT compilers can in many circumstances beat CPython by a lot).

Evidence of how absurd your claim is sits right in front of you: Google's implementation of Protobuf uses std::map for dictionaries, and these dictionaries are exposed to Python. But, following your argument, this... shouldn't be possible?

To better understand the difference: Python dictionary stores references to Python objects, but it doesn't have to. It could, for example, take Python strings and use C character arrays for storage, and then upon querying the dictionary convert them back to Python str objects. Similarly with integers for example etc.

Why is this not done -- I don't know. Knowing how many other things are done in Python, I'd suspect that this isn't done because nobody bothered to do it. It also feels too hard and too unrewarding to patch a single class of objects, even one as popular as dictionaries. If you go for this kind of optimization, you want it applied systematically and uniformly to all the code... and that's, I guess, how Cython came to be, for example.


Cython is pretty much Python without bytecode interpreter, translated to C instead, but retaining the object model. That's why it's so slow.

And the reason why the object model is the way it is, is because it's an entrenched part of the Python ABI. Sure, if you break that, you can do things a lot faster - this isn't news, people have been doing this with projects like Jython and IronPython that can work a lot faster. But the existing ecosystem of packages is so centered on CPython that this approach has proven to be self-defeating - you end up with a Python implementation that very few people actually use.

So, no, it's not because people are "very confused" or "nobody bothered to do it". It's because compatibility matters.


Well, again, you are confused... and, most likely the CPython developers didn't bother.

No. You don't need the Python object model when implementing Python dictionary. You have evidence right in front of you: std::map bindings are successfully used in its place.

Why even keep arguing about this?

In fact, you can implement your own dictionary, and if you expose all the same mapping protocol, it will work the same as the built-in one. Do you have to use Python objects for this? -- absolutely no. You can convert at the interface boundary. Experience shows that this works noticeably better than using Python objects all the way. Why did the original CPython developers not do it? -- I don't know, can only guess. I already wrote what my guess is. And, in all sincerity, CPython has a lot more and a lot worse problems. Compared to the rest of the codebase, the dictionary object is fine. So, if anyone would seriously consider improving CPython's performance they wouldn't touch dictionaries, at least not at first.


You don't need a Python object model when implementing a highly specialized dictionary that can only store very specific data types. But Python dict is a generic collection type that is designed to store any Python object (or rather, reference to such, since Python has reference semantics for anything).

And this part:

> if anyone would seriously consider improving CPython's performance they wouldn't touch dictionaries, at least not at first.

is just straight up nonsense, given how many times over Python's history dicts have been substantially rewritten. As it happens, I work on Python dev tooling, and the CPython team changing internal data structures for perf reasons has been a recurring headache for me, so I know full well what I'm talking about here.


> Have you ever attempted to write a scripting language that performs better?

Way to miss the mark. The point is precisely that Python is slow, and one of the causes is that it is a scripting language. Stomping your foot and essentially saying "you couldn't do any better" helps no one and is counterproductive.


Thanks for the comments. I have fixed the headers :)


The premise is that any time you say "Python [...] faster than Rust [...]" you get page views even if it's not true. People have noticed after the last few dozen times something like this was posted.


This is the answer. The thread is chasing various smart-people opinions about languages, interpreters, and system calls. We got tricked by a clickbait title and are using the opportunity to rehash our favorite topics and biases.

On the other hand… so what? It’s kind of fun.


The article itself is a great read and it has fascinating info related to this issue.

However, I am more interested in and concerned about another part: how the issue is reported/recorded and how the communications are handled.

Reporting is done over Discord, which is a proprietary environment that is not indexed or searchable, and will not be archived.

Communications and deliberations are done over Discord and Telegram, which is probably worse than Discord in this context.

This blog post and the GitHub repository are the lingering remains of it all. If Xuanwo had not blogged this, it would be lost to the timeline.

Isn't this fascinating?


Yes, they are proprietary, which is not great. But I don't buy the allegation that they are not indexed or searchable. There are very few IMs that provide a built-in, publicly accessible log that is indexed or searchable by default. Does every IRC server come with a public log? What about Matrix groups? How do discussions there not get lost in the timeline?

You can provide a public log of them not because they are not proprietary, but because they have an API to allow logging. Telegram also has such an API, and FWIW our discussion group does have a searchable log that you can access here: https://luoxu-web.vercel.app/#g=1264662201 It is not indexed publicly more for privacy concerns, again not because the platform is proprietary.


This is not a way to have bug discussions, or record them. Do you really think I could find this information on a search for a similar issue?

Only thing that makes this bug and the process of the debug visible is this blog post.

Another point is I don't think IRC or any instant messaging app is the correct place for these kinds of discussions. Unless important points are logged to some bug reporting tool, or perhaps a mailing list, or to a blog post like this one, they are useless for historical purposes.


So it's not about being proprietary, just about using IM?

I don't fully agree. IMs are a great place to discuss issues in a semi-synchronous way. Telecon or face-to-face meetings are sometimes better in velocity, but IMs have some edge on bringing random people happen to be online into the discussion. And it can also bring a different audience into the issue than bug reporting tools or mailing list.

When this issue was brought into the group, it just took several hours for curious people there to collaboratively find the conclusion. This is something unlikely to happen in any other form of discussions based on my experience.

But I agree that group chat is not a great way to record it, and that's why the findings are recorded on the GitHub issue, and group members also encouraged the author to write this up. Then it got posted on HN and on /r/rust by two different group members as well. (The author's initial posting on HN was mysteriously taken down, so the op here helped posting it again.)


Oh, I missed the GitHub issue. My bad. In fact I searched GitHub and probably my GitHub search-fu was not good. Sorry for disregarding that fact.

If these are already in place I don't have any reservations against using IMs or Discord. In fact this is a particularly great example of how it can be done.

- Bug report is in place
- Blog/article for historic events, documentation, and pointers to related info
- Fast communication for debug sessions

I hope you understand my original message was about pointing out the situation where the only history left is whatever lingers in these chat tools.

There are lots of communities right now just using discord or IM for support, bug reporting or development purposes.


> Reporting is done over discord, which is a proprietary environment which is not indexed, or searchable. Will not be archived.

That's why I don't accept the response "but there's Discord now" whenever I moan about USENET's demise. Back in the day, every post was nicely searchable via DejaNews (later Google).

We need to get back to open standards for important communications (e.g. all open source projects that are important to the Internet/WWW stack and core programming and libraries).


Most interesting article I've read this week. Excellent write-up.


So the obvious thing to do... Send a patch to change the "copy_user_generic" kernel method to use a different memory copying implementation when the CPU is detected to be a bad one and the memory alignment is one that triggers the slowness bug...


Not obvious. If it can be corrected with microcode, it seems better to have people use updated microcode rather than litter the kernel with fixes for what are effectively patchable software problems.

The accepted fix would not be trivial for anyone not already experienced with the kernel. But more importantly, it isn't obvious what the right way to enable the workaround is. The best way is probably to measure at boot time; otherwise, how do you know which models and steppings are affected?


I don't think AMD does microcode updates for performance issues do they? I thought it was strictly correctness or security issues.

If the vendor won't patch it, then a workaround is the next best thing. There shouldn't be many - that's why all copying code is in just a handful of functions.


A significant performance degradation due to normal use of the instruction (FSRM), not otherwise documented, is a correctness problem. Especially considering that the workaround is to avoid using the CPU feature in many cases. People pay for this CPU feature; now they need kernel tooling to warn them when it falls back to some slower workaround because of an alignment issue way up the stack.


If AMD has a performance issue and doesn't fix it, AMD should pay the negative publicity costs rather than kernel and library authors adding exceptions. IMHO.


It's not a trivial fix. Besides the fix likely being in microcode (where AMD figures out why aliasing is broken for addresses that are close to page-aligned), even a software mitigation would be complex because the kernel cannot actually use the vector instructions that are typically used for the fallback path when ERMS is not available.


jemalloc was Rust's default allocator till 2018.

https://internals.rust-lang.org/t/jemalloc-was-just-removed-...


> Rust developers might consider switching to jemallocator for improved performance

I am curious if this is something that everyone can do to get free performance or if there are caveats. Can C codebases benefit from this too? Is this performance that is simply left on table currently?
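
On the Rust side it is a very small change to try: swap the global allocator and benchmark your own workload. A minimal sketch (the crate name is the one the article mentions; the version is an assumption):

  // Cargo.toml (version assumed):
  // [dependencies]
  // jemallocator = "0.5"
  use jemallocator::Jemalloc;

  #[global_allocator]
  static GLOBAL: Jemalloc = Jemalloc;

  fn main() {
      // Every Box/Vec/String allocation now goes through jemalloc.
      let v: Vec<u64> = (0..1_000_000).collect();
      println!("{}", v.len());
  }

C codebases can get a similar effect by linking against jemalloc or preloading it with LD_PRELOAD, since it ships drop-in malloc/free symbols; whether it is actually faster depends on the workload, so measure.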


Be aware `jemalloc` will make you suffer the observability issues of `MADV_FREE`. `htop` will no longer show the truth about how much memory is in use.

* https://github.com/jemalloc/jemalloc/issues/387#issuecomment...

* https://gitlab.haskell.org/ghc/ghc/-/issues/17411

Apparently now `jemalloc` will call `MADV_DONTNEED` 10 seconds after `MADV_FREE`: https://github.com/JuliaLang/julia/issues/51086#issuecomment...

So while this "fixes" the issue, it'll introduce a confusing time delay between you freeing the memory and you observing that in `htop`.

But according to https://jemalloc.net/jemalloc.3.html you can set `opt.muzzy_decay_ms = 0` to remove the delay.

Still, the musl author has some reservations against making `jemalloc` the default:

https://www.openwall.com/lists/musl/2018/04/23/2

> It's got serious bloat problems, problems with undermining ASLR, and is optimized pretty much only for being as fast as possible without caring how much memory you use.

With the above-mentioned tunables, this should be mitigated to some extent, but the general "theme" (focusing on e.g. performance vs memory usage) will likely still mean "it's a tradeoff" or "it's no tradeoff, but only if you set tunables to what you need".


Note that glibc has a similar problem in multithreaded contexts. It strands unused memory in thread-local pools, which grows your memory usage over time like a memory leak. We got lower memory usage that didn't grow over time by switching to jemalloc.

Example of this: https://github.com/prestodb/presto/issues/8993


The musl remark is funny, because jemalloc's use of pretty fine-grained arenas sometimes leads to better memory utilisation through reduced fragmentation. For instance Aerospike couldn't fit in available memory under (admittedly old) glibc, and jemalloc fixed the issue: http://highscalability.com/blog/2015/3/17/in-memory-computin...

And this is not a one-off: https://hackernoon.com/reducing-rails-memory-use-on-amazon-l... https://engineering.linkedin.com/blog/2021/taming-memory-fra...

jemalloc also has extensive observability / debugging capabilities, which can provide a useful global view of the system, it's been used to debug memleaks in JNI-bridge code: https://www.evanjones.ca/java-native-leak-bug.html https://technology.blog.gov.uk/2015/12/11/using-jemalloc-to-...


Yes, almost everybody who looks at memory usage in production will eventually discover glibc's memory fragmentation issues. This is how I learned about this topic.

Setting the env var MALLOC_MMAP_THRESHOLD_=65536 usually solves these problems instantaneously.

Most programmers seem not to bother to understand what is going on (which would lead them to the above solution) and instead follow "we switched to jemalloc and it fixes the issue".

(I have no opinion yet on whether jemalloc is better or worse than glibc malloc. Both have tunables, and will create problematic corner cases if the tunables are not set accordingly. The fact that jemalloc has /more/ tunables, and more observability / debugging features, seems like a pro point for those that read the documentation. For users that "just want low memory usage", both libraries' defaults look bad, and the musl attitude seems like the best default, since OOM will cause a crash vs just having the program be some percent slower.)


Aiming to please people who panic about their RSS numbers seems... misguided? It seems like worrying about RAM being "used" as file cache[0].

If you want to gauge whether your system is memory-limited look at the PSI metrics instead.

[0] https://www.linuxatemyram.com/


Those are not the same.

You can see cache usage in htop; it has a different colour.

With MADV_FREE, it looks like the process is still using the memory.

That sucks: If you have some server that's slow, you want to SSH into a server and see how much memory each process takes. That's a basic, and good, observability workflow. Memory leaks exist, and tools should show them easily.

The point of RES is to show resident memory, not something else.

If you change htop to show the correct memory, that'd fix the issue of course.


Well, RES is resident in physical memory. It is just marked so that the kernel can reclaim it when it needs to. But until then it's resident. If you want to track leaks you need a resident-and-in-use metric, which may be more difficult to come by (probably requires scanning smaps?).

It's a case of people using the subtly wrong metrics and then trying to optimize tools chasing that metric rather than improving their metrics. That's what I'm calling misguided.


Not that I would recommend using jemalloc by default but it’s definitely going to be better than musl’s allocator ;)


Thank you! That was very thorough! I will be reading the links. :)



I think it's pretty much free performance that's being left on the table. There's slight cost to binary size. And it may not perform better in absolutely all circumstances (but it will in almost all).

Rust used to use jemalloc by default but switched as people found this surprising as the default.


Switching to a non-default allocator does not always bring a performance boost. It really depends on your workload, which requires profiling and benchmarking. But C/C++/Rust and other lower-level languages should all at least be able to choose from these allocators. One caveat is binary size. A custom allocator does add more bytes to the executable.


I don’t know why people still look to jemalloc. Mimalloc outperforms the standard allocator on nearly every single benchmark. Glibc’s allocator & jemalloc both are long in the tooth & don’t actually perform as well as state of the art allocators. I wish Rust would switch to mimalloc or the latest tcmalloc (not the one in gperftools).


> I wish Rust would switch to mimalloc or the latest tcmalloc (not the one in gperftools).

That's nonsensical. Rust uses the system allocators for reliability, compatibility, binary bloat, maintenance burden, ..., not because they're good (they were not when Rust switched away from jemalloc, and they aren't now).

If you want to use mimalloc in your rust programs, you can just set it as global allocator same as jemalloc, that takes all of three lines: https://github.com/purpleprotocol/mimalloc_rust#usage
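
For reference, those three lines are roughly the following (per that README; the crate version is an assumption):

  // Cargo.toml: mimalloc = "0.1"  (version assumed)
  use mimalloc::MiMalloc;

  #[global_allocator]
  static GLOBAL: MiMalloc = MiMalloc;

  fn main() {
      let s = "x".repeat(1 << 20); // served by mimalloc
      println!("{}", s.len());
  }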

If you want the Rust compiler to link against mimalloc rather than jemalloc, feel free to test it out and open an issue, but maybe take a gander at the previous attempt: https://github.com/rust-lang/rust/pull/103944 which died for the exact same reason that the one before it (https://github.com/rust-lang/rust/pull/92249) did: unacceptable regression of max-rss.


I know it’s easy to change but the arguments for using glibc’s allocator are less clear to me:

1. Reliability - how is an alternate allocator less reliable? Seems like a FUD-based argument. Unless by reliability you mean performance in which case yes - jemalloc isn’t reliably faster than standard allocators, but mimalloc is.

2. Compatibility - again sounds like a FUD argument. How is compatibility reduced by swapping out the allocator? You don’t even have to do it on all systems if you want. Glibc is just unequivocally bad.

3. Binary bloat - This one is maybe an OK argument although I don’t know what size difference we’re talking about for mimalloc. Also, most people aren’t writing hello world applications so the default should probably be for a good allocator. I’d also note that having a dependency of the std runtime on glibc in the first place likely bloats your binary more than the specific allocator selected.

4. Maintenance burden - I don’t really buy this argument. In both cases you’re relying on a 3rd party to maintain the code.


> I know it’s easy to change but the arguments for using glibc’s allocator are less clear to me:

You can find them at the original motivation for removing jemalloc, 7 years ago: https://github.com/rust-lang/rust/issues/36963

Also it's not "glibc's allocator", it's the system allocator. If you're unhappy with glibc's, get that replaced.

> 1. Reliability - how is an alternate allocator less reliable?

Jemalloc had to be disabled on various platforms and architectures, there is no reason to think mimalloc or tcmalloc are any different.

The system allocator, while shit, is always there and functional, the project does not have to curate its availability across platforms.

> 2. Compatibility - again sounds like a FUD argument. How is compatibility reduced by swapping out the allocator?

It makes interactions with anything which does use the system allocator worse, and almost certainly fails to interact correctly with some of the more specialised system facilities (e.g. malloc.conf) or tooling (in rust, jemalloc as shipped did not work with valgrind).

> Also, most people aren’t writing hello world applications

Most people aren't writing applications bound on allocation throughput either

> so the default should probably be for a good allocator.

Probably not, no.

> I’d also note that having a dependency of the std runtime on glibc in the first place likely bloats your binary more than the specific allocator selected.

That makes no sense whatsoever. The libc is the system's and dynamically linked. And changing allocator does not magically unlink it.

> 4. Maintenance burden - I don’t really buy this argument.

It doesn't matter that you don't buy it. Having to ship, resync, debug, and curate (cf (1)) an allocator is a maintenance burden. With a system allocator, all the project does is ensure it calls the system allocators correctly, the rest is out of its purview.


The reason the reliability & compatibility arguments don’t make sense to me is that jemalloc is still in use for rustc (again - not sure why they haven’t switched to mimalloc) which has all the same platform requirements as the standard library. There’s also no reason an alternate allocator can’t be used on Linux specifically because glibc’s allocator is just bad full stop.

> It makes interactions with anything which does use the system allocator worse

That’s a really niche argument. Most people are not doing any of that and malloc.conf is only for people who are tuning the glibc allocator which is a silly thing to do when mimalloc will outperform whatever tuning you do (yes - glibc really is that bad).

> or tooling (in rust, jemalloc as shipped did not work with valgrind)

That’s a fair argument, but it’s not an unsolvable one.

> Most people aren’t writing applications bound on allocation throughput either

You’d be surprised at how big an impact the allocator can make even when you don’t think you’re bound on allocations. There’s also all sorts of other things beyond allocation throughput & glibc sucks at all of them (e.g. freeing memory, behavior in multithreaded programs, fragmentation etc etc).

> The libc is the system’s and dynamically linked. And changing allocator does not magically unlink it

I meant that the dependency on libc at all in the standard library bloats the size of a statically linked executable.


> jemalloc is still in use for rustc (again - not sure why they haven’t switched to mimalloc)

Performance of rustc matters a lot! If the rust compiler runs faster when using mimalloc, please benchmark & submit a patch to the compiler.


I literally linked two attempts to use mimalloc in rustc just a few comments upthread.


Ah - my mistake; I somehow misread your comment. Pity about the RSS regression.

Personally I have plenty of RAM and I'd happily use more in exchange for a faster compile. It's much cheaper to buy more RAM than a faster CPU, but I certainly understand the choice.

With compilers I sometimes wonder if it wouldn't be better to just switch to an arena allocator for the whole compilation job. But it wouldn't surprise me if LLVM allocates way more memory than you'd expect.


Any links to instructions on how to run said benchmarks?


Not to mention that by using the system allocator you get all sorts of things “for free” that the system developers provide for you, wrt observability and standard tooling. This is especially true if the OS and the allocator are shipped by one group rather than being developed independently.


I've never not gotten increased performance by swapping out the allocator.


Rust used to use jemalloc as the default, but went back to using the system malloc back in 2018-ish[0]. Since Rust now has the GlobalAlloc trait (and the #[global_allocator] attribute), apps can use jemalloc as their allocator if they want. Not sure if there's a way for users to override via LD_PRELOAD or something, though.

It turns out jemalloc isn't always best for every workload and use case. While the system allocator is often far from perfect, it at least has been widely tested as a general-purpose allocator.

[0] https://github.com/rust-lang/rust/issues/36963


Performance is not a one-dimensional scale where programs go from “slow” to “fast”, because there are always other factors at play. jemalloc can be the right fit for some applications but for others another choice might be faster, but it also might be that the choice is slower but better matches their goals (less dirty memory, better observability, certain security guarantees, …)


Basically that's why Jason wrote it in the first place, but other allocators have caught up since then to some extent. So jemalloc might make your code either slower or faster; you'll have to test to know. It's pretty reliable at being close to the best choice.

does tend to use more ram tho


jemalloc and mimalloc are very popular in C and C++ software, yes. There are few drawbacks, and it's really easy to benchmark different allocators against each other in your particular use case.


You can override the allocator for any app via LD_PRELOAD


I sent this to the right people.


(…at AMD?)


At AMD.


AMD's string store is not like Intel's. Generally, you don't want to use it until you are past the CPU's L2 size (L3 is a victim cache), making ~2k WAY too small. Once past that point, it's profitable to use string store, and should run at "DRAM speed". But it has a high startup cost, hence 256bit vector loads/stores should be used until that threshold is met.


Isn't the high startup cost what FSRM is intended to solve?

> With the new Zen3 CPUs, Fast Short REP MOV (FSRM) is finally added to AMD’s CPU functions analog to Intel’s X86_FEATURE_FSRM. Intel had already introduced this in 2017 with the Ice Lake Client microarchitecture. But now AMD is obviously using this feature to increase the performance of REP MOVSB for short and very short operations. This improvement applies to Intel for string lengths between 1 and 128 bytes and one can assume that AMD’s implementation will look the same for compatibility reasons.

https://www.igorslab.de/en/cracks-on-the-core-3-yet-the-5-gh...


Fast is relative here. These are microcoded instructions, which are generally terrible for latency: microcoded instructions don't get branch prediction benefits, nor OoO benefits (they lock the FE/scheduler while running). Small memcpy/moves are always latency bound, hence even if the HW supports "fast" rep store, you're better off not using them. L2 is wicked fast, and these copies are linear, so prediction will be good.

Note that for rep store to be better it must overcome the cost of the initial latency and then catch up to the 32byte vector copies, which yes generally have not-as-good-perf vs DRAM speed, but they aren't that bad either. Thus for small copies.... just don't use string store.

All this is not even considering non-temporal loads/stores; many larger copies would see better perf by not trashing the L2 cache, since the destination or source is often not inspected right after. String stores don't have a non-temporal option, so this has to be done with vectors.


I'm not sure that your comment is responsive to the original post.

FSRM is fast on Intel, even with single byte strings. AMD claims to support FSRM with recent CPUs but performs poorly on small strings, so code which Just Works on Intel has a performance regression when running on AMD.

Now here you're saying `REP MOVSB` shouldn't be used on AMD with small strings. In that case, AMD CPUs shouldn't advertise FSRM. As long as they're advertising it, it shouldn't perform worse than the alternative.

https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/2030515

https://sourceware.org/bugzilla/show_bug.cgi?id=30994

I'm not a CPU expert so perhaps I'm misinterpreting you and we're talking past each other. If so, please clarify.


Or you leave it as is, forcing AMD to fix their shit. "Fast string mode" has been strongly hinted as _the_ optimal way over 30 years ago with the Pentium Pro, reinforced over 10 years ago with ERMSB, and again 4 years ago with FSRM. AMD, get with the program.


rep movsb might have been fast at one point but it definitely was not for a few decades in the middle, where vector stores were the fastest way to implement memcpy. Intel decided that they should probably make it fast again and they have slowly made it competitive with the extensions you’ve mentioned. But for processors that don’t support it, using rep movsb is going to be slow and probably not something you’d want to pick unless you have weird constraints (binary size?)


BTW, I've always thought Python uses way too many syscalls when working with files. Simple code like this uses something like 9 syscalls (shown in the article):

    with open('myfile') as f:
        data = f.read()
I'm not much of a C programmer myself, but I at least reported part of the issue to Python: https://bugs.python.org/issue45944

This is the fastest way to read a file on python that I've found, using only 3-4 syscalls (though os.fstat() doesn't work for some special kernel files like those in /proc/ and /dev/):

    import os

    def read_file(path: str, size=-1) -> bytes:
        fd = os.open(path, os.O_RDONLY)
        try:
            if size == -1:
                size = os.fstat(fd).st_size
            return os.read(fd, size)
        finally:
            os.close(fd)


As you say, the reported size is not necessarily correct so it should only be treated as a hint. And if os.read directly translates to a read syscall then you're also not handling short reads.


Ahh ok so to be correct you have to keep reading until you get an empty read?

Maybe I don’t need to query the file size at all?


Querying the file size can be useful to choose the allocation size for a buffer. But yes, you have to keep reading until you get a zero-length read.

https://man7.org/linux/man-pages/man2/read.2.html

> On success, the number of bytes read is returned (zero indicates end of file), [...] It is not an error if this number is smaller than the number of bytes requested
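
For what it's worth, here's a minimal sketch (mine, not from the article or the man page) of that pattern: treat the fstat size only as a hint for the read size and keep reading until a zero-length result:

    import os

    def read_file(path: str) -> bytes:
        fd = os.open(path, os.O_RDONLY)
        try:
            # st_size is only a hint (it's 0 for many /proc and /dev files),
            # so fall back to a fixed chunk size and loop until EOF.
            hint = os.fstat(fd).st_size or 65536
            chunks = []
            while True:
                chunk = os.read(fd, hint)
                if not chunk:          # zero-length read signals end of file
                    return b"".join(chunks)
                chunks.append(chunk)
        finally:
            os.close(fd)

Compared to the single-read version above, this costs at least one extra read syscall just to observe EOF, which is the price of handling short reads and wrongly-sized hints correctly.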


Delightful article. Thank you author for sharing! I felt like I experienced every shock twist in surprise in your journey like I was right there with you all along.


Clickbait headline, but the article is great!


I think there might be a range of where people draw the line between reasonable headlines and clickbait, because I tend to think of clickbait as something where the "answer" to some question is intentionally left out to try to bait people into clicking. For this article, something I'd consider clickbait would be something like "Rust std fs is slower than Python?" without the answer after. More commonly, the headline isn't phrased directly as a question, but instead of saying something like "So-and-so musician loves burritos", it will leave out the main detail and say something like "The meal so-and-so eats before every concert", which is trying to get you to click and have to read through lots of extraneous prose just to find the word "burritos".

Having a hook to get people to want to read the article is reasonable in my opinion; after all, if you could fit every detail in the size of a headline, you wouldn't need an article at all! Clickbait inverts this by _only_ having enough substance that you could get all the info in the headline, but instead it leaves out the one detail that's interesting and then pads it with fluff that you're forced to click and read through if you want the answer.


Surprisingly I think this usage of clickbait is totally reasonable because it matches the author's initial thoughts/experiences of "what?! this can't be right..."


A related thing from times when it was common that memory layout artifacts had high impact on sw performance: https://en.wikipedia.org/wiki/Cache_coloring


Why is there need to move memory? Hardware cannot DMA data into non-page-aligned memory? Or Linux doesn't want to load non-aligned data?


The Linux page cache keeps data page-aligned so if you want the data to be unaligned Linux will copy it.


What if I don't want to use cache?


You can use O_DIRECT although that also forces alignment IIRC.


Pull out some RAM sticks.


Extremely well written article! Very surprising outcome.


would be lovely if ${cpu_vendor} would document exactly how FSRM/ERMS/etc are implemented and what the expected behavior is


It is documented; this is a performance bug.


I wonder what other things we can improve by removing Spectre mitigations and tuning hugepages, syscall latency, and core affinity


Mitigations did not have a meaningful performance impact here.


Disclaimer: The title has been changed to "Rust std fs slower than Python!? No, it's hardware!" to avoid clickbait. However I'm not able to fix the title in HN.


"Works on contingency? No, money down!"


you can mail hn@ycombinator.com and they can change it for you to whatever.


What's the TLDR on how... hardware performs differently on two software runtimes?


AMD's implementation of `rep movsb` instruction is surprisingly slow when addresses are page aligned. Python's allocator happens to add a 16-byte offset that avoids the hardware quirk/bug.


thank you, upvoted!


One of the very first things in the article is a TLDR section that points you to the conclusion.

> In conclusion, the issue isn't software-related. Python outperforms C/Rust due to an AMD CPU bug.


It is software-related. It's just that the CPU performs badly on some software instructions.


FSRM is a CPU feature embedded in the microcode (in this instance, amd-ucode) that software such as glibc cannot interact with. I refer to it as hardware because I consider microcode a part of the hardware.


> However, mmap has other uses too. It's commonly used to allocate large regions of memory for applications.

Slack is allocating 1132 GB of virtual memory on my laptop right now. I don't know if they are using mmap but that's 1100 GB more than the physical memory.


That is Chromium doing it, and yes, it is using mmap to create a very large, (almost certainly) contiguous range of memory. Many runtimes do this, because it's useful (on 64-bit systems) to create a ridiculously large virtually mapped address space and then only commit small parts of it over time as needed, because it makes memory allocation simpler in several ways; notably it means you don't have to worry about allocating new address spaces when simply allocating memory, and it means answering things like "Is this a heap object?" is easier.


Dolphin emulator has a recent example of this: https://dolphin-emu.org/blog/2023/11/25/dolphin-progress-rep...

Seems it's not without perils on Windows:

"In an ideal world, that would be all we have to say about the new solution. But for Windows users, there's a special quirk. On most operating systems, we can use a special flag to signal that we don't really care if the system has 32 GiB of real memory. Unfortunately, Windows has no convenient way to do this. Dolphin still works fine on Windows computers that have less than 32 GiB of RAM, but if Windows is set to automatically manage the size of the page file, which is the case by default, starting any game in Dolphin will cause the page file to balloon in size. Dolphin isn't actually writing to all this newly allocated space in the page file, so there are no concerns about performance or disk lifetime. Also, Windows won't try to grow the page file beyond the amount of available disk space, and the page file shrinks back to its previous size when you close Dolphin, so for the most part there are no real consequences... "


I’m not sure allocations mean anything practical anymore. I recall OSX allocating ridiculous amounts of virtual memory to stuff but never found OSX or the software to ever feel slow and pagey.


The way I describe mmap these days is to say it allocates address space. This can sometimes be a clearer way of describing it, since the physical memory will only get allocated once you use the memory (maybe never).
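
A rough way to see this from Python on Linux (my own sketch, nothing from the article; ru_maxrss units and exact numbers vary by OS): map a large anonymous region and note that RSS only grows for the pages you actually touch.

    import mmap
    import resource

    def max_rss():
        # Peak resident set size; reported in KiB on Linux, bytes on macOS.
        return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

    before = max_rss()
    m = mmap.mmap(-1, 1 << 30)           # map 1 GiB of anonymous address space
    after_map = max_rss()                # barely moves: nothing touched yet

    m[:64 * 1024] = b"x" * (64 * 1024)   # touch only the first 64 KiB
    after_touch = max_rss()              # grows roughly by the pages written

    print(before, after_map, after_touch)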


But isn't it still limited by RAM + page/swap file size?


Maybe I'm misunderstanding you but: no, you can allocate terabytes of address space on modern 64-bit Linux on a machine with only 8GB of RAM with overcommit. Try it; you can allocate 2^46 bytes of space (~70 TB) today, with no problem. There is no limit to the allocation space in an overcommit system; there is only a limit to the actual working set, which is very different.


You can do it without overcommit -- you can just back the mmap with a file


I don't think so, but it's difficult to find an actual reference. For sure it does overcommit like crazy. Here's an output from my mac:

% ps aux | sort -k5 -rh | head -1

xxxxxxxx 88273 1.2 0.9 1597482768 316064 ?? S 4:07PM 35:09.71 /Applications/Slack.app/Contents/Frameworks/Slack Helper (Renderer).app/...

Since ps displays vsz column in KiB, 1597482768 corresponds to 1TB+.


I don't know why but this really makes me laugh


Anyone else feeling the frequency illusion with rep movsb?

(https://lock.cmpxchg8b.com/reptar.html)


This is unrelated.


Either the author changed the headline to something less clickbaity in the meantime or you edited it for clickbait Pop_- (in that case: shame on you) - current headline: "Rust std fs slower than Python!? No, it's hardware!"


Based on the /r/rust thread, the author seemed to change the headline based on feedback to make it less clickbait-y


Sorry for the clickbaity title, I have changed it based on others' advice.


I disagree that it's clickbait-y. Diving down from Python bindings to ucode is ... not how things usually go. Doubly so, since Python is a very mature runtime, and I'd be inclined to believe they've dug up file-reading Kung Fu not available to the Average Joe.


Thanks for this unexpected, thriller-like read.

I'm impressed by your perseverance, how you follow through with your investigation to the lowest (hardware) level.


The author has updated the title and also contacted me. But unfortunately I'm no longer able to update it.


Totally unrelated but: this post talks about the bug being first discovered in OpenDAL [1], which seems to be an Apache (Incubator) project to add an abstraction layer for storage over several types of storage backend. What's the point/use case of such an abstraction? Anybody using it?

[1] https://opendal.apache.org/


[flagged]


It's worth reading the article. In this case, it seems to have been a hardware issue - as such, not directly related to Rust, C, or Python, but triggered by an instruction that was only called by some file loading routines. It's a very cool deep dive into debugging these sorts of issues.


Although it's true that it's a great article.

It states that Python is faster than C; that's not possible, since Python is built with C. There could be other reasons, such as libs or implementation.

Also note that the issue he had was not resolved.

The comment was about Python being seen as slow. But that is not always the case.

Once a dev understands the difference between the Python and C parts, Python can be quite performant, and efficient with memory.

But if one were to actually create an application that does more than just read a file, it will be slow again compared to C and Rust.


It's not stating Python is faster than C in general. This is just one very specific case where the page alignment of memory reads on AMD is involved.


It does make me wonder why pymalloc and jemalloc used page-aligned memory, but glibc didn't. That is odd. Other questions never answered: why did pyo3 add so much overhead? It was over half the difference between the two.


> It does make me wonder why pymalloc and jemalloc used page-aligned memory, but glibc didn't.

The root cause is not about page alignment. In fact, all allocators are aligned.

The root cause is AMD CPU didn't implement FSRM correctly while copying data from 0x1000 * n ~ 0x1000 * n + 0x10.

> Other questions never answered: why did pyo3 add so much overhead? It was over half the difference between the two.

OpenDAL Python Binding v0.42 does have many places to improve; for example, we can allocate the buffer in advance or use `read_buf` into an uninitialized vec. I skipped this part since they are not the root cause.


> It does make me wonder why pymalloc and jemalloc used page-aligned memory, but glibc didn't. That is odd.

Other way around: with glibc it was page-aligned; with the others, it wasn't.

This weird Zen performance quirk aside, I'd prefer page alignment so that an allocation like this which is a nice multiple of the page size doesn't waste anything (RAM or TLB), with the memory allocator's own bookkeeping in a separate block. Pretty surprising to me that the other allocators do something else.


The context of my initial comment is that python is slow, but can be fast.

From the article.

> In conclusion, the issue isn't software-related. Python outperforms C/Rust due to an AMD CPU bug.


> It states that python is faster then c, that is not possible since python is build with c. There could be other reasons such libs or implementation.

In a really strict sense it's impossible to talk about the speed of languages, since any turing complete language could be implemented in any other. In practice when people say X is faster than Y, they mean in practice as actually used; it's completely possible, for instance, that if you ask a large pool of C programmers to... I dunno, sum ten billion integers, and the same to a large pool of Python programmers, most of the C devs will reach for a `for` loop and most of the Python devs will reach for numpy and get vectorization for free, and if that's the case then it's reasonable to say that Python is faster. Or in the actual case at hand, writing the same(ish) program in Rust and Python on the same hardware does result in the Python version being faster, even though it's a bug from that exact hardware not getting along with something under the hood in the Rust version.


The NumPy library doesn't utilize Python's C layer for its memory management.

Instead, it maintains its own memory space. Consequently, transferring data from the Python environment into NumPy or vice versa is relatively slow.

The process of opening a file and traversing its data within Python relies heavily on the C code behind the scenes, resulting in near-C performance.

However, if one were to write an algorithm along the lines of LeetCode - one with O(n^2) time complexity - Python's performance will be slower compared to other languages. This difference could range from a small factor to potentially even a hundred times.


In this case it is a hardware bug and in no way attributable to Python being fast.


The bug is the other way around :)


reminds me of HP48 programming, there was sysrpl which was near asm speed, and then user rpl which was slooow


It's the hardware. Of course Rust remains the fastest and safest language and you must rewrite your applications in Rust.


You've been posting like this so frequently as to cross into abusing the forum, so I've banned the account.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


So Python isn't affected by the bug because pymalloc performs better on buggy CPUs than jemalloc or malloc?


It has nothing to do with pymalloc's performance per se.

Rather, the performance issue only occurs when using `rep movsb` on AMD CPUs with certain page/data alignment.

Pymalloc just happens to be using page/data alignment that makes `rep movsb` happy while Rust's default allocator is using alignments that just happen to make `rep movsb` sad.


Clickbait title but interesting article.

This has nothing to do with python or rust


>Rust std fs slower than Python!? No, it's hardware!

>...

>Python features three memory domains, each representing different allocation strategies and optimized for various purposes.

>...

>Rust is slower than Python only on my machine.

if one library performs wildly better than the other in the same test, on the same hardware, how can that not be a software-related problem? sounds like a contradiction.

Maybe it should be considered a coding issue and/or a missing feature? IMHO it would be expected that Rust's std library performs well without making all the users circumvent the issue manually.

The article is well investigated, so I assume the author just wants to show that the problem exists without creating controversy, because otherwise I can't understand it.


The root cause is AMD's bad support for rep movsb (which is a hardware problem). However, Python by default has a small offset when reading memory, while lower-level languages (Rust and C) do not, which is why Python seems to perform better than C/Rust. It "accidentally" avoided the hardware problem.


That extra 0x20 (32 byte) offset is the size of the PyBytes object header for anyone wondering; 64 bits each for the reference count, type object pointer, item count, and cached hash.
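
A quick way to sanity-check that from the REPL (my sketch; the numbers are for a typical 64-bit CPython build and may differ between versions):

    import sys

    # An empty bytes object is the 32-byte header plus a trailing NUL byte.
    print(sys.getsizeof(b""))         # 33 on a typical 64-bit CPython
    # Each payload byte adds exactly one byte on top of that header.
    print(sys.getsizeof(b"x" * 100))  # 133 on the same build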


Thank you, because I was wondering if some Python developer found the same issue and decided to just implement the offset. It makes much more sense that it just happens to work out that way in Python.


It doesn't just "seem" faster; "seem" would imply that it isn't actually the case. It is faster, currently, on that setup.

But since the Python runtime is written in C, the issue can't be Python vs C.


It's obviously not python vs c -- the time difference turns out to be in kernel code (system call) and not user code at all, and the post explicitly constructs a c program that doesn't have the slowdown by adding a memory offset. It just turns up by default in a comparison of python vs c code because python reads have a memory offset by default (for completely unrelated reasons) and analogous c reads don't by default. In principle you could also construct python code that does see this slowdown, it would just be much less likely to show up at random. So the python vs c comp is a total red herring here, it just happened to be what the author noticed and used as a hook to understand the problem.


C is a very wide target. There are plenty of things that one can do “in C” that no human would ever write. For instance, the C code generated by languages like nim and zig that essentially use C as a sort of IR.


That is true. With C a lot is possible.

> However, Python by default has a small offset when reading memory, while lower-level languages (Rust and C)

Yet if the runtime is made with C, then that statement is incorrect.


By going through that line of thought, you could also argue that the slow version in C and Rust is actually implemented in C, as memcpy is in glibc. Hence, Python being faster than Rust would also mean in this case that Python is faster than C.

The point is not that one language is faster than another. The point is that the default way to implement something in a language ended up being surprisingly faster when compared to other languages in this specific scenario due to a performance issue in the hardware.

In other words: on this specific hardware, the default way to do this in Python is faster than the default way to do this in C and Rust. That can be true, as Python does not use C in the default way, it adds an offset! You can change your implementation in any of those languages to make it faster, in this case by just adding an offset, so it doesn't mean that "Python is faster than C or Rust in general".


I recall when Pentium was introduced we were told to avoid rep and write a carefully tuned loop ourselves. To go really fast one could use the FPU to do the loads and stores.

Not too long ago I read in Intel's optimization guidelines that rep was now faster again and should be used.

Seems most of these things needs to be benchmarked on the CPU, as they change "all the time". I've sped up plenty of code by just replacing hand crafted assembly with high-level functional equivalent code.

Of course so-slow-it's-bad is different, however a runtime-determined implementation choice would avoid that as well.


I'm not sure it makes sense to pin this only on AMD.

Whenever you're writing performance-critical software, you need to consider the relevant combinations of hardware + software + workload + configuration.

Sometimes a problem can be created or fixed by adjusting any one / some subset of those details.


If that's a bug that only happens with AMD CPUs, I think that's totally fair.

If we start adding in exceptions at the top of the software stack for individuals failures of specific CPUs/vendors, that seems like a strong regression from where we are today in terms of ergonomics of writing performance-critical software. We can't be writing individual code for each N x M x O x P combination of hardware + software + workload + configuration (even if you can narrow down the "relevant" ones).


> We can't be writing individual code for each N x M x O x P combination of hardware + software + workload + configuration

That is kind of exactly what you would do when optimising for popular platforms.

If this error occurs on an AMD CPU used by half your users, is your response to your users going to be "just buy a different CPU", or are you going to fix it in code and ship a "performance improvement on XYZ platform" update?


Nobody said "just buy a different CPU" anywhere in this discussion or the article. And they are pinning the root cause on AMD which is completely fair because they are the source of the issue.

Given that the fix is within the memory allocator, there is already a relatively trivial fix for users who really need it (recompile with jemalloc as the global memory allocator).

For everyone else, it's probably better to wait until AMD reports back with an analysis from their side and either recommends an "official" mitigation or pushes out a microcode update.


The fix is that AMD needs to develop, test and deploy a microcode update for their affected CPUs, and then the problem is truly fixed for everyone, not just the people who have detected the issue and tried to mitigate it.


Yeah, but even if you'd take this on as your responsibility (while it should really be the CPU vendor fixing it), you would like to resolve it much lower in the stack, like the Rust compiler/standard library or LLVM, and not individually in any Rust library that happens to stumble upon that problem.


Well, if Excel were running at half the speed (or at half the speed of LibreOffice Calc!) on half of the machines around here, somebody at Redmond would notice, find the hardware bug and work around it.

I guess that in most big companies it suffices that there is a problem with their own software running on the laptop of a C* manager or of somebody close by. When I was working for a mobile operator, the antennas the network division cared about most were the ones close to the home of the CEO. If he could make his test calls with no problems, they had time to fix the problems in the rest of the network across the country.


You are going to be disappointed when you find out there's lots of architecture and CPU specific code in software libraries and the kernel.


That's completely fine in kernels and low-level libraries, but if I find that in a library as high-level as opendal, I'll definitely mark it down as a code smell.


It's a known issue for AMD and has been tested by multiple people, and by the data provided by the author. It's fair to pin this problem to AMD.


Years ago, Rust's standard library used jemalloc. That decision substantially increased the minimum executable size, though. I didn't publicly complain about it back then (as far as I can recall), but perhaps others did. So the Rust library team switched to using the OS's allocator by default.

Maybe using an alternative allocator only solves the problem by accident and there's another way to solve it intentionally; I don't yet fully understand the problem. My point is that using a different allocator by default was already tried.


> I didn't publicly complain about it back then (as far as I can recall), but perhaps others did. So the Rust library team switched to using the OS's allocator by default.

I've honestly never worked in a domain where binary size ever really mattered beyond maybe invoking `strip` on a binary before deploying it, so I try to keep an open mind. That said, this has always been a topic of discussion around Rust[0], and while I obviously don't have anything against binary sizes being smaller, bugs like this do make me wonder about huge changes like switching the default allocator where we can't really test all of the potential side effects; next time, the unintended consequences might not be worth the tradeoff.

[0]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...



