Didn’t really get the point of the post as it just presents something without a conclusion.
9X% of users do not care about a <1% drop in performance. I suspect we get the same variability just by going from one kernel version to another. The impact from all the Intel mitigations that are now enabled by default is much worse.
However I do care about nice profiles and stack traces without having to jump through hoops.
Asking people to recompile an _entire_ distribution just to get sane defaults is wrong. Those who care about the last drop should build their custom systems as they see fit, and they probably already do.
it does present a conclusion: once the kernel supports .sframe, it will be all-around superior to frame pointers, and a better default for distros to use.
This comparison is pretty misleading. An accessibility issue prevents someone from being able to use software effectively. Not having localized text would have a similar impact. A ~1% performance impact on the other hand is the minuscule downside of improving debugging, profiling and error reporting for an entire OS. And that's not just a minority of users, as tons of software will automatically gather stack traces for bug reports.
There's basically no downside to fixing accessibility issues or adding new language translations other than the work involved in doing so. (And yes, maintaining translations over time is hard, but most projects let them lag during development, so they don't directly hold anything back.) There is a rather glaring downside to this performance optimization, whose upside is sometimes entirely within run-to-run variance and can be blown away by almost any other performance tweak. It's clear the optimization has some upsides, but an extra register and saving some trivial loads/stores just isn't as big of a deal on modern processors that are loaded to the gills with huge caches and deep pipelines.
I guess I don't care that much about -fomit-frame-pointer in the grand scheme of things, but I think enabling it in distributions was ultimately a mistake. If some software packages benefited enough from it, it could've just been done only for those packages. Doing it across the system is questionable at best...
But does what you care about matter enough to be the default?
Are you the majority?
Evaluate "majority" this way: For every/any random binary in a distro, out of all the currently running instances of that binary in the world at any given moment, how many of those need to be profiled?
There is no way the answer is "most of them".
You have a job where you profile things, and maybe even you profile almost everything you touch. Your whole world has a high quotient of profiling in it. So you want the whole system built for profiling by default. How convenient for you. But your whole world is not the whole world.
But it's not just you, there are, zomg thousands, tens of thousands, maybe even hundreds of thousands of developers and ops admins the same as you.
Yes, and? Is even that most of the installed instances of any given executable?
No way.
Or maybe yes. It's possible. Can you show that somehow? But I will guess no way and not even close.
> Evaluate "majority" this way: For every/any random binary in a distro, out of all the currently running instances of that binary in the world at any given moment, how many of those need to be profiled?
> There is no way the answer is "most of them".
This is an absurd way to evaluate it. All it takes is one savvy user to report a performance problem that developers are able to root-cause using stack traces from the user's system. Suppose they're able to make a 5% performance improvement to the program. Now all users' programs are 5% faster because of the frame pointers on this one user's system.
At this point people usually ask: but couldn't developers have done that on their own systems with debug code? But the performance of debug code is not the same as the performance of shipping code. And not all problems manifest the same on all systems. This is why you need shipping code to be debuggable (or instrumentable or profileable or whatever you want to call it).
I regularly have users run Sysprof and upload it to issues. It's immensely powerful to be able to see what is going on systems which are having issues. I'd argue it's one of the major reasons GNOME performance has gotten so much better in the recent-past.
You can't do that when step one is reinstall another distro and reproduce your problem.
Additionally, the performance-sensitive code that could plausibly fall into that 1% range (hint: there isn't much) rarely uses the system libraries in a way that would incur this overhead anyway. Those apps can be compiled with frame pointers disabled. And where they do use system libraries (qsort, bsearch, strlen, etc.), the frame pointer cost is negligible compared to the work being performed. Your margin of error is way larger than the theoretical overhead.
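For the curious, the machine-level difference is easy to inspect yourself. A rough sketch, assuming an x86-64 gcc is on the path (register names and prologues differ on other architectures; the file names here are just placeholders):

```shell
# What -fomit-frame-pointer actually changes: whether each function's
# prologue saves %rbp and uses it as a frame base.
cat > add_demo.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF
gcc -O2 -fno-omit-frame-pointer -S -o with_fp.s    add_demo.c
gcc -O2 -fomit-frame-pointer    -S -o without_fp.s add_demo.c
# with_fp.s saves/restores %rbp around the body; without_fp.s has
# no %rbp bookkeeping at all for this leaf function.
grep -c '%rbp' with_fp.s
```

That save/restore pair plus one mov is the entire per-call cost; the real debate is the register pressure from reserving %rbp, which mattered far more on register-starved 32-bit x86 than it does on x86-64.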
Better analogy: you're paying 30% to apple, and over 50% in bad payday loans, and you're worried about the 3% visa/stripe overhead ... that's kinda crazy. But that's where we are in computer performance, there's 10x, 100x, and even greater inefficiencies everywhere, 1% for better backtraces is nothing.
Absolutely. We've gotten numerous double digit performance improvements across applications, libraries, and system daemons because of frame-pointers in Fedora (and that's just from me).
Performance problems matter to the people who have them, who often are in an inconvenient place. Having the ability for profiling to just work means that it's easy to help these people.
I think you are trying to make this out to be something that it isn't.
Visibility at the “cost” of negligible impact is more important than raw performance. That’s it.
I’m a regular user of Linux with some performance sensitivity that does not go as far as “I _need_ that extra register!”. That’s what the majority of developers working on Linux are like. I think it’s up to _you_ to prove the contrary.
> Evaluate "majority" this way: For every/any random binary in a distro, out of all the currently running instances of that binary in the world at any given moment, how many of those need to be profiled?
Most systems need to generate useful crash reports. Even end user systems. What kind of system doesn't need them? How else are developers supposed to reliably address user complaints?
Theoretically, there are alternative ways to generate stacktraces without using frame pointers. The problem is, they're not nearly as ubiquitous and require more work to integrate them in existing applications and workflows. That makes them useless in practice for a large number of cases.
I think it's ridiculous to question that since obviously, yes, many people have decided exactly that. I see no point myself and I'm even in the field. And I am not in charge of all the distributions which disabled it by default.
So, "yes". In fact "yes, duh?" Talk about head in sand...
That strikes me as an insane take (not to mention blatantly inaccurate), but I take your point that this is a common one for distribution-maintainers to have.
> 9X% of users do not care about a <1% drop in performance.
Except Python got opted out of the frame pointer change due to benchmarks showing slowdowns of up to 10%. The discussion around that had the great idea of just adding a pragma to flat out override the build setting. So in the end that "1%" reduction claim only holds if everything even remotely affected silently ignores the flag.
Any link to the fix or documentation about it? I could find added perf support but did not see anything about improved performance related to frame pointer use.
https://pagure.io/fesco/issue/2817#comment-826636 will probably get you started into the relevant paths. Python 3.12 was going to include frame-pointers anyway for perf to boot. So they needed to fix this regardless.
I think he means that the stack is not something that you are forced to work with when programming in assembly. You can put data wherever you want (and are allowed to), and jmp into whatever random memory address you want. You can use CPU instructions that handle stack management for you, but you don’t _have_ to.
Somewhere on my random ideas pile is to write a queue-oriented operating system - you know how we have threads? What if we didn't have threads, just a list of things to do, to run on the next available processor? (Haskell's VM calls them sparks)
It actually is incredibly human-centered, to the point that humans were made in _his_ image and the Sun is supposed to revolve _around_ the Earth.
But all this makes sense when you realize it’s a primitive human myth made by primitive people with limited understanding of the universe and the world around them.
Not to get to into the weeds but being made in the image of God does not necessarily involve physical appearance. Also, it has nothing to do with being heliocentric or geocentric.
Well, creation was made by one god, who happens to look just like us, and the story never once mentions other peoples. To be a Christian astronomer you have to believe one of these:
1. God is wasting the vast, vast, vast majority of the universe on emptiness while he focuses on his fave planet.
2. The universe is full of humans, in which case Jesus is presumably getting re-crucified every few seconds to absolve new groups. Or I guess maybe he split up into a trillion copies that all got crucified at once? Or we’re the 1-in-a-trillion lucky ones that everyone else just gets to hear about?
3. The rest of the universe has aliens because god got bored/wanted things for us to play with, as his super special favorite species. The aliens don’t get to look like god, ofc.
No offense intended to anyone, but I don’t see how you could possibly accept Christian doctrine without necessarily thinking of earth as unfathomably special.
Oh I do think it’s special. But thinking of the universe as a waste because there are lots of uninhabited planets is pretty human focused. Why would it be a waste for an infinite god with an infinite amount of attention to spend time creating a large universe?
How arrogant for anyone to look at these incredible pictures and think they know ANYTHING at all. We may be the center of it all, we may not, this may be a massive simulation, or a massive random accident. The only correct answer is to admit we know nothing. Humans are so fixated on knowing everything.
I think this line of reasoning does a disservice to all the scientists and thinkers who contributed to a considerable amount of knowledge. We learned so much about the universe in the past 100 years, it's impudent to call this nothing just because it's not everything.
I'm talking about filling in the blanks in things that are not yet known, not discounting all scientific progress. The point was that the person I replied to held just as irrational a belief as the person they were criticizing. Neither of them knows the deeper nature of reality and whether humanity was created by something or is just an accident of nature, so both of their replies are as absurd as one another's.
To be fair, as a christian I don’t believe in goblins and unicorns. I do believe in something though. I suppose you do too, and in the end our core beliefs might not even be that different.
But that’s just where you happen to draw the line and is not really relevant. You do believe in whatever your bestiary happens to contain, like magical burning bushes, giants, and super-strong men with magical hair. Whether a goblin is in there or not is an implementation detail.
I haven’t read Endurance (it’s on the list!) but was deeply impressed by this quote from a fellow explorer:
“For scientific discovery, give me Scott; for speed and efficiency of travel, give me Amundsen; but when you are in a hopeless situation, when you are seeing no way out, get down on your knees and pray for Shackleton.”
I wanted to like awk and tried really hard but in the end was disappointed by what I see as unnecessary complications or limitations in the language. For example, it has first-class support for regexes, but only for matching. You can’t do ‘s/foo/bar/’. I also found string manipulation to be cumbersome with the string functions. I would have expected a string processing language to have better primitives for this. And function arguments/variables are just a mess; it’s hard to understand how they came up with that design. It’s also quirky and unintuitive in some places you would not expect. Take the non-working example from the article:
awk -v FS=';' -v OFS=',' 1
I expect this to change the separator in the output. Period. The “efficiency” argument for why it doesn’t work just doesn’t cut it for me. First, it’s very simple to do a one-time comparison of FS and OFS; if they are different then you know you _have_ to perform the change, because the user is asking you! If I do this in reality and it doesn’t work I just switch over to sed or perl and call it a day.
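For what it’s worth, the standard workaround is tiny: awk only rejoins the record with OFS when a field is assigned, so a no-op assignment forces it. A minimal sketch:

```shell
# awk rebuilds $0 with OFS only after a field assignment,
# so the no-op $1=$1 forces the separator change.
echo 'a;b;c' | awk -v FS=';' -v OFS=',' '1'          # unchanged: a;b;c
echo 'a;b;c' | awk -v FS=';' -v OFS=',' '{$1=$1} 1'  # rejoined:  a,b,c
```

Whether that behavior is a sensible optimization or a trap for users is exactly the disagreement here.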
All in all, perl -pe is a better awk. And for data processing I switched to miller. It has its idiosyncrasies as well but it’s much better for working with structured records.
Damn those far-too-common-word names! Betcha if I googled that, I'd get page after page of results pointing to actor Johnny Miller... Any link?
[EDIT:] Naah, sorry, just had to search for "miller language" (and remove the superfluous "academy" from "miller language academy" that Google "helpfully" added), and found: https://miller.readthedocs.io/en/latest/miller-programming-l... . Thanks, interesting!
You are the captain and pilot of the ferry. And it is such a complicated ferry that you are extensively trained on how to navigate it. It is, in fact, so complicated and different that there are other ferries around but you can only sail yours. You can't just hop on another one and do the same trip.
You took this ferry to an island in the middle of nowhere and after you got there you realized the ferry was broken. Nobody knows how bad... it might snap in half in the middle of the return trip.
You have plenty of provisions for the next few months and you are not alone on the island. Other ferries still come and go but you can't just hop on those, you don't know how to operate them.
They sent one of those other ferries just for you with a smaller crew to accommodate you. Without it you are not coming back.
This almost reads like science fiction, what an incredible attack from a technical POV. A couple of thoughts:
1. The beepers were compromised and have been for a long time. I don't know how easy it is to exfiltrate data from them if they are receive-only devices. At any rate it shows that Israel is capable of intercepting and manipulating low-tech comms. What's left for Hezbollah to use?
2. The next step is to hack into hospital record systems and get a list of all patients admitted today.
> Hezbollah recently upgraded their pagers with the American University of Beirut
Please do not conflate Hezbollah and AUB.
The claim is that AUB medical school pagers were replaced a week or so ago. This is either pure coincidence, false or fake news to imply that AUB has Israeli operatives, or indeed that the pagers used were compromised and that the USA was aware of the impending attack and did not want to harm AUB medical staff - who probably are mostly not connected with Hezbollah.
I wasn't thinking years, but months but I didn't know about the recent upgrade. At any rate, if the pagers allowed any data exfiltration they have been collecting that data since whenever the last upgrade was.
The reason for using pagers instead of phones is that they are receivers only, they do not transmit, therefore they cannot be localized.
So no data exfiltration was possible using the pagers. The only purpose of the modified pagers was to maim or kill their possessors, by detonating all of them simultaneously.
It is still way more difficult to localize a local oscillator (especially one that's trying even slightly to shield itself) than something trying to transmit to a tower a few miles away.
This makes an assumption that Israel cares about making its list of targets as small as possible. Israel has shown over and over again that it is happy casting a very wide net when labelling people as legitimate targets, using fuzzy machine learning to label large numbers of people without any direct evidence. Israel also has a higher acceptance of the deaths of innocents than any other western-aligned and -supplied nation, literally happy if dozens or more civilians are killed in order to kill one member of Hamas or Hezbollah. They or their proxies then blame their opponents for the deaths of innocents, ignoring that the accepted rules of war are that civilian deaths should be minimized.
For sources, search for "Israel Lavender system" and pick your media source of choice.
I think the value is in knowing the network and cross-reference against it. Innocent bystanders or people who happened to just go to the hospital today will probably fall off during this process. Not to mention that you can filter out by the type of injury to get a more accurate list.
Yikes, the "list of patients" thing is the scariest part of all of this: if they feed that into some monster AI that creates new targets... I can only imagine the diminishing accuracy of who is really deserving of being targeted
But that is exactly what modern AI-era big data warfare would look like. By its nature, and by choice, less accuracy / more innocent targets, but oh well
I heard a story according to which a similar scheme was used in the Algerian war of independence, so this is more retro than sci-fi. Radio sets were left by the French military for the Algerians to grab. The explosive was hidden in the frame and components, and the trigger was a specific audio frequency. The device looked exactly like a stock military radio set even if you disassembled it.