Check out both how Python strings are implemented and the string type’s semantics in the language.
Strings are sequences of bytes only in the sense that everything stored in memory is a sequence of bytes. The semantics matter far more, and they aren’t the same as a sequence of bytes.
Also many languages make strings immutable and byte arrays mutable.
I didn't know that at some point, then I knew that and found it obvious, and now I don't know it again.
Strings are very very not sequences of bytes. Strings are a semantic thing. There may be a sequence of bytes in some representation of a particular string, but even then those bytes are not enough to define a string without other stuff. An encoding, at the very least. But even then, there are many things that could be described as a "string". A sequence of code points, perhaps? Or scalar values? Grapheme clusters?
Not to mention that you may not even have a linear sequence of bytes at the bottom level. You might have a rope (cons cell), or an intern pointer, or...
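The gap between bytes and string semantics is easy to demonstrate in Python; a small sketch using only the standard library:

```python
import unicodedata

# The same human-readable string corresponds to different byte
# sequences depending on the encoding chosen.
s = "café"
utf8 = s.encode("utf-8")     # b'caf\xc3\xa9'
latin = s.encode("latin-1")  # b'caf\xe9'
print(utf8, latin)

# And the same visible glyph can be different code point sequences:
composed = "\u00e9"      # 'é' as a single code point
decomposed = "e\u0301"   # 'e' followed by a combining acute accent
print(composed == decomposed)           # False
print(len(composed), len(decomposed))   # 1 2

# Unicode normalization maps both spellings to the same form.
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```

So even once you fix an encoding, "length" and "equality" still depend on whether you mean code points, normalized forms, or grapheme clusters.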
As a blogger who makes similar assumptions, I think it comes down to how a lot of us from that time "grew up" similarly. Sockets became relevant later in my career compared to everything else listed here.
As someone younger, ports and sockets appeared very early in my learning. I'd say they appeared in passing before programming even, as we had to deal with router issues to get some online games or p2p programs to work.
And conversely, some of the other topics are in the 'completely optional' category. Many of my colleagues work on IDEs from the start, and some may not even have used git in its command line form at all, though I think that extreme is more rare.
>The term socket dates to the publication of RFC 147 in 1971, when it was used in the ARPANET. Most modern implementations of sockets are based on Berkeley sockets (1983), and other stacks such as Winsock (1991).
I read RFC 147 the other day, and it turns out that by "socket" it means "port number", more or less (though maybe they were proposing to also include the host number in the 32-bit "socket", which was quietly dropped within the next few months). Also Berkeley sockets are from about 01979, which is a huge difference from 01983.
I hadn't even realized that while I was reading the article, but it is amusing!
Though one explanation is that, for the other stuff the writer doesn't explain, one can just guess and be half right; even if the reader guesses wrong, it isn't critical to the bug. But sockets and capabilities are the concepts required to understand the post.
It still is amusing, and I wouldn't have realized it if you hadn't pointed it out.
I found it interesting that they know how to use strace, but not how to list the open files held by a process, which to me seems simpler. Again, not criticism, just an observation; I enjoyed the article.
Given the "(hi Julia!)" immediately after the strace shenanigans, I interpreted this as a third-party hint; the author most likely had not used strace before.
The author is both an example of, and an example for, how we can get caught in "bubbles" of tools we know and use (or don't), and blog posts like this are great for discovery (I didn't know, until today, that git invokes any binary in the path, like his "git re-edit").
I discovered that by accident. I had a script called git-pr that opened a pull request on GitHub using the last commit message and then pushed it to Slack for approval. I was trying to rewrite it to add a description and wondered why "git pr" pushed an empty message to Slack.
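The mechanism here is that `git foo` falls back to executing any executable named `git-foo` found on the PATH. A sketch (the subcommand name `hello` is invented for the demo, and it assumes `git` and `/bin/sh` are available):

```python
import os
import stat
import subprocess
import tempfile

# Sketch of git's external-subcommand dispatch: any executable named
# git-<name> on the PATH can be invoked as `git <name>`.
with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "git-hello")
    with open(script, "w") as f:
        f.write("#!/bin/sh\necho hello from a custom subcommand\n")
    # Make the stub executable so git's PATH lookup will find it.
    os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)
    env = dict(os.environ, PATH=d + os.pathsep + os.environ.get("PATH", ""))
    out = subprocess.run(["git", "hello"], env=env,
                         capture_output=True, text=True)

print(out.stdout.strip())  # hello from a custom subcommand
```

This is the same dispatch that makes `git pr` run a `git-pr` script, which is why renaming or editing such a script changes the behavior of the `git` "subcommand" too.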
All of the things you listed are ops topics. But sockets are a programming concept.
I would expect a person with 10+ years of Unix sysadmin experience — but who has never programmed directly against any OS APIs, “merely” scripting together invocations of userland CLI tools — to have exactly this kind of lopsided knowledge.
(And that pattern is more common than you might think; if you remember installing early SuSE or Slackware on a random beige box, it probably applies to you!)
Years ago I worked on contract for a large blue three-letter company doing outsourced server management for the fancy credit card company. The incident in question happened before my time on the team, but I heard about it firsthand from the server admin (let's call him Ben) who had been at the center of it.
The data center in question was (IIRC) 160K sq ft of raised floor spread across multiple floors in a major metropolitan downtown area. It isn't there anymore. Windows, Unix, Linux, mainframe, SAN, all the associated fun stuff.
Ben was working the day after Thanksgiving, decommissioning a system. Full software and physical decommission, approved through all the proper change management procedures.
As part of the decommission, Ben removed the network cables from under the raised floor. Standard snip-the-connector-off-and-pull-it-back. Easy. Little did he know that the network cable was ever so slightly entangled with another cable. Not enough to give him pause when pulling it, though. It wouldn't have been an issue if the other cable had been properly latched in its ports. It wasn't. That little pull ended up pulling the network connection out of a completely unrelated system. A system managed by a completely different group. A system responsible for credit card processing. On USA Black Friday.
Oops. CC processing went down. It took far too long to resolve. Amazingly, Ben didn't lose his job. After all, he followed all the processes and procedures. Kudos to the management team who kept him protected.
Change management and change freezes were far more stringent by the time I joined the team. There was also now a raised floor infrastructure group and no one pulled a tile without their involvement.
> This computer stuff is amazingly complicated. I don't know how anyone gets anything done.
I wonder what could be done to make this type of problem less hidden and easier to diagnose.
The one thing that comes to mind is to have the loader fail fast. For security reasons, the loader needs to ensure TMPDIR isn't set. Right now it accomplishes this by un-setting TMPDIR, which leads to silent failures. Instead, it could check if TMPDIR is set, and if so, give a fatal error.
This would force you to unset TMPDIR yourself before you run a privileged program, which would be tedious, but at least you'd know it was happening because you'd be the one doing it.
(To be clear, I'm not proposing actually doing this. It would break compatibility. It's just interesting to think about alternative designs.)
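A minimal sketch of that fail-fast idea (the variable list and function name are invented for illustration; glibc's real list lives in its UNSECURE_ENVVARS definition):

```python
# Hypothetical fail-fast policy: instead of silently scrubbing
# dangerous variables in a privileged context, refuse to run at all.
UNSAFE_VARS = ("TMPDIR", "LD_PRELOAD", "LD_LIBRARY_PATH")  # illustrative subset

def offending_vars(environ, privileged):
    """Return the unsafe variables that would trigger a fatal error."""
    if not privileged:
        return []
    return [v for v in UNSAFE_VARS if v in environ]

# A setuid-style program started with TMPDIR set would abort loudly:
print(offending_vars({"TMPDIR": "/mnt/tmp"}, privileged=True))   # ['TMPDIR']
print(offending_vars({"TMPDIR": "/mnt/tmp"}, privileged=False))  # []
```

The trade-off is exactly the one described above: the failure becomes visible, at the cost of making every privileged invocation in a customized environment tedious.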
Then you'd have to add a wrapper script to su and similar programs that unsets all relevant environment variables. That set is not necessarily fixed; a future version of glibc may well require clearing NSS_FILES_hosts as well.
(This is about UNSECURE_ENVVARS, if someone needs to find the source location.)
Making these things more transparent is a good idea, of course, but it is somewhat hard. Maybe we could add Systemtap probes when environment variables are removed or ignored.
A related issue is that people stick LD_LIBRARY_PATH and LD_PRELOAD settings into shell profiles/login scripts and forget about them, leading to hard-to-diagnose failures. More transparency there would help, but again it's hard to see how to accomplish that.
Mh, I am starting to dislike this kind of hyper-configurability.
I know when this was necessary and used it myself quite a bit. But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like systemd's private tempdirs? (Which broke a lot of assumptions about tmpdirs and caused a bit of ruckus, but on the other hand, I see their point by now.)
I'm honestly starting to wonder about a lot of these really weird, prickly and fragile environment variables which cause security vulnerabilities, if low-overhead virtualization and namespacing/containers are available. This would also raise the security floor.
> But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like systemd's private tempdirs?
No, because unless you're already root (in which case you wouldn't have needed the binary with the capability in the first place), you can't make a mount namespace without also making a user namespace, and the counterproductive risk-averse craziness has led to removing unprivileged users' ability to make user namespaces.
It's probably true that there are setuid programs that can be exploited if you run them in a user namespace. You probably need to remove setuid (and setgid) as Plan 9 did in order to do this.
It is complex. There was another posting on HN where commenters were musing over why software projects have a much higher failure rate than any other engineering discipline.
Are we just shittier engineers, is it more complex, or is the culture such that we output lower quality? Does building a bridge require less cognitive load than a complex software project?
I think it's a cultural acceptance of lower quality, happily traded for deft execution, over and over.
We're better at encapsulating lower-level complexities in e.g. bridge building than we are at software.
All the complexities of, say, martensite grain boundaries and what-not are implicit in how we use steel to reinforce concrete. But we've got enough of it in a given project that the statistical summaries are adequate. It's a member with such-and-such strength in tension, and such-and-such in compression, and we put a 200% safety factor in and soldier on.
And nobody can take over the ownership of leftpad and suddenly falsify all our assumptions about how steel is supposed to act when we next deploy ibeam.js ...
The most well understood and dependable components of our electronic infrastructure are the ones we cordially loathe because they're composed in shudder COBOL, or CICS transactions, or whatever.
Exactly. The properties rarely matter outside the item. The column is of such-and-such a strength, that's it. But when things get strange we see failures. Perfect example: Challenger. Was the motor safe sitting on the pad? Yes. Was the motor safe in flight? Yes. Was the motor safe at ignition? On the test stand, yes. Stacked for launch, ignition caused the whole stack to twang--and maybe the seals failed....
> Are we just shittier engineers, is it more complex [...]
Both IMO: first, anybody could buy a computer during the last three decades, dabble in programming without learning basic concepts of software construction and/or user-interface design and get a job.
And copying bad libraries was (and is) easy. I still get angry when software tells me "this isn't a valid phone number" when I cut/copy/paste a number with a blank or a hyphen between digits. Or worse, libraries which expect the local part of an email address to only consist of alphanumeric characters and maybe a hyphen.
Second, writing software definitely is more complex than building physical objects. Because there are "no laws" of physics which limit what can be done. In the sense that physics tell you that you need to follow certain rules to get a stable building or a bridge capable of withstanding rain, wind, etc.
Absolutely. As an Electrical Engineer turned software guy, Ohm's/Kirchhoff's laws remain as valid and significant as when I was taught them 35 years ago. For software however, growth of hardware architectures/constraints made it possible to add much more functionality. My first UNIX experience was on PDP-11/44, where every process (and kernel) had access to an impressive maximum of 128K of RAM (if you figured out the flag to split address and data segments). This meant everything was simple and easy to follow: the UNIX permission model (user/group/other+suid/sgid) fit it well. ACLs/capabilities etc were reserved for VMS/Multics, with manuals spanning shelves.
Given hardware available to an average modern Linux box, it is hardly surprising that these bells and whistles were added - someone will find them useful in some scenarios and additional resource is negligible. It does however make understanding the whole beast much, much harder...
There are no big wins left in bridge building, so there is no justification for taking big risks. Also, in most software project failures, the only cost is people's time; no animals are harmed, no irreplaceable antique guitars are smashed, no ecosystems are damaged, and no buses of schoolchildren plunge screaming into an abyss.
Your software startup didn't get funded? Well, you can go back and finish college.
Setting a capability on the perl executable seems like a very bad idea. That effectively grants the capability to everything that is able to invoke perl (without being restricted by NO_NEW_PRIVILEGES).
Yeah, why did he want non-root perl to be able to bind to low-numbered ports? Seems like one of those typical footguns of applying non-standard configurations.
My reading is that the author didn't do that; rather, his/her employer's configuration system had done so.
Setting TMPDIR to /mnt/tmp seems also to come from that.
I would guess both were the result of someone who didn't really know what they were doing trying things until they found something that got what they needed to work, then pushed that out without understanding the broader implications.
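For context on why the capability matters: on Linux, binding a listening socket to a port below 1024 normally requires root or CAP_NET_BIND_SERVICE, and granting that capability to the interpreter binary hands it to every script. A quick sketch of the default restriction (the helper name is made up; port 80 is just an example of a privileged port):

```python
import socket

def try_bind(port):
    """Attempt to bind a TCP socket; return None on success, else the errno."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

print(try_bind(0))  # None: port 0 asks the kernel for an ephemeral port
# try_bind(80) would return errno.EACCES for an unprivileged
# process on a default-configured Linux box.
```

With the capability set on perl itself, any user who can run perl escapes that check, which is the footgun being described.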
Peter Weller, playing Buckaroo Banzai, is late for his military-particle-physics-interdimensional-jet-car test because he's helping Jeff Goldblum's character with neurosurgery. Later that day he will go play lead guitar in an ensemble.
Scriptwriting gurus advise that your protagonist should have flaws and character progression. The writers of this movie disagree.
Kevin Smith has an introduction to this movie where he calls it a true piece of art: "It doesn't care what you bring to the table, it brings itself to the table and says: figure it out". https://youtu.be/N8R8wmlggwc?si=sva2-jF1Kl5eFsU4
A couple of paragraphs in, I started wondering if it was going to turn out to be systemd-tmpfiles (in Ubuntu, 16.04 I think? The symptom was "about 10 days after login, X11 forwarding over ssh stopped working, but local apps could still open windows just fine." I remember it as an Ubuntu-specific misconfiguration, though I think the systemd defaults were changed to be less of a footgun in response...)
I was pleased that it was more interesting than that, and I want people to write more twitchy-detail-post-mortems like this :-)
Vibe coded sleeper bugs have always been a thing, they just came from the boss's nephew who was still learning PHP at the time and left several years ago.
Also, computers in 2015 were not meaningfully less complex than today. Certainly not when the topic is weird emacs and perl interactions.
The problem isn't that AI is doing something new, we all know that it isn't. The problem is that the boss' nephew is becoming the rule now rather than the exception.
Buckaroo Banzai: You can check your anatomy all you want, and even though there may be normal variation, when it comes right down to it, this far inside the head it all looks the same. No, no, no, don’t tug on that. You never know what it might be attached to.
See, this is the point, for me, where it started to look like a problem. You know, I wanted to sacrifice the precentral vein in order to get some exposure, but because of this guy's normal variation, I got excited, and all of a sudden I didn't know whether I was looking at the precentral vein, or one of the internal cerebral veins, or the vein of Galen, or the vascular vein of Rosenthal. So, on my own, to me, at this point, I was ready to say that's it, let's get out.
I have an overly reductive take on this—it’s Unix environment variables.
You have your terminal window and your .bashrc (or equivalent), and that sets a bunch of environment variables. But your GUI runs with, most likely, different environment variables. It sucks.
And here’s my controversial take on things—the “correct” resolution is to reify some higher-level concept of environment. Each process should not have its own separate copy of environment variables. Some things should be handled… ugh, I hate to say it… through RPC to some centralized system like systemd.
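The "separate copy per process" behavior is easy to see; a sketch (DEMO_VAR is an invented variable name):

```python
import os
import subprocess
import sys

# A child process receives a snapshot of the environment at spawn time;
# later changes in the parent do not propagate to it, and vice versa.
env = dict(os.environ, DEMO_VAR="from-parent")
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('DEMO_VAR'))"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # from-parent
# The parent's own os.environ was never modified: we only passed a copy.
```

That copy-at-spawn semantics is exactly why a terminal session and a GUI session can disagree about the environment indefinitely.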
I am usually amused by the way really competent people judge others' context.
This post assumes understanding of:
- emacs (what it is, and terminology like buffers)
- strace
- linux directories and "everything is a file"
- environment variables
- grep and similar
- what git is
- the fact that 'git whatever' works to run a custom script if git-whatever exists in the path (this one was a TIL for me!)
- irc
- CVEs
- dynamic loaders
- file privileges
but then feels important to explain to the audience that:
>A socket is a facility that enables interprocess communication
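To be fair to the quoted definition, it is at least easy to make concrete. A minimal sketch using Python's standard library, with a connected pair of sockets exchanging bytes between two endpoints:

```python
import socket

# socketpair() returns two already-connected endpoints: about the
# simplest possible demonstration of sockets as an IPC facility.
a, b = socket.socketpair()
a.sendall(b"ping")
msg = b.recv(4)
print(msg)  # b'ping'
a.close()
b.close()
```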