I grew up on 68k Macs, so DOS was never something I thought much about, aside from the occasional boot disk to run some firmware procedure later on, once Windows was well established.
Then later, from a retrocomputing standpoint, I've come to see that it's pretty fascinating:
1) The sheer volume of commercial software.. which is readily available on winworld, vetusware, and archive.org. A lot of it has awesome character-mode UIs (Borland's early IDEs are really spectacular; Lotus 1-2-3 and WordPerfect are still taken seriously by some users).
2) The memory model is quixotic and an interesting homage to the chaotic evolution of x86 that most later operating systems elide by requiring a 386. The 286 and 386 have drastically different protection schemes. EMS and XMS. The eventual DOS extenders and standards like VCPI, DPMI. It's honestly a mess but somehow interesting to see how people solved difficult problems.
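To make the quirkiness concrete, here's a small sketch (plain Python, numbers only) of real-mode segment:offset addressing, the foundation everything like EMS and the HMA was layered on; many different pairs alias the same physical byte:

```python
def phys(segment, offset):
    """Real-mode 8086 address translation: segments are 16-byte 'paragraphs'."""
    return (segment << 4) + offset

# Many segment:offset pairs alias the same physical address, e.g. the
# CGA/EGA text buffer at 0xB8000:
assert phys(0xB800, 0x0000) == phys(0xB000, 0x8000) == 0xB8000

# On an 8086 the sum wraps at 1 MB; with the A20 line enabled on a 286+,
# 0xFFFF:0x0010 and up reach ~64 KB past 1 MB -- the HMA that DOS=HIGH
# uses to relocate most of the kernel.
hma_start = phys(0xFFFF, 0x0010)
assert hma_start == 0x100000  # first byte above 1 MB
```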
If anything, a lot of the modern developer experience has suffered compared to the early Borland IDEs. One could easily say we've regressed.
They were focused, immediate and effective.
If anything, today you'd miss the code-navigation features (go to definition, go back, go forward), and of course LSP is genuinely useful; once you don't have it, it hurts (instant errors, easy refactoring...).
Give me something like the Borland IDEs (FAST!) and some of the modern features (they can be slower, they're only as fast as the LSP server implementation anyway) and I'm there!
I did a quick proof of concept, mostly while learning to write code editors, but I have not gotten it to the point of being useful [1].
The Free Pascal software distribution includes a FLOSS look-alike of Borland's character-mode IDE for Pascal. If you can track down RHIDE, that's a similar look-alike IDE that runs in MS-DOS (it does require a 386+ since it uses a DOS extender) and compiles C/C++ using gcc. (One version of it is distributed as part of the FreeDOS "development" packages.) It would be nice to recreate a broadly similar look and feature set starting from a modern text-mode editor such as the newly released MS-EDIT, aiming for modern IDE infrastructure like LSP and DAP. Such a project could find quite a bit of use for, e.g., remote system administration tasks over ssh.
This article is stuck in a very wide valley: perhaps somewhat familiar with the domain concepts, but nowhere near deep enough to draw the conclusions being drawn. It ends up close to completely wrong.
The primary tradeoff of initcwnd is setting a reasonable window before you've learned anything about the path. BBR has little to say here because it takes, in relative terms, quite a while to go through its phases. An early BBR session is therefore not really superior to other congestion controls, because that is not the problem it is focused on.
Jacking up the initcwnd, you start to risk tail loss, the worst kind of loss for a sliding window, especially in the primordial connection. There are ways of trying to deal with all that, but they amount to loss prediction.
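The cost being traded off can be put in rough numbers. An idealized slow-start model (illustrative only; real stacks grow per ACK and interact with pacing, delayed ACKs, and loss) of how many round trips it takes to deliver a response at different initcwnd values:

```python
def rtts_to_deliver(total_bytes, initcwnd, mss=1460):
    """Round trips to send total_bytes under idealized slow start:
    the window doubles each RTT, no loss, no pacing."""
    cwnd, sent_segments, rtts = initcwnd, 0, 0
    while sent_segments * mss < total_bytes:
        sent_segments += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

# A 100 KB response:
for icw in (3, 10, 30):
    print(icw, rtts_to_deliver(100_000, icw))
# initcwnd 10 saves ~2 RTTs over the old initcwnd of 3 here; tripling
# it again saves only ~1 more, while a lost tail segment in that first
# burst can cost a whole retransmission timeout.
```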
If you are a big enough operator, maybe you have some a priori knowledge to jack this up for certain situations. But people are also reckless and do not understand the tradeoffs or overall fairness that the transport community tries to achieve.
As other comments have pointed out, QUIC stacks also replicate congestion control and other algorithms based on the TCP RFCs. These are usually much simpler and lack features compared to the mainline Linux TCP stack. It's not a free lunch, and it doesn't obviate the tradeoffs any transport protocol has to make.
Google has probably sent data to almost every /24 in the last hour. Probably 99% of their egress data goes to destinations where they've sent enough data recently to make a good estimate of bottleneck link speed and queue size.
Having to pick a particular initcwnd to be used for every new TCP connection is an architectural limitation. If they could collect data about each destination and start each TCP connection with a congestion window based on the recent history of transfers from any of their servers to that destination, it could be much better.
It's not a trivial problem to collect bandwidth and buffer size estimates and provide them to every server without delaying the connection, but it would be fun to build such a system.
> It's not a trivial problem to collect bandwidth and buffer size estimates and provide them to every server without delaying the connection, but it would be fun to build such a system.
Tons of fun. Sadly, I don't have access to enough clients to do it anymore.
But here's a napkin architecture. Collect per-connection stats and report on connection close (you can do a lot with tcp_info, or the QUIC equivalent). That goes into some big map/reduce-whatever data pipeline.
The pipeline ends up with a recommended initial segment limit and an MSS suggestion [1]; you can probably fit both into 8 bits. For IPv4, you could probably just put them into a 16 MB lookup table... shift off the last octet of the address and that's your index into the table. For IPv6 it's trickier, the address space is too big; there are techniques though.
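A sketch of that IPv4 table under the stated assumptions: one byte per /24, the two hints packed 4+4 bits (the packing and the hint code points here are invented for illustration):

```python
import ipaddress

TABLE = bytearray(1 << 24)  # 16 MB: one byte of hints per /24

def idx(ip: str) -> int:
    # "Shift off the last octet": the upper 24 bits select the /24.
    return int(ipaddress.IPv4Address(ip)) >> 8

def store_hint(ip: str, initcwnd_code: int, mss_code: int) -> None:
    # 4 bits each; both code spaces are made up for this sketch.
    TABLE[idx(ip)] = ((initcwnd_code & 0xF) << 4) | (mss_code & 0xF)

def load_hint(ip: str):
    b = TABLE[idx(ip)]
    return b >> 4, b & 0xF

store_hint("203.0.113.7", 9, 2)
assert load_hint("203.0.113.9") == (9, 2)  # same /24, same hint
```

A zero byte doubles as "no data for this network", which falls back to whatever defaults the stack would have used anyway.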
At Google scale, they could probably regenerate this data hourly, but weekly would probably be plenty fast.
[1] This is its own rant (and hopefully it's outdated), but the MSS on a SYN+ACK should really start at the lower of what you can accept and what the client told you they can accept. Instead, the consensus has been to always send what you can accept. But path MTU discovery doesn't always work, so a lot of services just send a reduced MSS. If you have the infrastructure, it's actually pretty easy to tell whether clients can send you full-MTU packets or not... with per-network data, you could have four reasonable options: reflect the sender, reflect minus 8 (PPPoE), reflect minus 20 (IPIP tunnel), reflect minus 28 (IPIP tunnel and PPPoE). If you have no data for a network, select at random.
That's about local link loss; at best you get bufferbloat from confusing the wired desktop and the wireless laptop that share an 800~1200 Mbit/s DOCSIS downlink.
Or worse, different service tiers from a neighborhood getting bundled via CGNAT; though that's a clear argument for IPv6.
Spiders that send too much traffic tend to get blocked, so they are already having to contend with some sort of coordination. Whatever system they’re using for that coordination (server affinity being the simplest) can also propagate the congestion windows.
They also miss the fact that even with an initcwnd of 10, the TLS negotiation isn't going to consume it, so the window starts growing long before content is actually sent.
Plus there’s no discussion of things like packet pacing.
TLS without 0-rtt gets you one round trip of not too many packets, maybe 4 unless your certificates are really large. That helps your initial window for content, but not by a whole lot.
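Rough arithmetic for that, assuming classic slow start grows cwnd by one segment per ACKed segment (idealized: no delayed ACKs, no appropriate-byte-counting quirks):

```python
def cwnd_after_handshake(initcwnd: int = 10, handshake_segments: int = 4) -> int:
    """Congestion window when content starts flowing, assuming the TLS
    server flight fits inside the initial window and every one of its
    segments gets ACKed before the first content is sent."""
    assert handshake_segments <= initcwnd
    return initcwnd + handshake_segments

print(cwnd_after_handshake())  # → 14: a modest head start, not a doubling
```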
OS/2 had an evolving marketing claim of "better DOS than DOS" and "better Windows than Windows," and both were believable for a time. The Windows claim collapsed quickly with Win95 and its sprawling APIs (DirectX, IE, etc.).
It exists in that interesting-but-obsolete interstitial space alongside BeOS: very well done single-user OSes.
Power and z are each multi-billion-dollar businesses. Banking and other high finance is the stronghold for both. IBM still seems proud of z; Power seems merely tolerated these days, which is a shame because it is a nice ISA and the systems are very nice too.
> in what way has it gone uphill versus just using Debian?
Their lawyers' willingness to risk shipping pre-built ZFS kernel modules (that are always in sync with the kernel). Pretty important if you're into that sort of thing: it's easier to remove cruft once post-install than to keep an eye on DKMS for years (making sure it hasn't disassembled itself and continues working).
Anthony Marinelli (the guy behind MJ's Thriller synth parts) and Tim Pierce (an accomplished session guitarist) riffed on this recently: https://youtu.be/OzuADujnEhQ?t=1205. The whole video is a treat, as are most of Marinelli's.
When I see AI salesmen thinking they can attack art, I think they naively see it as inherently imprecise or arbitrary, and assume that because their technology has these properties it will easily cross over. This is going to lead to a lot of faux pas (remember NFTs?). It would be prudent to attack problems where some kind of correctness can be mechanically judged... OCR and software development are reasonable verticals, at opposite ends of complexity, to focus on, and to pursue artistic rendition in a more experimental way, letting artists approach the technology and show how it is useful.
These things won't replace rock stars; they will (or at least want to) replace the vast majority of the industry, which is TV shows, movies, ads, etc., which the disclaimer at the end alludes to.
The thing I notice time and again in all this is that they want you to believe technology is displacing labor at one end, but there's usually a lot of retraining consumers/society to accept something qualitatively different, to cover up or re-conceive what was. That's not a moral judgement, just an observation. But the end result is usually the same: some group of current or wannabe oligarchs playing musical chairs at the top without regard for the rest of the system.
The "Don’t Let Me Go" and "Yellow Bus Jam" examples made me laugh out loud. This kind of thing would be great for a cyberpunk game that dynamically generates a reality, with (unintentional?) faux pas and jank.
If you are an artist you could always slice, embellish, or otherwise process the outputs into something, so I guess it's not totally silly. But I get at best real-estate-video vibes, or unironic early-'90s clip-art-and-Comic-Sans vibes. And presumably some team of expensive marketers worked hard to select these examples, which is doubly hilarious.
I remember doing something similar with Bull, a now obscure but once somewhat formidable mainframe and UNIX company.
I had a DPX/20, which for that model was just a rebadged Micro Channel IBM RS/6000. I was 12 and trying to figure out how to use it. I knew what I was in for, that I needed to load AIX, but the "firmware" on these is bare bones, and once it passes off control you don't have much to go on if you don't know whether your console is working in the first place.
Given what I now know, the folks at Bull were surprisingly kind when I called: they passed the call around until it landed with an old-timer who was familiar with the model, somewhat bemused that I had one and was trying to use it, but without much way to help me remotely.
Eventually someone on Usenet clued me in that I needed more pins on my serial cable connected, and it all turned out to be a nice learning opportunity building the GNU toolchain and AMP stack on it.
There was some serendipity years later when I moved back to Phoenix after school and joined a newly formed PostgreSQL user group. Bull was trying to pivot into the open database market and still had a huge campus in Phoenix, where they held the meetings. It seemed sparsely occupied, and the writing was on the wall that it was all going away (it eventually did, a handful of years ago), but I was still a bit wide-eyed, now having some notion of the campus's historical significance to Honeywell, the Multics project, and other things. And my naive call from back then was almost certainly answered in that facility, not far from where I had been struggling.
Groupe Bull? I remember them. I wonder how we can give kids today that same sense of wonder and joy of tinkering we had. I guess today's equivalent would involve robotics, since personal computers are all played out.
Maker spaces seem to have the right hacker ethos around explore, tinker, finish.
I'm on a bunch of retrocomputing discords where youth still find obscure old systems, typically Sun, SGI and the parallel universe IBM systems (mainframes and as/400 line), and manage to figure them out.
The most astonishing thing about this is that it was done under forbearance of the ISA: PA-RISC was basically frozen in 1996, and they were able to ride it at the top for years. For instance, PA-RISC doesn't really have appropriate instructions for desirable atomic operations. But it led to working on the right problems: a hardwired-control RISCy chip that happened to be philosophically similar to the survivor, POWER.
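To illustrate the atomics gap: PA-RISC's one atomic primitive is ldcw, "load and clear word," so anything richer has to be composed from a lock built on it. A toy Python simulation (the LdcwWord class is an invented stand-in for the hardware; a real kernel, e.g. Linux on PA-RISC, hashes addresses into a small array of such spinlocks):

```python
import threading

class LdcwWord:
    """A word supporting only plain load/store plus atomic load-and-clear.
    The internal lock models the bus atomicity the real instruction
    provides; it is not itself the spinlock being built."""
    def __init__(self, value=1):
        self._value = value
        self._bus = threading.Lock()

    def load_and_clear(self):
        with self._bus:
            v, self._value = self._value, 0
            return v

    def store(self, value):
        self._value = value

class LdcwSpinlock:
    """Spinlock in the PA-RISC convention: nonzero = free, zero = held."""
    def __init__(self):
        self._word = LdcwWord(1)

    def acquire(self):
        while self._word.load_and_clear() == 0:
            pass  # spin: someone else zeroed the word first

    def release(self):
        self._word.store(1)

# Richer atomics (add, compare-and-swap) are then plain loads/stores
# guarded by such a lock:
lock = LdcwSpinlock()
counter = 0

def atomic_add(n):
    global counter
    lock.acquire()
    counter += n
    lock.release()

threads = [threading.Thread(target=atomic_add, args=(1,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # → 8
```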