That data had to be encoded in a certain way, which would lead to unavoidable exploitation in every conforming implementation. For example, PDF permits embedded JavaScript and… that has not gone well.
The last article I read about him implied that he’s intent on working and that working with him was… intense. But that was a few years ago.
I like to imagine there’s an inner sanctum in a secure sub-basement of Microsoft where a couple dozen cracked kernel developers work quietly… except when Dave Cutler asks them to come into his personal lab through the three-foot-thick blast doors and man-trap so he can yell at them about a bug he found.
I had a job at a place in college, back in 1997–2000, that was run by a big DEC Alpha server running VMS. VMS was dying then.
I was just a lowly kid programmer working on a side project, so I can't tell you whether it's still uniquely good at something to justify its usage today. It worked. But it was weird and arcane (not that Unix isn't, but Unix won) and using it today for a new project would come with a lot of friction.
HP LaserJet 4s squarely date TFA’s prank to the early-to-mid ’90s. I can agree with you that lame corporate April Fool’s Day jokes on the Internet are overdone, but 1990s-era campus sysadmin’ing ruled. Sysadmins kept a close eye on things to ensure no one (especially the servers) got hurt, but computer geeks were far from mainstream and a spirit of playful tolerance and taking-care-of-our-own prevailed. Well do I remember telnetting to sendmail on port 25 and sending spoofed email to classmates…
The university-wide email was probably too much but displaying INSERT 5 CENTS on an HP LaserJet 4 for a day is great.
I am by background a C++ developer, not a web developer, but Tauri has intrigued me as a cross-platform app framework.
The things that have been frustrating are the documentation, the variety of options (pick a frontend framework, pick a build tool), and the sense of being on your own once you get Hello, World! working. I wish there were more examples of using one or more of the Rust web service frameworks, of how to manage state, and so on. It has the feel of an older-style open source project where the developers are in love with offering options rather than committing to a strong developer experience.
The allure of Tauri for me is the combination of creating simple portable apps, maybe one day being able to get a frontend JS developer to improve UI/UX, using Rust on the backend (with the ability to bundle in other native libraries), and leaning more heavily on cargo than on the godawfulness of the JS-flavor-of-the-minute build tooling.
I can vibe-code up some basic UI but it all feels a bit precarious.
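As far as I can tell, the "manage state" part ends up looking roughly like this. This is just a minimal sketch of a Tauri command plus managed state; the Counter/increment names are mine for illustration, and details differ a bit between Tauri 1.x and 2.x:

    use std::sync::Mutex;
    use tauri::State;

    // App-wide state handed to Tauri; any command can ask for it by type.
    struct Counter(Mutex<u32>);

    // Exposed to the frontend, callable via invoke("increment") from JS.
    #[tauri::command]
    fn increment(counter: State<Counter>) -> u32 {
        let mut n = counter.0.lock().unwrap();
        *n += 1;
        *n
    }

    fn main() {
        tauri::Builder::default()
            .manage(Counter(Mutex::new(0)))
            .invoke_handler(tauri::generate_handler![increment])
            .run(tauri::generate_context!())
            .expect("error while running tauri application");
    }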
It's possible I don't know what I'm talking about, but I believe Dioxus is also targeting cross-platform app development and might be a reasonable fit for you. Here the frontend is written in Rust using their provided UI framework (designed with concepts from the web DOM and lessons learned from existing web frameworks), and it might be a more familiar approach to a C++ dev who has used Qt or whatever before.
I understand it also has experimental support (possibly still under development?) for a Servo/Verso rendering engine, which is why I mention it.
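To give a flavor, a counter component in Dioxus looks roughly like this. This is a sketch from memory against a recent Dioxus release, so the exact names (launch, use_signal, the #[component] attribute) may have shifted between versions:

    use dioxus::prelude::*;

    fn main() {
        // Launches as desktop, web, etc. depending on the enabled feature flags.
        dioxus::launch(App);
    }

    #[component]
    fn App() -> Element {
        // Reactive state; updating it re-renders the component.
        let mut count = use_signal(|| 0);

        rsx! {
            h1 { "Count: {count}" }
            button { onclick: move |_| count += 1, "Increment" }
        }
    }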
Not compliant enough to browse the open web without experiencing glitches, but compliant enough for your own frontend code that you test against Verso/Servo.
I imagine most users will continue using Tauri with native webviews. But if differences between the native webviews ever become an issue, you now have the option to ship a Verso webview for more cross-platform consistency.
The world had -just- gotten its parents and aunts and uncles and maybe some grandparents trained up on what an IP address is and looks like, and THEN the IETF approved this idiotic standard that said “forget all that, now it’s a bunch of hexadecimal digits with these confusing rules about colons to be ‘user-friendly.’ Also, be sure not to confuse your IPv6 address with the MAC address that’s printed on your devices.”
Whatever the technical merits/demerits of the protocol, I believe this brain-damaged approach to describing addresses is behind the “failure” of IPv6.
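For the record, the "confusing rules about colons" boil down to two things: leading zeros inside a 16-bit group may be dropped, and exactly one run of all-zero groups may be collapsed to "::". A tiny sketch using Rust's standard library (nothing IPv6-specific about the choice of language, it's just what I had handy) showing that both spellings parse to the same address:

    use std::net::Ipv6Addr;

    fn main() {
        // Fully written out: eight groups of four hex digits.
        let long: Ipv6Addr = "2001:0db8:0000:0000:0000:0000:0000:0001".parse().unwrap();
        // Same address: leading zeros dropped, the zero run collapsed to "::".
        let short: Ipv6Addr = "2001:db8::1".parse().unwrap();

        assert_eq!(long, short);
        println!("{long}"); // prints the compressed form: 2001:db8::1
    }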
MAC addresses seem to thrive, both in Ethernet and Bluetooth. Sure, it's only 48 bits, but it's a bunch of hexadecimal digits with colons.
IPv6 failed because (1) it demanded too much of the networking hardware, (2) it was a stupidly complex beast compared to IPv4, and (3) it had the catch-22 of requiring both producers and consumers to upgrade at the same time for it to even be usable, let alone widely beneficial. (Discounting 6to4 and other local patches.)
However, since I upgraded my AP, I realized my ISP actually does support IPv6 really well, so perhaps we're close now...
TFA talks about addresses being too long and the compromise to support variable-length addresses. 64 bits would have been fine.
Grandparent is engaging in the classic tech industry tradition of talking confidently about something they have very little knowledge about. Surely old people learning hex isn't what's stopping IPv6 adoption, but why not: they explained IP addresses to their Nana once at Christmas (you never visit!) and she nodded politely, so now they're an expert on networking.
I don't think it can be argued that IPv6 was more complex or demanded more from the networking hardware.
On the contrary, the IPv6 headers have been explicitly simplified a lot in comparison with IPv4.
The increased complexity for the networking hardware has come only from the requirement of supporting both IPv4 and IPv6, instead of only one of them.
In my opinion, the most significant mistake of IPv6 has been that the IPv4 address space has not been considered a subset of the IPv6 address space.
Had this been done, it would have been relatively easy to freely mix, on the same network, ancient IPv4 equipment that had never been updated with equipment implementing IPv6, and the gradual transition between the two protocols would have been straightforward.
> In my opinion, the most significant mistake of IPv6 has been that the IPv4 address space has not been considered a subset of the IPv6 address space.
This suggestion has been brought up often over the past few decades.
Remember that every IP packet has a destination and a source address. There is no physical way for a v4-only host to communicate directly with another host whose address comes from a larger address space. Doing so requires NAT, which can be and is used today for v4-to-v6 connectivity.
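To illustrate the asymmetry with Rust's standard library address types (just an illustration of the address spaces, not of any wire format): every v4 address has a spot inside v6 as an IPv4-mapped address, but a general v6 address simply has no v4 representation a v4-only host could put in a packet.

    use std::net::{Ipv4Addr, Ipv6Addr};

    fn main() {
        // Any v4 address can be embedded in v6 as ::ffff:a.b.c.d.
        let v4 = Ipv4Addr::new(192, 0, 2, 1);
        println!("{}", v4.to_ipv6_mapped()); // ::ffff:192.0.2.1

        // The reverse only works for that tiny mapped subset; a
        // general v6 address has no v4 form at all.
        let v6: Ipv6Addr = "2001:db8::1".parse().unwrap();
        assert_eq!(v6.to_ipv4_mapped(), None);
    }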
I don't think that it's a foregone conclusion that IPv6 "failed".
I'm rolling out services on v6 only now for my hobby projects. Just next week, I'll be switching over our company backups to v6 because I can have a dedicated port 22 on the backup machine instead of port forwarding. I've been running company VPNs v6-only (inside the VPNs) for some years now.
Our corporate services and my private hobby projects are all v6-enabled (in addition to v4), the latter since 2018 or something.
A non-techie friend of mine has CGNAT at home on v4, and simply used his consumer-grade router to open a port to the v6 address of his NAS. He and some friends and colleagues from his uni had been using that for half a year before they stumbled into a situation where somebody had no v6 address and could not access the NAS.
Aunts and uncles and grandparents rarely have to deal with MACs. Back in the day it was still quite common to have to manually enter some IP addresses, although it was probably not common for "non-technical" people to understand anything about IPv4 addresses either.
I, for one, still don't really understand IPv6 addresses to this day, and admittedly haven't tried much. I typically disable IPv6 at the kernel level because it's been nothing but trouble.