That's a pretty snarky thing to say about Apple. They were arguably the pioneers of OS UX... granted, it's not the end-all, be-all, but still. You could do worse.
Who is "they"? The employees at Apple when the HIG was first published in 1986, 40 years ago? That Apple is dead; what you see before you is an empty, rotted husk.
When I began at Apple in 1995, we followed "Tog on Interface" to the letter. It was not uncommon for lunch to turn into an argument over what the Right way was.
I watched as Steve Jobs came back to Apple—he really took hold of the reins of UX (aided by his team of designers).
Personally (and I say this knowing it is often a matter of taste), I didn't care for a lot of it.
A simple example: the URL field of Safari should have been, to my Tog sensibilities, an editable text field only. Perhaps somewhere (below? to the right?) you might include a progress bar to indicate the page loading. But a designer (whom I will not name, ha ha) came up with a combined text field/progress bar. To my eye it looked as though, as the page loaded, the text was being selected!
Jobs loved it though.
It was then I think that Apple departed "Tog" for these "one-off" UX experiments.
I have since rationalized this move away from a standard: with the advent of the web, the customer is now being bombarded with all manner of UX anyway, and so ought to be comfortable with one-off UX.
(Thankfully I see that now we have a thin line that seems to grow along the lower edge of the URL field.)
First is not the same as best. First is not even the same as good. First is only first. Just because someone was the pioneer doesn't mean they should be considered a positive example.
Someone having introduced a concept decades ago in no way implies that their current implementation of the concept is at all ideal or market-leading.
“I would argue that…” is a weaker statement, because it ends with an implied “…but since I don’t care that much, I’m not ‘seriously’ arguing that.” It’s not at all equivalent to the strong statement “I argue that…”, which has no such qualifier.
Why cure yourself of useful conversational nuance?
Even though I now run it full time personally, I still think Linux has a massive problem that Windows and macOS don't have: app development. You can't target one thing; you have to target all the things and bloat your app like crazy, so you might as well just ship a Chromium-based app because it's practically the same thing anyway (you're shipping an entire userland, because the native one isn't stable anyway).
If there were instead 10 viable and competitive desktop operating systems with no clear leader, and macOS and Windows were just there among the others, wouldn't you try to target as many as you can? Maybe we can think of Linux itself as a microcosm of the OSes we never got to have, where you have to target as many variants as you can so that no dominant force emerges. It ain't pretty, but it's what we have.
The part of the "microcosm" that prevents you from being able to easily compile a binary and have it run on a wide variety of distros doesn't have any upside I can see. The fact that you have to jump through hoops to target particular glibc symbol versions and that a stable OpenSSL ABI gets rug-pulled in new distro versions every few years aren't key to any benefits of distro/OS diversity. What would suffer if gcc/clang had a `--min-glibc-version=...` flag and OpenSSL settled on a long-term stable ABI subset for establishing TLS connections?
The way this all gets worked around is that people come up with stuff like Docker or Flatpak that ship their own copies of as many dependencies as possible. The disadvantage is that now I can't just patch an OpenSSL vulnerability by updating the system's copy of OpenSSL, the way Windows can for all software built on SChannel.
This is just not true. You can still write GTK2 or SDL apps, you just need to package your app for the target distro or open source it because it's an open-source-first ecosystem.
If you're looking for binary stability and want to ship your app as a file, ELF is extremely stable. If your app accesses files, accesses the network through sockets, and uses stable libraries like SDL or GTK, it will work fine as a regular binary and be easy to ship. People just don't want to write their apps in C, when the operating system is designed for that.
Many native apps like Blender, Firefox, etc. ship portable Linux x64 and arm64 binaries as tar.gz files. This works fine. You can also use Flatpak if you want automatic cross-platform updates, but yes, the format is unfortunately bloated.
It's not that easy to ship a JavaScript app on other OSes either and electron apps abound there too.
What does ELF being stable or people not writing apps in C have to do with Linux binary compatibility? No matter what language you use, it’s either dynamically linking to the distro’s libc or using Linux system calls directly.
Also, I recommend taking a gander at what the Linux build process/linking looks like for large apps that “just work” out of the box like Firefox or Chromium. There’s games they have to play just to get consistent glibc symbol versions, and basically anything graphics/GUI related has to do a bunch of `dlopen`s at runtime.
Flatpak and similar take a cop-out by bundling their own copies of glibc and other libraries, and then doing a bunch of hacks to get the system’s userspace graphics libraries to work inside the container.
One of my CS professors told the story of when Microsoft demonstrated their implementation of the Korn shell at a USENIX conference. The gasbag on stage told a guy in the question line that he (the questioner) had misinterpreted something in the software's original intention. The gasbag figured out something was up when half the room cracked up: the guy at the mic was David Korn, author of the Korn shell.
If you watch the video, it actually falls several sidewalk tiles away and he has to go pick it up. From the text of the blog, I had assumed he was using AI to actually land it directly on a person’s head, which would’ve been crazy impressive.
I mean, the site is pretty blatant viral marketing for both his drop-shipped-hats-from-china side hustle and (I'm going to go out on a wild limb here and guess) his employer's ML-dataset-management-related startup.
I wish cool stuff like this wasn't always sullied by the slimy feeling from it only being done to draw attention to some startup sitting smack in the middle of the trendiest buzzwords of the month.
OpenCV was not the "AI" here; the "AI" was a computer vision model trained on the Roboflow website that he mentioned multiple times, and that he used in the line commented with "# Directly pass the frames to the Roboflow model".
I can assure you that if you develop a system to accurately place objects (bombs, say) on top of people and post the code on the open internet for everyone to see, the government will indeed have some critical questions for you.
Accurately placing heavy, aerodynamic objects onto people when you start out directly above them is not very difficult. The hard parts are either placing the object on top of the person from a few hundred or thousand miles away, or - in this case - placing an object that tends to flutter rather than follow a ballistic trajectory.
I can assure you that you have no idea what you're talking about, starting with the fact that you obviously didn't watch the video.
It isn't aiming anything. It isn't adjusting for anything. It's just dropping from a stationary point.
The ML isn't used for anything other than a simple "is there the thing I was trained to look for within this area?" It's basically an ML version of something one could pretty easily do in OpenCV.
There's NOTHING about this useful for aerial bombing, which involves dozens of problems much harder than "this is the spot you should aim for."
There are probably dozens of smartphone apps for helping marksmen calculate adjustments that are about a hundred times more complicated, and more useful for (potentially) hurting people, than this.
I can't stand people who act like it's reasonable for the government to monitor and harass people for stuff like this. The second our government is harassing him or the SMH guy, I'm moving to Canada.
You've replied to somebody talking about "if somebody developed (something not in this blog post)" with a long angry rant, as if they had imagined the blog post claimed to have developed that thing.
It's not that they haven't read the article; they are commenting on a thread that is musing about how interested the government would be if (IF!) someone developed what the article title implies was developed but, in reality, wasn't.
The RC plane fandom on YouTube has started to manufacture and drop fake bombs onto miniature targets. The bombs even have fins. I kind of wonder how long until they start adding electronics and flaps to guide the bomb, and how far they can get before the feds start knocking on their doors. I'd be interested in working on it, but I'd prefer to keep my TSA PreCheck.
Now if you had terminal guidance... Put flaps on the hat, and use shape-memory alloy wire and a coin cell to actuate them. The hats follow a laser beam projected by the drop unit. Minimal electronics required in the hat. This is how some "smart bombs" work.
> Imagine using AI to drop an object and it falls perfectly where you want it.
There is a fantasy series that depicts this as a game that two young gods would play together when they were growing up. (Or rather, since one of them had vastly superior foresight to the other one, he'd bully his brother into playing with him.)