Clearly an attack on general purpose computing materialized, but it came about in a very different way from what was predicted here. If I had to summarize the overall thesis of the article it would be this: sectors of the economy other than the tech industry want computers to stop running programs that lose them money, and the government wants computers to stop running programs that break the law. In the future, with Hollywood as the vanguard, laws and regulations will stamp out general purpose computing and replace that model with more easily regulated computing appliances with a clearly defined purpose. To quote the article on how computers and an open internet would be stopped:
>Regardless of whether you think [things like 3D printed guns] are real problems or hysterical fears, they are, nevertheless, the political currency of lobbies and interest groups far more influential than Hollywood and big content. Every one of them will arrive at the same place: “Can’t you just make us a general-purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”
I don't think that was an accurate prediction of the past 10 years. Hollywood lost the copyright wars: even the most locked down TV dongle will happily play pirated movies with the right apps, often available from the official app store. SOPA failed, and to this day the US is free from IP and domain blocking (though many other democratic countries are now under court ordered blocking regimes). UEFI didn't kill the freedom to run any OS on your PC. User-modifiable device firmware wasn't banned, and thanks to the right to repair movement the law is actually being used to open up devices.
Instead, the threat to general purpose computing came from the tech industry itself: Apple, Google, and (to a lesser extent) Microsoft, the same companies that used their influence to kill SOPA, locked down devices and operating systems to protect their own profits, not to protect non-tech sectors or to appease regulators.
> the government wants computers to stop running programs that break the law
I thought the government was more concerned with surveillance than law-breaking programs. Or were you saying that they want us to stop running things like VPNs?
In the article Doctorow talks about both surveillance and regulatory concerns, like vehicle safety and the ability to create weapons with 3D printers. If there were a law requiring surveillance backdoors in operating systems (as he speculated repressive governments would enforce via UEFI), then both concerns come down to programs that break the law.
The guns are really easily tackled though. Instead of restricting guns (and parts thereof), just restrict ammunition.
I know some people can create ammo but they need the casings which could also be restricted. And really, whoever wants to can create a weapon from a few pipes and other easily fabricated parts. Shinzo Abe's murderer did just that.
3D printers are not gamechangers here. They just make things a little easier.
Well, in the US there's currently a lot of effort going into attempts to tackle "ghost guns". Restricting ammunition may be better from a practical perspective (I don't know for sure) but it's harder from a 2A perspective. But none of it is targeting the 3D printers as Doctorow feared. I don't know if this is because the impact of 3D printing on homemade guns and other potentially lawbreaking activities has been limited, or something else. I do think there's a general suspicion of having devices try to do something about lawbreaking activity they might enable; think of the backlash against Apple's CSAM scanning proposal. This might be Hollywood's fault: the idea of devices playing cop immediately makes people think of their computer ratting them out for watching a pirated movie.
(I know almost nothing about guns and this comment is not advocating for any specific policy)
> they need the casings which could also be restricted
Are casings particularly hard to make? It seems like anyone with the tools and skills to make a gun from parts could also make little cylinders that fit in that gun. Or just do the cannon thing, have a vaguely spherical ball and some explosive stuff packed behind it.
Yeah, perhaps in the US this might not work. I was thinking more of Europe, where even ammunition isn't readily available to buy in the first place (and people don't have stockpiles, except highly regulated sport shooters).
But I thought the part of the cartridge where the firing pin hits gets damaged, and that the primer there isn't easily replaced. I'm probably wrong; I've never even held a real gun.
Ammunition is easy to make when all components are readily available. Each round of modern ammo requires a bullet, a case, a load of gunpowder, and a primer. Bullets are relatively easy to cast. Gunpowder can be manufactured, but making smokeless from scratch is not easy, while the traditional black gunpowder gums up semi-auto action very fast. Cases and especially primers are very hard to make right.
Smokeless for semi-automatic is strangely specific. This is not what people will make if you somehow restrict ammo casings (which you can make, and the head is simpler), so I am not sure why those requirements equate to the restriction being effective. When talking about technical barriers, it is considered obvious how they can and will be circumvented. Ammo is the same, but somehow you have made the leap that dropping standards will be a deterrent. This is ridiculous.
Smokeless for semi-automatic reflects the historical experience of trying to use black powder in repeating guns (even ones such as the mag-fed bolt-actions of the late 19th century).
As for technical barriers, it is obvious that they will be circumvented, yes - but experience also shows that most people don't bother, which is exactly why such barriers are still promoted by the industries that benefit from them. It's the same story for ammo and guns.
Note that I haven't said anything about the desirability of such barriers in either case. FWIW I actually own tools to manufacture "ghost guns". Nevertheless, it seems obvious to me that any such restrictions on ammo and ammo components would be effective in exactly the same sense that DRM is effective - by reducing the number of people involved in the activity by several orders of magnitude.
> As for technical barriers, it is obvious that they will be circumvented, yes
> any such restrictions on ammo and ammo components would be effective in exactly the same sense that DRM is effective
My original objection was to the assertion that guns are "easily tackled". You responded by moving the goalposts to "it's partially effective". Your responses are not compelling reasons to modify my stance.
> I don't think that was an accurate prediction of the past 10 years. Hollywood lost the copyright wars: even the most locked down TV dongle will happily play pirated movies with the right apps
Did they?
Remember how "music lockers" like Google Play Music and the original incarnation of Amazon Music used to be a thing? Yeah, Big Music killed that. The closest thing we have now is music matching services that will happily nuke your music when the publisher delists those compositions for whatever reason.
Plex and the like make organizing and consuming pirated movies easy, but finding the source of those movies is still not trivial, and people _still_ get scary emails from their ISPs if the ISP detects that enough of their traffic is being used for this purpose.
The Netflix app does not allow AirPlay (used to) or Bluetooth streaming (also used to).
It's still very much a cat and mouse game, and the cat is getting smarter every day.
> UEFI didn't kill the freedom to run any OS on your PC.
With the new Secure Boot rules this ‘goal’ is ever closer. By default, new computers must now only boot the Microsoft boot loader, which can only boot Windows.
The worry with secure boot would be manufacturers taking away the ability to manage/disable it on consumer devices. This rule doesn't seem to get any closer to that scenario; it's a change in the default configuration of "secured core" systems marketed for enterprise use. Secure boot can still be managed or disabled, and the change only impacts big Linux distros that got MS to sign their bootloaders. For the general case of being able to run any OS, it's been necessary to go into BIOS settings since computers started shipping with secure boot enabled by default, around the time of the Windows 10 release.
General-purpose computing is just too economically valuable to be gotten rid of. Those who might otherwise want all machines to be locked down have no choice but to acknowledge this. The FUD has been going around for a decade and a half, and we've only ever seen it play out in small corners of the industry, not in the big picture.
Dunno. Apple and Microsoft seem to be heading towards forcing users to install things only from their app store. Apple checks every binary you run against a blacklist, which pretty much flew under the radar ... till the service broke.
Do you really think there would be a big backlash if Apple and Microsoft prevented "sideloading" and required all application installs to be from their official app stores? It's already pretty much the default for phones, and tablets.
We aren't there yet, but it does seem to be the direction we are going.
If this precluded the ability to e.g. install a program via curl or brew, or even just a .dmg you download from your browser, you bet there is gonna be hell to pay.
> It's already pretty much the default for phones, and tablets.
It's unfortunate they made it that far with smartphones, but there is much less precedent for thinking of a phone as a "general purpose computer" in comparison to a laptop/desktop.
I'm sure the complaints will be loud, especially on this site. Will it be a large enough fraction of users to make apple care? Not so sure.
Increasing security/limitations now means that you can't install things in your /home any more (at least not without root). I used to be able to install opensc for using a badge with ssh. Now I have to install it as a cask ... which requires root/admin privs. Similarly, if I say brew install iterm2, when I run it it just says "iTerm.app can't be opened because Apple cannot check it for malicious software".
In the case of Apple in particular, I think the bigger cry will be from tech companies shipping paid apps who see this as a first step towards expanding Apple's 30% cut in the app store to all native applications.
You might get lucky at first, while they're still slowly turning up the heat on the proverbial frog's bath. However, the way UI/UX design is moving, options of any sort are being eroded away. Just so things can be "easy".
Take media sharing / device discovery for example. It has gotten to the point that just about any consumer product either has to communicate with some external server to find devices in your home, or use one of the multitude of zeroconf / airplay / mDNS / etc type protocols. You can't even build a home network with more than one subnet in it any more. If I want my (most likely full of security holes) IP cameras to not exist on the same VLAN as my servers, well, good luck getting the two to talk to each other. Same for printers, media players, speakers, TVs, receivers, etc. If it isn't on the same subnet, it might as well not exist. Could this be easily solved with an option of typing an IP address into a config page? Yes. Does any product offer this? None that I've seen.
Sorry for the ranty example; I've been fighting that issue recently.
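To show what I mean by "just let me type an IP": the client-side fallback really is tiny. A rough Python sketch (the configured_ip field and the camera port are hypothetical, standing in for a "type the address here" option in a config page), not any vendor's actual code:

    import socket
    from typing import Optional

    def connect_to_camera(configured_ip: Optional[str], port: int = 554,
                          timeout: float = 5.0) -> socket.socket:
        """Connect to a camera, preferring a manually configured address.

        configured_ip is the hypothetical value from a "type the IP here" field.
        If it is set, we skip discovery entirely, which also works across VLANs
        where mDNS/zeroconf packets never arrive.
        """
        if configured_ip:
            return socket.create_connection((configured_ip, port), timeout=timeout)
        # Otherwise the only option is same-subnet discovery (mDNS and friends),
        # which is exactly what breaks once the camera sits on another VLAN.
        raise LookupError("no configured IP, and discovery is subnet-local")

    # Example (commented out): camera on another VLAN, reachable because we typed its IP.
    # stream = connect_to_camera("10.0.20.15")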
That's a critical difference between MacOS and iOS. The former has a gatekeeper that prevents you from running software from unapproved sources; it can be turned off. The iOS gatekeeper can only be disabled by subterfuge (including paying Apple for a developer account).
> If this precluded the ability to eg install a program via curl or brew, or even just a .dmg you download from your browser
I expect that governments will offer a "compromise" which is that you can run these "unapproved" apps, but they must be signed by a developer key which is tied to a domain name, and that domain name must be checked (by the OS) against a blacklist of banned applications/developers/websites.
That should be enough to block any encrypted messaging apps without backdoors, or apps like Tor, or bittorrent clients.
There could be a cat-and-mouse game as developers try to rename their apps, generate new keys, and register new domains, but when governments notice that their ability to censor is at stake, they will spare no expense on whichever intelligence agency or defence contractor is tasked with keeping the blacklists updated faster than any banned applications can reach mass adoption.
In parallel to this, governments will require that ISPs only let devices access the internet if they pass a "secure boot" check, which confirms that the device is running an operating system which enforces this blacklist.
We're probably less than 5 years away from some G7/EU country mandating this system, with the timeline only limited by the rate of adoption of technology like Windows 11 and Pluton. Older devices (and those running "unapproved" OSes) will be limited to specific ports and IP ranges, for "cyber-security" reasons.
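To sketch how little machinery the OS-side launch check I'm describing would actually need (all names here are hypothetical, and a hash comparison stands in for real code-signature verification against a key published at the developer's domain), something like this Python would do:

    import hashlib

    # Hypothetical, government-maintained blocklist of developer domains.
    BANNED_DOMAINS = {"securemessenger.example", "torclient.example"}

    def allowed_to_launch(app_bytes: bytes, developer_domain: str,
                          expected_digest: str) -> bool:
        """Policy check the OS would run before executing an 'unapproved' app.

        The digest comparison is a stand-in for verifying a real developer
        signature tied to developer_domain (e.g. a key published at that domain).
        """
        if developer_domain in BANNED_DOMAINS:
            return False  # refused regardless of whether the signature checks out
        return hashlib.sha256(app_bytes).hexdigest() == expected_digest

    # A valid "signature" doesn't help once the developer's domain is listed.
    app = b"\x7fELF..."  # placeholder binary contents
    print(allowed_to_launch(app, "torclient.example",
                            hashlib.sha256(app).hexdigest()))  # False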
Depends on how they would do it. If Apple were to replace brew with an official package manager which just works, most people wouldn't bat an eye.
The way Apple could do it is by introducing said package manager for iPadOS. iPadOS could soon grow to do everything macOS does, only with certain restrictions. If it's faster but cheaper, most people would then just buy it over a general-purpose laptop.
> General-purpose computing is just too economically valuable to be gotten rid of.
General-purpose computing is just a local optimum for profit extraction. Nevertheless it feels like guessing the correct answer for the wrong reasons. I welcome and embrace the FUD.
We might simplify the topic: Copyright is also a network - "all in the name of preserving Top 40 music, reality TV shows, and Ashton Kutcher movies."
Computers are a network even without the internet (you can carry floppies). This network does everything. It touches everything. We all want to pay for it. Our money goes towards making the network more useful.
The copyright network really has very few players interested in preserving and expanding it but we all pay for it.
> when the nations of the world send their U.N. missions to Geneva, they send water experts, not copyright experts. They send health experts, not copyright experts. They send agriculture experts, not copyright experts, because copyright is just not as important.
But it consumes the same budget. We have finite resources, finite expertise, and finite moving hands. It means people are dying, health is declining, agriculture is hindered, water is polluted, etc.
Meanwhile YouTube is deleting just about everything. Three strikes and everything you've ever published there is gone. Knowledge is lost. We don't know what impact this has, but it is safe to assume things won't be what they should have been if a maker can't see the videos important to their task.
It gets more obvious with scientific publications that cite lists of papers and books that are no longer available.
We need a new network to make sure people get paid for the useful things they make. It can simply be paid for in advance. It's not a challenge.
Coming up with the network is not the challenge; it hasn't been since the WWW. The challenge is creating the right incentives and getting popular adoption of the network.
Just shut it down. Because the monstrosity won so many small fights, it consists of many components that we can remove one by one until the entire chain of parasitic entities is gone. If anything of value is lost, we create solutions for those few things specifically. Since many depend on the income, we will have to tune it down gradually.
We can make donations, we can do crowdsourcing, and we can create funds that assign budgets by committee. That profits won't randomly grow madly out of proportion with the work is not a bug but a feature.
Roberts Space Industries suggests we are missing out in a huge way. First people were willing to fork over large sums of money for a game that was not made(!?) Then people were willing to fork over large sums of money for a game that was incredibly delayed(!???) And now people are actually enjoying being part of the development process(!????) Things are progressing towards a finished product, but it's so overly ambitious that complex works add up to very little progress. I never would have imagined people would put up with that, but in hindsight it seems to make sense.
"It’s tempting to stop the story here and conclude that the problem is that lawmakers are either clueless or evil, or possibly evilly clueless. This is not a very satisfying place to go, because it’s fundamentally a counsel of despair; it suggests that our problems cannot be solved for so long as stupidity and evilness are present in the halls of power, which is to say they will never be solved."
Sadly, I think Occam's razor applies here, past this statement.
>Today we have marketing departments that say things such as “we don’t need computers, we need appliances. Make me a computer that doesn’t run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games, and make sure it doesn’t run programs that I haven’t authorized that might undermine our profits.”
Today? More like the 1980s! Nintendo invented the whole App Store business model decades prior to the iPhone, and for reasons a lot more insidious than Apple's.
This whole article has aged like milk. Or, more charitably: it's a snapshot in time of the worries of people a decade ago, many of which never came to pass. Since this article was written, SOPA failed due to an unprecedented public protest, encryption bans have sort of sputtered and gone nowhere, and the EU is legally mandating that iPhones run unauthorized software while iPhone buyers clutch their pearls about it.
That last one is particularly revealing. You see, the security landscape for individual users is far bleaker now than it was back in 2011. A decade ago it did not make financial sense to hack the average user, and now it does. Regular users - or at least some of them - are the ones still prosecuting the War on General-Purpose Computing. Governments and copyright maximalists are on the side of keeping things open. The spymasters like being able to drop their 0days in plausibly-deniable unsigned bait apps, after all.
The case for copyright maximalists is less obvious. You might think that they'd prefer more DRM, because it makes things easier for them. And that's true up to a point. But there's also no way to standardize it. It just gives you monopolization everywhere. In fact, it's a repeat of the days of iTunes, where Apple had the entire music industry at their feet because they had so badly misjudged music downloads. The RIAA went DRM-free because it let them sell music the way they wanted to. And now we have Epic Games, one of the larger fishes in the game industry pond, trying to legally undermine the whole basis of the console industry so they can sell more Fortnite skins.
I do think the section about copyright being fundamentally boring still holds water. But it's less boring than it used to be back when the US pulled everyone into TRIPS and WIPO. A large portion of today's media consumption is on platforms that don't actually pay their way, at least not to the liking of the copyright maximalists. The war they fight is not on general-purpose computing, but on accessible social video platforms; the ones whose creators are crushed under the burden of having to constantly validate their legality, and complain rabidly about it.
This, too, has historical precedent. With a well-timed protest campaign, Google and Wikipedia were able to spook people into spamming their local Congresspeople, which stopped SOPA/PIPA from passing. Going back further, while Napster may have had some of the most inept and hilariously bad legal argumentation[1], the public took their side - not the law's. When the public does care about copyright, it's almost always to weaken it or keep it from getting stronger.
Of course this could also just get swept up into World War IV: The Culture Wars, like everything else inevitably does. All the international treaties that were designed to make copyright untouchable had their days numbered the moment that right-wing populists tried to wrestle countries out of the global trade system. Hell, there's a Republican congressperson that's genuinely trying to sell the GOP on going back to 28-year copyright terms purely to pwn the libs over at Disney.
[0] There have been several notable instances in which really scummy mobile app marketing tactics have passed through App Review for whatever reason.
[1] No, seriously, they tried arguing that they should be legally treated like radio because once someone downloaded a full and complete 320kbps MP3 copy of one particular hit song, they might go out and buy the full album full of other tracks they don't care about.
> Today? More like the 1980s! Nintendo invented the whole App Store business model decades prior to the iPhone, and for reasons a lot more insidious than Apple's.
I don't think Nintendo had any online consoles in the '80s. Are you referring to the NES refusing to run cartridges that were not authorized by Nintendo?
The usual explanation is that locking down the NES was to avoid the issues that caused the massive collapse of the home video game market starting in 1983 and ending with the NES [1].
I'm not quite sure what you mean by your question. Craphound is Cory Doctorow's website but he has long been a contributor to Boing Boing. Are you asking which website it appeared on first? I don't know, it is at least possible it was nearly simultaneous.
15 years ago, if you were computing you were almost certainly using a general purpose computer. Phones were more locked down, but they also were basically just for texting and calling, and could never serve as a primary device.
Now, most people spend a majority of their time in a locked down environment (phones and tablets) consuming locked down content (subscription only, on approved devices). The default 15 years ago was open, now it is default closed.
Yup. Content was typically owned in physical form as a DVD or CD (maybe mp3?) and wasn’t subject to disappearing because a company went bankrupt or merely changed license agreements (stuff that happens now even for “bought” items).
We've also abstracted everything away from files, for long enough that the idea of playing something locally rather than streaming it is becoming more and more niche. Same for applications, which have moved more and more to web-based subscription models.
It's insane, really. Many of my friends have no idea what to do when I send them an .mp3 or an .mp4 file if it doesn't instantly embed as a media player in their chat application of choice. For them, the only way to share images or videos is by uploading it to a commercial, ad-infested, sharing site.
By catering to the lowest common denominator we are creating a tech-illiterate society. This is on all of us who dumb down our features to make sure the users can understand.
Apple is intentionally hostile to sharing files and makes it very hard. It has improved slightly in recent iOS versions, but it's still super hard. You can't just email yourself mp3s and play them like normal music on your phone; you have to use a third-party app or a PC or something to transfer them to iTunes.
_RapidGator, MegaUpload, and Mediafire angrily enter the chat_
in all seriousness, there really are very few reasons for sharing audio files these days. The only ones I can think of are:
- finding a work that is not online (like a specific live record or like 85% of early 80s hardcore that hasn't been remastered),
- pirating (which music streaming services have made ubiquitous for 99.95% of people who consume music; thanks, Sean Parker!), or
- audiophiles buying $10,000 balanced headphone cables with gold TRS jacks (because mics don't belong in headphones, _obviously_) who only listen to test tracks in FLAC format (who don't have newer iPhones anyway)
as far as i remember, iPhones were able to play loose audio files, but you couldn't catalogue them into iTunes, which was annoying given that iPhoneOS (only 2010's kids remember this) didn't have a built-in file manager. moreover, most of those files came compressed (_Mediafire's anger intensifies_), and iPhones didn't have a publicly usable extraction utility, which made working with them a huge chore.
I think the MP3 player in its earliest form was a concession that we lived with (because it was easier than dealing with jackets of CDs and anti-skip sucking) that was absolutely destined for a streaming-only world (because my gut says that MOST people never wanted to get into the audio collection business; they only want to listen to their favorite songs from their glory days in college)
Yup. I taught a basic HTML course for grade school age kids this winter.
I had to start with how to make a file and folder and what the desktop was, etc. Basically, the issue was that their computing experience was all platform based. Everything for them was click a link, use the browser, walled google-classroom gardens, etc.
When I was a kid, the younger you were the more computer-savvy you were. I figured it was due to being exposed to computers earlier in life, and I thought that in the future young people would be tech geniuses from growing up immersed in the internet. I was a fool.
> The default 15 years ago was open, now it is default closed.
Yes, but 15 years ago far fewer people did far less with computing devices of any kind than they do today. Imagine if, for example, I made this argument:
"In 1992, the default for using a personal computer was to both create and consume information, e.g. writing and reading email, writing and reading documents. Today the default is to just consume information, e.g. YouTube, TikTok, AppleTV."
That would be true, but not because of locking devices down. It would be true because in the last thirty years the industry has expanded the number of users and their use cases by orders of magnitude.
The people who sent and received emails are still sending and receiving emails. Same for the people writing and reading documents. But all those people are also now watching TV and YouTube and TikTok on computers instead of analogue televisions in their recreational time AROUND the documents and emails.
And there are many, many people who just want to consume content for every "maker" generating content of any type, whether it be programs, documents, videos, music, whatever.
How many people are involved in the construction and operation of the Webb telescope? How many people just want to see pictures of what it sees?
Makers are a small proportion of humanity, and even for makers, making is a small proportion of our use cases for tech.
The next thing then becomes, "Why can't we use GP computers for consuming all this content?"
And the tongue-in-cheek answer is, "Because Linux." Optimizing a device for makers often makes it sub-optimal for consumers.
I am a bassist. But I listen to music far more than I play music, and I have no interest in constructing a player-bass like a player-piano. For when I listen to music, closed-source "information appliance" ecosystems beat open-source general-purpose ecosystems.
I maintain quite a few general-purpose-computing ways to manage music, but honestly, it's more because I have an aversion to corporate control than any thought that it's easier to be in complete control.
Joel Spolsky wrote that the key feature of Napster wasn't that music was free, it was that you could type the name of a song and listen to it right away. The challenge for us as technologists pursuing a free future is that information appliances do this better than general-purpose ecosystems.
The challenge that you describe is primarily political, not technological. It wouldn't matter one bit if Linux became the perfectly polished consumer OS today if its users are still locked out of DRM'd video services by their owners, for example.
> Joel Spolsky wrote that the key feature of Napster wasn't that music was free, it was that you could type the name of a song and listen to it right away.
If that paraphrase is true then Joel Spolsky has no idea what he's talking about on this subject.
Without a doubt the key feature of Napster was that students could download-- not stream-- music. For free. Students would consequently fill their harddrives with everything they thought they'd want to listen to in their lifetime, often buying 2nd harddrives to populate with more mp3s. (Well, that and pron.) Keep in mind many dorms were still using dialup connections during this period-- thus there was a pattern of students running to the library computer lab to download a few mp3 albums to zip disk (yes, zip disks) then bring them back to the dorms.
What facilitated immediate listening/viewing was sharing directories in Windows with the rest of the LAN on college campuses.
Quick digression to argue against the Gell-Mann Amnesia effect-- while your paraphrase of Joel Spolsky expresses an idea that is indeed false, I reserve judgement on anything else Spolsky has ever written (and frankly on whatever his verbatim words were on this same topic). I mean, what kind of pathetic, impulsive nimrod would I become if I simply threw out an entire body of someone's work on a single passing impression?
Edit: Just to cover my bases-- in every case I can remember, students who were playing music in their dorms or a shared space had winamp or some other such player loaded up with a playlist selected from thousands from their own collection. Napster was the place to download songs for your collection, not the place to build an ad-hoc playlist in realtime. Maybe there are cases where people were doing this. But the overwhelming supermajority of Napster users were using it because they could replicate a subset of the whole to build their own lifetime library of music. For free.
2nd edit: I almost forgot-- nearly everyone in the dorms would share a directory when they hooked up to the LAN. The process of finding immediately playable content was to browse the various shared directories in Windows, or use Windows search on them, which IIRC worked incredibly slowly or not at all. This was a common practice because again, most people were still on dial-up and couldn't download anything from Napster nearly as fast as you could on the computer lab networks.
Later, YouTube made both music and video files immediately playable. Around that same time, torrent tech started to improve to the point where you could stream while downloading as well as do keyword searches with vastly improved results. This is all to say that Napster kicked off a pattern of college kids grabbing free content, and this proliferation of content caused the development of realtime playback of discoverable content.
So what you paraphrased above isn't exactly "wet streets caused rain." It's more like "the issue isn't the rain-- it's the wet streets." I'm honestly not sure which is worse. :)
15 years ago the default was to consume content from TVs, which at the time typically had minimal computing power, although many people used them along with locked-down devices such as TiVo. I don't have any idea what kind of poll you could build to answer this question, but my bet would be that people spend about as much time on a general-purpose computer as they did in 2007, and the increase in phone and tablet usage reflects a computerization of previously free time.
Sadly a lot of people prefer this over an open environment, as locked-down environments usually work out of the box, and consuming recommended information is easier than searching by yourself. Centralization brings convenience at the expense of freedom, but it is much easier for us to feel convenience (or be frustrated by something inconvenient) than to understand the importance of freedom.
I'm not sure if people prefer it, or that they are strongly steered towards that environment.
My apple watch is super cool from a hardware perspective, but so locked down that I can't use it in the ways that I would like to. For example, it has a barometric pressure sensor, but I/my apps can't directly read the sensor data, instead a filtered update is pushed to the app approximately every 1.5 seconds. Why? I know the sensor is capable of reading at 20+ hz.
So that someone doesn't write an app that polls at 20+ Hz and burns the battery. Apple is optimized for user experience and simplicity; that includes precluding bad behavior. If you want a real-time weather station, a smart watch is not the right tool.
My hot water kettle heats water, why can't I wire it up to be my whole house heater?
I mentioned it specifically because I write software for hobby devices that poll at 20 Hz and use literally the same Bosch sensors as Apple. The use case is gliding variometers (audio altimeters).
The sensor that is in the Apple Watch draws significantly less than 1 mA when polled at 20 Hz. Without an EE degree, I have my devices, including the 90s-era processor and piezo speaker, running for 100+ hours on a 150 mAh button cell.
I cite this example because I KNOW what is possible. This is a pure software issue.
I suspect that Apple rate limits because the raw sensor data is quite noisy, and would look glitchy in a badly designed app. But there is a lot of signal in that noise that I want access to. Instead, people in the gliding hobby spend hundreds to buy devices that have the same sensor package as an iPhone 6, but are able to access the sensors in a way that is useful.
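For a sense of scale, the core loop of one of those hobby variometers is roughly this (a Python-flavoured sketch rather than real firmware; read_pressure_pa and beep are hypothetical callables standing in for the hardware-specific sensor driver and piezo output):

    import time

    SEA_LEVEL_PA = 101325.0  # ISA standard sea-level pressure

    def pressure_to_altitude_m(pressure_pa: float) -> float:
        # Standard barometric formula commonly used with Bosch BMP-class sensors.
        return 44330.0 * (1.0 - (pressure_pa / SEA_LEVEL_PA) ** (1.0 / 5.255))

    def vario_loop(read_pressure_pa, beep, hz=20.0, alpha=0.2):
        """Poll at `hz`, low-pass the altitude, and turn climb rate into a tone."""
        dt = 1.0 / hz
        filtered = pressure_to_altitude_m(read_pressure_pa())
        while True:
            time.sleep(dt)
            previous = filtered
            altitude = pressure_to_altitude_m(read_pressure_pa())
            filtered += alpha * (altitude - filtered)  # exponential smoothing of noisy samples
            climb_rate = (filtered - previous) / dt    # metres per second, + means climbing
            beep(climb_rate)                           # piezo pitch follows climb rate

    # On the real device this would be wired up as:
    # vario_loop(sensor_driver.read_pressure_pa, piezo.play)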
Well, some of my friends prefer Apple's walled garden because they think the applications there are better than open source ones, and consider policies requiring Apple to allow third-party app store as dangerous to users...
There can be many answers to the 'why' in such a context. While we can see 'a sensor' with some properties like data format, refresh rate, etc., that sensor is a mere 'implementation' of a desired function of the product design.
It used to be that the designed feature or function would be very close to the implementation, but that really hasn't been the case for a very long time. People aren't buying "a large bank of memory addresses" but rather "a device that contains pictures", for lack of a better example.
With the watch a customer isn't actually buying a package of sensors, ARM-cores, BMS, Lithium-Ion battery, and display, but they are buying the experience of having a device that tells the time, notifies them if something happens and can track some aspects of their life so they have an overview of it later (be it for turning their life into a game or simply tracking their energy use/consumption). And then all of that for at least an entire day.
Why would the implementation of the feature result in a sensor that can be polled at a high frequency but is actually only pushed at a lower frequency? It's anyone's guess but here is my guess:
The sensor has its own specs, but those are set in isolation and might differ based on implementation inside a casing, so the only way to get true data would be to have some form of calibration or offset, where a low-power CPU core for sensor tasks just gets the raw values and applies the offset/calibration. Next, there is power consumption, where they might have found the perfectly balanced duty cycle between data that has had enough time to cool down and be useful, and the power requirements of the sensor core and the sensor itself. So they have some sort of RT OS doing reading, processing etc. on a low-power core at a lower interval to get a 1% battery life increase. Do that for 10 sensors and suddenly it's worth it. It's quite an investment to have a team of people dive into the hardware, firmware, and application development to do all that, so it's likely not a matter of "how can we spend a multi-million R&D chunk on making the hardware less useful", but rather some "how do we make millions of mass-produced devices use a little bit less power" concept.
This is also where the push vs. pull comes from: instead of having every application do some interrupt or scheduling, you just ask to be part of a list of observers, and when the data changes you get notified. Much more efficient, and if everyone has to do that, there is a much smaller chance of the user experience suddenly changing and support personnel (phone, in-store) getting complaints about something they can't fix because some third-party app did it.
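That push model is just the classic observer pattern. A toy Python sketch of what it looks like from the apps' side (all names made up, not Apple's actual API):

    from typing import Callable, List

    class PressureService:
        """Toy push-based sensor service: apps subscribe, the OS publishes."""

        def __init__(self) -> None:
            self._observers: List[Callable[[float], None]] = []

        def subscribe(self, callback: Callable[[float], None]) -> None:
            self._observers.append(callback)

        def publish(self, pressure_pa: float) -> None:
            # The OS decides when this runs (say, every ~1.5 s), so one filtered
            # sensor read fans out to every interested app instead of each app
            # polling the hardware on its own schedule.
            for callback in self._observers:
                callback(pressure_pa)

    service = PressureService()
    service.subscribe(lambda p: print(f"weather app sees {p:.0f} Pa"))
    service.subscribe(lambda p: print(f"altimeter app sees {p:.0f} Pa"))
    service.publish(100890.0)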
> Now, most people spend a majority of their time in a locked down environment (phones and tablets) consuming locked down content (subscription only, on approved devices). The default 15 years ago was open, now it is default closed.
I love having the choice. Most of the time I want the locked-down thing. A bunch of extra "freedom" I'm not actually using is just a hassle and liability when all I wanna do is take and crop some photos, or do some writing, or whatever.
Now we have that option, and general-purpose computers are also so cheap and widely available that you can easily score decent ones for free.
Far from being some computing apocalypse, the situation now is excellent. Computers were a barely-tolerated near-failure for most normal use cases, but now there are options that don't suck for normal people wanting to do normal things. And general-purpose computing is more accessible than ever.
Maybe some Dark Age of Computing still lurks, but it's not happened yet, and things have only gotten better so far.
IMO the worst thing about the current computing era are the handful of Web monopolies and the way "AI"/machine-learning has mostly enabled systems working against us rather than creating agents working entirely on our behalf.
This exactly. Mobiles are now the standard general purpose computers and they're all locked down. We don't really own our devices anymore nor our data :(
More seriously, I wonder if we should look at the raw numbers. Perhaps the people who now use a mobile as a general purpose computer did not even have a computer before. And likewise maybe more people have a PC-type device than before, but still represent a smaller fraction. I take heart in this.
> And likewise maybe more people have a PC-type device than before, but still represent a smaller fraction. I take heart in this.
I also question this whenever I see that "people mostly use locked-down devices now". But I don't have any numbers; honestly it wouldn't surprise me to be wrong.
> Edit: see e.g. the rise of the home lab movement
Sure, but care should be taken for this not to skew the numbers. That more people can now afford to have a mini-datacenter in their house – and therefore there being more computers out there – doesn't mean that there are more distinct people whose main computing device is a general-purpose computer.
To me, the people interested in the home lab movement are those who already were interested in doing "more" with computers, as opposed to just "consuming recommended content".
I know it's my case and I do it now because computers are cheaper, and it's easier to find quiet ones.
> Sure, but care should be taken for this not to skew the numbers. That more people can now afford to have a mini-datacenter in their house – and therefore there being more computers out there – doesn't mean that there are more distinct people whose main computing device is a general-purpose computer.
I've always done this, even when I didn't have a lot of money. Server hardware has always been cheap because most of its audience won't even think about buying it secondhand; they will only buy new, with warranty and 4-hour support. I got $400 fibre channel cards for $10 because literally nobody wants them, and companies throw out perfectly good cards when the warranty expires. It's a joke.
In the early 2000s I had Sun Fires and Netras. I had an HP 9000 HP-UX box with 1 GB (in the day when that was a ridiculous amount of memory). These days I have HPs.
A home lab has always been within reach. In fact, I find it harder now due to energy consumption, whereas energy used to be cheap.
Maybe I didn't know where to look in the early 2000s, but I do find it much easier now. Basically all of my home lab is what my work was going to throw away – so free.
However, the biggest issue to me, up until very recently, was noise. I have a ~4-year-old Lenovo Think Server that is quieter than some laptops. I also have my eye on a somewhat older HP rack-mount server that is also extremely quiet and should be decommissioned soon.
But up until around 2010, a rack-mount server would have driven me insane. Ditto for switches. Around 2012 I first saw a then-new HP model that was fanless. But it only did 100 Mbps; all the gigabit models came with a jet engine attached. Now I've managed to find a fanless gigabit Ruckus with a few 10G ports, since they're starting to be old enough.
Throwaways from work were one major source of my home lab too. However for me this has become a lot harder in recent years, because our company scrapped its own datacenters and moved to the cloud. A hallway full of decommissioned servers is now extremely rare. We have some "computer rooms" left (not allowed to call them "datacenters" anymore) but it's just for the stuff that really must be on-site.
Other than that, online marketplaces. Not eBay generally, because its auction system and international reach drives prices up, even for items which normally gather low interest. I tend to use local buy & sell websites where people usually offer lower prices than advertised and these kinds of items are not very popular so they tend to go cheap.
I've never seen fanless servers, but my home lab is not something I keep running 24/7 anyway. And it sits in a dedicated room with my 3D printers and electronics workbench so it's not the kind of place I hang out for peace and quiet anyway :) It's my mancave really (though, for lack of a partner, right now my whole apartment is a mancave :) ).
My 24/7 stuff I do pick for energy-efficiency and to a much smaller extent, noise. I have 4 NUCs for this stuff. 2 nice ones with 4/6 cores and 64GB RAM, and 2 ancient ones (one atom and one skylake IIRC) which are very low power though. They're the ones that keep running when everything switches to UPS.
I'm not really big into networking, so I have some semi-managed TP-Links that I bought new for 35 bucks. They're gigabit, 8 ports, and can do basic stuff like VLANs and mirroring, which is all I need. I'm not doing any CCNA stuff or anything.
I really didn't know anyone that didn't have a computer in the years 2000-2010. Even old people with hardly any PC knowledge. I fixed enough old crappy PC's for them :)
And yeah I know you can unlock Android but it's really really fringey. Most people don't. And that's like 95% most.
If an artist can't recoup their costs in, say, thirty years, they are bad at the business side of things and should get out of the way to allow other creative people to build on their work.
The novel Nightmare Alley was published in 1946, and the author died in 1962:
> If an artist can't recoup their costs in, say, thirty years, they are bad at the business side of things and should get out of the way to allow other creative people to build on their work.
Can we get something straight? In the vast majority of cases it's the massive content-industry corporations that are doing the cost recouping here, not the artists themselves. Artists get tiny peanuts, or even less if they weren't known enough to fight the corporate lawyers of their own publisher.
"Poor artists recouping their life work" is a nice PR message from publisher corporations to defend status quo though.
Not to take you two steps back but copyright doesn’t really apply well to software to begin with. Consider this:
Copyright applies only to creative expressions. Purely functional expressions are not copyrightable.
That means that software is the embodiment of both creative and functional expressions that are commingled. They are so mixed together by the time the buyer gets it, there is no simple way of separating the two. That weakens the strength of copyright over the whole.
It also misleads folks into believing everything about software is protected, even silly things like header files or trivial functions. Because these purely functional expressions are so highly intertwined with the rest, folks just assume they have a legitimate copyright over them! Even judges get this wrong!
So software isn't inherently copyrightable except for any creative expressions it contains. What counts as creative expression?
Traditionally, it didn’t take very much creative expression to be copyrightable. A screen design is copyrightable. A color scheme could be copyrightable. Code that saves a file after applying whatever edits the user has made is NOT copyrightable, as it is a purely functional expression.
But what if I came up with a cool/fast way of finding primes and coded it in C? I created it. It is unique in the entire world! It took many hours to come up with. Certainly I should be able to protect my work, right?
No. Mathematical algorithms are not creative expressions but purely functional expressions. It doesn’t matter how long it took to create or how the algorithm was discovered.
But what about the code to implement the expression? The code is the expression of the algorithm. There may be a bit of creativity in the code, but we are back to functional expression as opposed to creative expression. Where’s the line? It’s fuzzy enough considering it in the abstract but impossible to consider given only the executable software itself.
Personally, I think software should have its own copyright protections, unique from other written works. What that should look like and how it should work is an open question that should be discussed.
Remember, the purpose of copyright is not to enrich the author. The purpose is to “promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” If any copyright scheme fails to “promote the progress of science and useful arts”, it is not serving the public as intended.
Many people's only computer is locked to programs from a single source owned by the operating system developer, with adware that is impossible to remove without defeating the device's security.
Where downloading apps from another source is possible, multiple levels of popups are triggered with warning messages optimised through A/B testing to minimise clickthrough rates.
Widevine and similar hardware DRM are now standard.
General purpose computing is still alive, but it's taken serious hits.