The biggest issue I've found while trying to do this is a very high rate of battery failure while plugged in 24/7 under load. 4 of the 5 phones I've done this with have become spicy pillows in the 1-2 year timeframe. They were all Samsung or LG devices, so not complete bottom-of-the-barrel garbage batteries either. IME if you need a cheap server, a used enterprise micro PC has a much lower chance of burning your house down, can be found on eBay for literally $30, and can be easily stuffed full of SSDs.
With most new phones not being designed to have swappable batteries like the S4 anymore, could this be modified by replacing the cells with a capacitor and then powering the phone from USB?
Would be worth an experiment potentially, but it’ll depend on where the protection circuitry lives. A modern BMS chip is relatively complicated and will freak out if the attached “battery” does things that wouldn’t be safe for a LiPo cell (e.g. a capacitor will happily drain to 0V, but a LiPo drained to 0V has probably been destroyed).
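A toy sketch of that mismatch in Python (the cutoff voltages and behavior are illustrative assumptions, not any particular BMS chip's datasheet values):

```python
# Toy model of BMS protection thresholds. A LiPo must never be taken
# below its undervoltage cutoff, while a capacitor's voltage falls all
# the way to zero as it discharges -- so a cap standing in for a cell
# will sweep right through the cutoff and trip the protection.
LIPO_UNDERVOLT_CUTOFF = 3.0   # illustrative; real cutoffs vary by chip
LIPO_OVERVOLT_CUTOFF = 4.25   # illustrative

def bms_faults(voltage: float) -> list[str]:
    """Return the protection faults a typical BMS might raise at this voltage."""
    faults = []
    if voltage < LIPO_UNDERVOLT_CUTOFF:
        faults.append("undervoltage lockout")
    if voltage > LIPO_OVERVOLT_CUTOFF:
        faults.append("overvoltage lockout")
    return faults

# A capacitor discharging from ~4.2 V crosses the cutoff long before
# it's actually empty, so the BMS disconnects the "battery".
for v in (4.2, 3.7, 2.5, 0.5):
    print(v, bms_faults(v))
```

So even if the phone itself would run fine from USB power, the protection logic may refuse to play along unless it's fooled into seeing cell-like voltages.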
I plugged an old iPhone SE into the Lightning port on a nice speaker and locked the screen to Spotify -- with auto-brightness it worked fantastically.
Until it also turned into a spicy pillow after about a year and a half.
Really wish there were some kind of system preference to run off wired power only and just not charge/discharge the battery at all.
Slashdot is where I found out about 9/11. I got up one morning and the NY Times (which I usually read first) wasn’t loading. Neither was CNN. After one or two more news sites, I shrugged and went to Slashdot, and that was where I saw the news.
OP probably refers to Lenovo ThinkCentre mini PCs; they're well made, quite expandable and cheap, but if you can't find any, check out used Chromeboxes. They can be unlocked and reflashed with Coreboot in a single step, which makes them more secure besides allowing a native install of your OS of choice. Hardware quality is usually very good.
Chromeboxes aside, I've installed various Linux distros on different MiniPCs and the outcome has always been great; the only caveat is to check whether your particular model is happy running without a monitor. In some cases they weren't and refused to boot, but I solved it by connecting to the HDMI port one of those cheap "virtual monitor" / "dummy monitor" dongles sold to perform this exact function.
From the above site I landed on the Dell Wyse 7020 / Zx0Q... Some models have 4 cores (I also accidentally bought a dual-core version), support 16GB of DDR3, are passively cooled without a fan, and cost me about $50 on eBay (add $20 for a 16GB RAM kit to bring the total to $70).
It's about as fast as a Raspberry Pi 4, and has been great for letting me spin up a docker container to play around with a database or a message broker or what have you.
I limited my search to only fanless designs though; if you are fine with fans then I'm sure there are more performant options out there.
Sibling comment has good suggestions. I personally like Dell Optiplex models, e.g. 9020. Looks like right now the best you can do is about $40-45 shipped for a complete one, but I've snagged about six in the last year or two for $20-35. They come and go. Beware of sellers posting the same thing for $100+; they're worth $50 max.
FWIW, Samsung batteries explode even if you don't use them as servers, and in fact it was an issue with their batteries being trash (I think I remember it being that they had been contaminated with metal filings, but I might be entirely misremembering the verdict).
While I don't think you're wrong to say that the trends/regime were the same then as today, that's very much more obvious in hindsight than it was at the time. You've got to recall the triumphal attitude in the West towards China circa the run-up to WTO admission in the late 90s. Read Clinton on the topic: https://www.iatp.org/sites/default/files/Full_Text_of_Clinto...
The trendy thing was to believe that exposure to global markets and the Internet would inevitably result in further liberalization, and anyone who disagreed was probably a car-burning anarchist. It was only in the 2010s that the fact that reality wasn't quite that beneficent started creeping into mainstream neoliberal perspectives.
To be fair, China really had seen massive changes under Deng, so the idea that continued reform would eventually result in major political liberalisation wasn't completely insane. It went from Mao to a semblance of rule of (in retrospect, obviously "by" rather than "of", but whatever) law, so was extending the trend line really so implausible?
In retrospect, obviously, yes, it was implausible. But that general vibe meant that many really did think that the Sino-British Joint Declaration would be respected.
Of course, the actual history of how that whole thing turned out w.r.t. tearing up the treaty and burning the bits in the fires of national-security laws, has altered perspectives on how well Taiwan would cope under rule from Beijing. Now that the concept of one country, two systems - that idea of a special status - is widely understood to be a meaningless platitude, the assumptions that were common at the time of HK reunification are totally invalid. You can see this in trends of polling of Taiwanese attitudes to reunification: as HK has been strangled, the Taiwanese public has come to understand the worthlessness of any of the sort of assurances that were offered to HK, which at the time assuaged a lot of objections.
Finally, there's a matter of basic military calculus: HK is a lot harder to defend than Taiwan. It's not a natural polity grounded in geography, and not potentially self-sufficient in the same ways as Taiwan. Thatcher didn't agree in 1984 because she loved and trusted Deng, but because he could credibly have taken it by force in a day and told her so, so, in a very real sense, there wasn't much of an alternative.
It's been kind of enlightening seeing leadership at $BIGCORP push AI coding solutions like they're guaranteed to be a 10x increase in velocity in every context. Feedback from ICs isn't wholly negative - there are definitely situations where it can be useful, like quickly grokking common applications of common tools, or semi-intelligently applying a diff pattern that is more than just a regex - but there's a complete unwillingness to hear any feedback that isn't "this tech is a total paradigm shift that allows us to finally get rid of all these pesky and expensive developers". Reports of, for instance, the introduction of subtle bugs that take extended amounts of time to understand and fix, are met with outright hostility and accusations of incompetence. When a complex defect or escalation drags on, a common question is "why haven't you asked AI to fix it yet", betraying a total misunderstanding of the sorts of tasks the tool is applicable to. The kool-aid is not so much drunk as rectally infused. If valuations are based on this sort of outlook, whew, this market is totally fucked.
I think a weird irony is that the model's inability to know when its response is good is both the reason the output is often not useful, and the reason that, when it is very useful, they can't capture the value efficiently.
Like, I was encouraged to use AI assistants more after a colleague saved a bunch of time debugging some issue where copilot (IIRC) immediately identified an obscure issue. Probably in that case, we should have been willing to pay a decent amount for that one valuable response -- it may have saved a significant amount of engineer time. But I've also had copilot give me stuff that isn't even syntactically correct, or had copilot chat make up a newer version of a language and tell me to use it. Cases where it's a waste of time are worth negative dollars.
Sounds like a good ol’ fashioned case of confirmation bias. ‘Look at this one good suggestion the AI made! Wow!’… all while ignoring the many unhelpful outputs.
I don't think it's just confirmation bias where we ignore some bad results (which presumes we know up front that they're bad) -- I think because these models are specifically RLHFed to learn what we think looks good, you can't judge quality just by looking at the outputs and deciding whether they seem plausible. You actually have to do the follow-up of seeing whether they're correct/useful, which may be much more involved.
E.g. to judge the quality of a particular coding example, one may need to have/create a project in which that code would be used, install the actual libraries it invokes, create data for it to operate on, etc. In the cases where the assistant was basically giving me wrong information about Scala 3 metaprogramming capabilities, I could only determine it was BS by actually trying to compile the program (in the context of a project with an sbt config that pulls in the relevant libraries, sets appropriate flags, etc.).
But of course the model doesn't do this, the high-level exec doesn't do this, and so "these examples look great!" can be an honest evaluation, based on the inability to actually meaningfully validate.
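To make that concrete, here's a hypothetical sketch of what meaningful validation looks like: actually executing a "generated" snippet against known cases instead of judging it by eye. The snippet, function name, and test cases below are all made up for illustration:

```python
# A "model-generated" snippet that merely *looks* plausible until run.
generated = """
def slugify(title):
    return "-".join(title.lower().split())
"""

def validate(snippet: str, cases: list[tuple[str, str]]) -> bool:
    """Run generated code in a scratch namespace and check real cases.

    Eyeballing the text can only tell you it *seems* reasonable;
    executing it against inputs is what tells you it's correct.
    """
    ns = {}
    try:
        exec(snippet, ns)              # does it even parse and define the function?
        fn = ns["slugify"]
        return all(fn(inp) == out for inp, out in cases)
    except Exception:                  # syntax errors, missing names, bad behavior
        return False

print(validate(generated, [("Hello World", "hello-world")]))  # → True
```

The point being that this step costs real effort (a working environment, real inputs, expected outputs), which is exactly the step that a plausibility-only review skips.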
Love the example of the guy using an LLM all day to make a simple CRUD app. Basic auto-generated CRUD apps have existed forever. I still remember showing my boss my Django admin, built in a day, back in 2005. He told me to tell no one about this because he was afraid he would have to lay off devs.
But generally yes, latency is too high and bandwidth is too low for synchronous voice. The upside is that real-world range often exceeds that of analog UHF, even compared to significantly lower frequencies, e.g. GMRS.
Re ISM band usefulness: yeah. I grudgingly admit that the propagation characteristics of HF and probably even VHF aren't a great match for the ISM governance model, but at least everywhere I've ever used radios, UHF ham bands are dead as a doornail and don't (fine, generally) propagate far enough for small numbers of negligent users to create major problems for large numbers of users. Take some 400 MHz UHF and make it ISM. 902-928 MHz ISM has a ton of great stuff like LoRa going on, but access to 400 MHz bands would be a huge shot in the arm for the practical usability of projects like Meshtastic, without significant risks to the general usability of that chunk of spectrum.
Another interesting thing about the history of protocol development in ISM bands is how the need to cope with noise/overuse/other forms of spectrum degradation has spurred enormous advances in encoding and signal-processing techniques, quite contrary to the initial expectation of useless garbage dumping zone bands.
Gotta love how they are pushing full steam ahead on the technology and leaving export as a to-be-solved problem. Oh, except the cloud vendors have full rights to back up our keys.
An interesting facet of this that makes me feel somewhat more charitable to cashless payments in that context is that it's very easy to perform cashless payments in Japan pseudoanonymously, in that you can buy an IC card from a vending machine, load it with cash, and use it for payments all without explicit links to your KYC'd banking identity. Sure, if you attracted individualized security state attention they could probably pull security camera footage and figure it out, but that approach doesn't scale into turnkey tyranny; sort of like how $5 wrench cryptanalysis doesn't prevent widespread strong cryptography fundamentally altering the envelope of what kinds of automated, at-scale privacy abuses are possible.
This is a fairly stark contrast with the West, where pseudoanonymous cashless payments basically don't exist. The closest thing (stuff like XMR aside, where the chance you can use it to buy a coffee is very low) is prepaid Visa gift cards, but the UX on these is terrible for a lot of reasons, so they aren't anywhere close to the practical usability of cash-charged IC cards in Japan.