IBM did ship a few generations of Itanium hardware; they just smartly never bet the farm on it.
MIPS and SPARC were always a little weak versus their contemporaries. If SGI had held on a bit longer with the R18k, that would have been enough time to read the tea leaves and jump to Opteron instead.
PA-RISC and Alpha had big enough captive markets, and some legs left, but both got pulled too soon. That, paradoxically, might have led to a healthier Sun that went all in on Opteron.
You are casually transitioning between two things: 1) something that is pretty well understood from the history of online commercial computing (i.e. since the 1970s), namely owned hardware versus leased/timeshared/rented hardware or services, and 2) concepts of financial instruments that many readers here are probably not intimately familiar with.
If I needed a generator to run a remote mining operation, and you just told me to buy energy futures instead, we'd be having a silly discussion. Whether it makes sense for me to rent or buy the generator has more to do with governments, tax laws, and risks that ultimately manifest as cashflow decisions. You have a valid thread you are pulling on about the economics of general-purpose compute and to whom they accrue, but your argument needs a lot more care to define and make your case, and to justify why it is okay to dismiss the outlier cases, for instance.
> If I needed a generator to run a remote mining operation, and you just told me to just buy energy futures instead, we'd be having a silly discussion.
Exactly. Just because they are similar in some senses doesn't mean that they're fungible. Generator manufacturers still have a business even though you can purchase energy futures.
It would be like if you needed to buy a generator and instead I offered you a giant cable with all the same economics as buying a generator. In that case, yeah: if I am giving you all the same economics as owning your own generator, but cheaper, you would be stupid not to take my deal. It's got nothing to do with energy futures. My suggestion is to copy what I am saying into a chatbot and ask it what "looks like a duck, quacks like a duck" means, and what's going on here.
"giant cable" -- this kind of thing does obviously happen; rural electrification is heavily subsidized by governments (see the rest of my comment..). But it is also not a realistic option in plenty of cases, which is why industrial companies like Cat and Cummins are happy to sell multi-million-dollar generators. If your approach to the subject is to copy it into a chatbot, that might explain why this discussion is ultimately specious; a casual, even jocular, approach is not really adequate to make any point here.
The NS UI looked really odd to me until relatively recently, having grown up on Mac OS Classic and Win9x. Now I look at it and think, damn.. that aged well.
It's probably lost to history now, but portions of Windows 95 drew partial inspiration from earlier versions of NEXTSTEP, such as the window buttons and titlebar.
So OpenStep for Windows NT ran under Windows NT, and it looked like Win32 GDI, but it wasn't Win32 GDI. It was still Display PostScript, but the GUI elements (NSButtons, etc.) were themed to look like Windows NT instead of NextStep. For whatever reason those themes were included in Mac OS AppKit as well.
Yes, but you pay a real cost for those choices too: a management plane that is non-deterministic, imperative, and full of highly mutable state, not to mention basic stuff like the package manager's metadata and cache not being shareable, and package installs all having to be serialized because they all call shell scripts as root. These limitations constrain even tools like Dagger from providing a first-class interface to apt like there is for apk, because any deb could have rm -rf / as its postinstall script.
A lot of normal users don’t feel these pain points or tolerate them by sidestepping the whole affair with containers or VM images. But if you’re in a position where these things have an impact it can be extremely motivating to seek out others who are willing to experiment with different ways of doing things.
I'm assuming a friendly tone here, and in a similar tone, it's funny because I also think Nix is not adopted because its benefits just aren't worth the cost to users (devs).
I did indeed deploy Nix to moderate success in a prior gig, but have held back pushing it at my current one; we're simply not at the scale where the problems that Nix solves are worth the cost (yet, maybe ever).
For a less controversial take, consider alpine's apk package manager. For a single-use container that runs one utility in an early dockerfile stage, apk can probably produce that image in 2-3 seconds, whereas for an apt-based container it's more like 30 seconds. That may not matter in the grand scheme of things or with layer caching or whatever, but sometimes it really does.
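To make that concrete, here is a minimal sketch of the kind of early-stage tool container in question (base image tags and package names are illustrative, and the actual timings will vary with network and mirror speed):

```dockerfile
# Alpine: apk keeps flat index files and runs no maintainer scripts,
# so this stage typically resolves and installs in a few seconds.
FROM alpine:3.20 AS tools-apk
RUN apk add --no-cache curl jq

# Debian: apt fetches and parses much larger indexes, then runs each
# package's maintainer scripts serially as root, so the equivalent
# stage usually takes noticeably longer.
FROM debian:bookworm-slim AS tools-apt
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl jq \
    && rm -rf /var/lib/apt/lists/*
```

The gap compounds when a CI pipeline rebuilds such stages dozens of times a day without a warm layer cache.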
For a while in the 2000s Cisco was one of the biggest users of FPGAs. If you consider how complicated digital designs have been for many decades, and the costs of associated failures, FPGAs can certainly be cost-neutral at scale in production lines, especially accounting for risk and reputational damage.
Also, there is, and pretty much always has been for decades, a large gamut of programmable logic: some useful parts cost not much more than a mid-range microcontroller, while the top end is for DoD work, system emulation, and novel frontier/capture regimes (like "AI" and autonomous vehicles). Few people ever work on those compared to the cheaper parts.
FPGAs are still quite common in niche hardware like oscilloscopes or cell towers, where the manufacturer needs some sophisticated hardware capabilities but isn't manufacturing enough units to make the NRE for an ASIC worthwhile.
Also time to market: I have a friend who worked for Alcatel-Lucent, and they would use FPGAs while Nokia would use ASICs. They saw it as a big advantage, since if there was a problem in part of the ASIC, or if you needed new features outside the original scope, the time and cost of a respin was massive compared to fixing problems or implementing new standards in the FPGA bitstream!
Eventually Nokia ended up buying Alcatel-Lucent, and he left not too long after; I'm not sure what their current strategy is.
FPGAs are way superior at running a lot of parallel, pipelined logic. They can have SERDES that communicate at gigabits per second, and you can build logic that reacts in nanoseconds to external I/O with zero jitter.
They're not just moving packets around; they're producing streams of data that pass through dozens of stages and end up being spit out into a DAC to produce radio signals (and vice versa: signals into an ADC and then through many demodulation stages).
All the modulation, demodulation, framing, scrambling, forward error correction encoding/decoding, etc. has to happen continuously, all at the same time.
There are some open source software defined radios that can do that for one or two stations on a CPU at low data rates, but it's basically impossible with current CPUs for anything like the number of stations (phones) that are handled in one FPGA with decent data rates, latency etc.
You'd probably need a server rack's worth of servers and hundreds of times the power consumption to do what's happening in the one chip.
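For a rough sense of scale, here's a hedged back-of-envelope in Python using LTE-like numbers for a single 20 MHz carrier. Every figure is an order-of-magnitude assumption, not a vendor spec, and it counts only the demodulation FFTs and FEC, ignoring channel estimation, equalization, scheduling, and everything else:

```python
import math

# Assumed LTE-like parameters (order-of-magnitude only):
#   20 MHz carrier -> 2048-point FFT, 14 OFDM symbols per 1 ms subframe
#   turbo/LDPC decoding -> a few hundred ops per decoded bit
fft_size = 2048
ffts_per_sec = 14 * 1000                      # 14 symbols every 1 ms
fft_ops = 5 * fft_size * math.log2(fft_size)  # classic ~5*N*log2(N) estimate

demod_gops = ffts_per_sec * fft_ops / 1e9
fec_gops = 150e6 * 300 / 1e9                  # 150 Mbps payload, ~300 ops/bit

print(f"demod FFTs: ~{demod_gops:.1f} GOPS, FEC: ~{fec_gops:.0f} GOPS per carrier")
```

Even with these charitable numbers, one carrier wants tens of GOPS sustained under hard sub-millisecond deadlines, and a base station runs several carriers and sectors at once, which is exactly where dedicated pipelined hardware wins.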
They really aren't slow, but their performance and battery life are greatly eclipsed by Apple's ARM chips. I could live pretty comfortably on a Lenovo P51 (something like a 2017 MBP) under Linux or FreeBSD if I had to. Also, a non-negligible amount of performance was lost to the security gaffes and the microcode and OS mitigations for them.
Makes sense; there are a lot of reasons why having some "big iron" might have been practical in that era. x86 was not a full contender for many workloads until amd64, and a lot of the shared-nothing software approaches were not really there until later.
If you have enough to worry about someone beating it out of you, maybe putting some into professional multiparty custodial systems and/or one or more cold wallets with trustees is a good idea. That idea scales fine with geopolitical risk.
Your "hot wallet" should be like cash: no more than you are prepared to lose/surrender at once.
None of these things are mutually exclusive. Holding a large pile of any one country's fiat is probably the dumbest move. Ownership of physical assets that generate revenue is the smartest.
And to add: your "hot wallet" should be bank-issued credit cards for everyday purchases or emergencies, which you are prepared to lose/surrender the moment someone tells you to hand over your wallet.
Later, log into the accounts, flip the toggle to stolen/lost, and mark any unauthorized purchases. Then sleep peacefully knowing new credit cards are in the mail and you are only out the cost of the physical wallet that held the cards.
Most people of any significant wealth would have made that delegation long ago to private-client banking, where a team of people oversees all aspects of the accounts. So yes, you are a fool not to if you have wealth proportional to the risk of having it beaten out of you in your geopolitical region.
A custodial service is a bank that operates on a different network and is not FDIC insured (and FDIC coverage only goes to $250k anyway). It could be insured privately. The interest on an FDIC-insured deposit account is well below the true inflation of fiat currencies.