Something that just dawned on me is how far we have come regarding the compute resources available to the average person living in the developed world. Consider the hardware that Larry Page and Sergey Brin used to start Google in 1998. According to this page (https://blog.codinghorror.com/google-hardware-circa-1999/), they had 10 processor cores running at speeds of 200-333MHz, 1.7GB RAM, and 366GB of distributed hard disk storage. This configuration probably cost them at least $10,000 to build. Now consider the Raspberry Pi 4. Spend $45 on the 2GB RAM variant and a bit more for a 512GB drive, and for roughly $100 you would have the same compute and storage resources Larry Page and Sergey Brin used in 1998 to start one of the world's most successful Web companies. In fact, our smartphones could drive 1998-era Google if configured with enough storage.
From a software standpoint, imagine the possibilities of millions of people walking around with devices that are as powerful as the compute resources Google had in 1998. It's realizations such as this that make me excited.
What I love about the Raspberry Pi is the possibilities it brings at affordable prices. For example, students learning about how distributed systems work can build a cluster of Raspberry Pis for just a few hundred dollars. They have access to the same open source software that major tech companies use for their infrastructure, like Linux and various distributed software projects such as Apache Spark.
In an age where sometimes I'm cynical about the direction of tech and our industry, it's realizations such as this and product announcements like this new Raspberry Pi that make me remember why I love computing.
On the other hand, look at how ridiculously powerful our computers are, and yet they're still so slow. We've matched orders of magnitude more computing power with orders of magnitude slower software.
I was amused to read this morning that Microsoft's new Terminal client has a "GPU accelerated text rendering engine". WT actual F??? You can't run a terminal window without a few gigs of video RAM and a couple of teraflops of GPU horsepower??? Boggle!
Historically, the main reason text rendering was kept so simple was to save memory and computing power. And even today, a GPU is designed to do graphics; it's in the name. Text rendering is graphics, so why not use it? It's more efficient than a CPU, and the reason it isn't done more isn't that one needs a huge GPU, but rather that it's more difficult for the devs.
Because CPUs today are powerhouses and text is a rather simple task, devs take the easy way and do text rendering on the CPU. But when you throw Unicode, anti-aliasing and high resolution into the mix, things that modern terminals should support, it stops being so simple, and the GPU regains value.
If you're on, say, a 4K 10-bit display, there's quite a bit more pixel data to push than there used to be. You still don't need a GPU just to draw text, but since you already have one, using it will provide better performance and likely consume less power.
Since I already have one, I might want to use it for other things. The reason computers are slow is this sort of "since you already have that resource, might as well use it" thinking - which makes sense if only one program does it, but if almost all programs do it then it breaks down quickly.
The same could be said for the CPU, and this helps free up the CPU for other things. It's a trade-off.
But the underlying problem here is higher resolution screens than needed. Most people don't actually need a 4K display. Sometimes they can't really see small print that easily anyway, and what they need is UIs designed with large print in mind.
CPUs are more generic though, and pretty much every GPU-accelerated text drawing implementation I've seen allocates permanent GPU resources (mainly textures). It isn't impossible to avoid that, but if you only allocate GPU resources on an as-needed basis and release them once you're done, you introduce latency that invalidates any gains you may have. The alacritty terminal linked elsewhere in this post, for example, keeps a bunch of atlases around with hundreds of glyphs. These are local to each instance of the program (thus using both CPU and GPU resources), and on macOS and Windows they ignore any system-wide glyph caching the APIs it uses may already have - caches that will be created, and thus resources allocated, anyway when it tries to rasterize those glyphs for its own use.
FWIW, yeah, I agree that most people do not really need 4K displays, but that is another matter.
Which is exactly why you want to save the GPU for what it does best – drawing pixels.
> and pretty much every single GPU accelerated text drawing operation i've seen allocates permanent GPU resources (textures mainly).
Makes sense as a concern, and it's not something I've looked into. (I don't use alacritty.) On the other hand, how much memory are we actually talking about? On my system (total screen resolution 2880x1800), a typical terminal glyph has a roughly 14x16 bounding box; let's bump that up to 20x20 to account for padding. Stored as 8-bit RGBA, that would take 1600 bytes. An atlas of "hundreds" of glyphs would then be expected to take up on the order of hundreds of KB... which seems pretty negligible? A larger font, multiple atlases, or more characters per atlas would require more memory, but I still don't see how you get to an amount worth worrying about. I could be missing something.
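To put rough numbers on it (reusing the padded 20x20 glyph size I guessed above; a real atlas will differ):

    # Back-of-the-envelope glyph atlas memory estimate (sizes are guesses)
    glyph_w, glyph_h = 20, 20        # padded glyph bounding box, in pixels
    bytes_per_pixel = 4              # stored as 8-bit RGBA
    glyph_count = 512                # "hundreds" of cached glyphs

    atlas_bytes = glyph_w * glyph_h * bytes_per_pixel * glyph_count
    print(atlas_bytes / 1024)        # ~800 KiB per atlas, per program instance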
One important thing you are missing is that you are focusing on a single instance of a single program doing that. These caches are not shared among programs, and unless you are only running a single program at a time on your OS, if every program abuses resources like this (not necessarily this particular kind of abuse, but not caring about resources in general), then you get a slow computer.
Computers feel as slow as ever (which was the topic a few nodes above) despite being much faster in theory, not because of a single program, but because all (or, well, the overwhelming majority of) the programs on your computer abuse resources, even if only a little. It is death from a thousand little abuses.
I'm sure my rose-tinted memories of the olden days are entirely unfounded, but I've written code using a VT100 plugged into a VAX 11/780, and the first machine I had at home was an Osborne 1, with a 4 MHz Z80 processor and 64 KB of RAM (the top 4 KB of which was the memory-mapped video RAM)...
It seems the increase in utility we've gotten from being able to use teraflops of massively parallel GPU with gigabytes of RAM available doesn't exactly match the six orders of magnitude or so more resource use... "It scrolls soooo smoothly, and you can use the poop emoji in your source code!!!" ;-)
A large terminal window is a surprisingly large amount of pixels. If you want to keep up with fast scrolling text you need some tricks. But GPU accelerated doesn’t necessarily mean you need gigabytes of memory or teraflops of compute power.
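For a rough sense of scale, here's a sketch assuming a full-screen terminal on a 4K display that naively redraws every pixel each frame (real terminals do far less work than this):

    # Worst-case pixel throughput for a full-screen 4K terminal (illustrative)
    width, height = 3840, 2160       # 4K display
    bytes_per_pixel = 4              # 8-bit RGBA framebuffer
    fps = 60                         # smooth-scrolling target

    frame_bytes = width * height * bytes_per_pixel
    print(frame_bytes / 2**20)       # ~32 MiB per frame
    print(frame_bytes * fps / 2**30) # ~1.9 GiB/s if every pixel is redrawn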
You don't in most cases, but in the case of an enhanced terminal with things like full color emojis and smooth scrolling rather than line-by-line, I'm sure GPU acceleration will be welcome.
I thought about this the other day after I ripped on a similar concept. Yes, at first it seems like an incredibly dumb waste of resources. This is true if the GPU is used only to add useless fluff and effects to the rendering process. However, the G in GPU stands for graphics, and accelerated graphics is nothing new. An early example I remember is the Windows accelerator cards. https://en.wikipedia.org/wiki/Windows_accelerator
Though let's be honest here: if the OS were designed properly, the terminal application would not need to worry about whether there is a GPU or not. The graphics libraries and drivers should take care of all that.
This is old news. Pretty usual situation for the MS world though. Take a look at alacritty.
In my experience GPU rendering does provide buttery smooth scrolling and output for tons and tons of text. Now you can `cat` that 1 GiB log file that much faster!
I hear this claim almost every day, yet it's been years since I've actually felt my normal work computers being slow. Hell, even booting an OS in an emulator running in a browser, proper software still runs blazingly fast. It's just that some people seem to use an awful lot of terrible software, but I don't seem to be one of those people.
You were probably also using several thousand dollar computers.
Try making the same claim using a thrift store laptop for under $200, which is all the computing power a whole lot of people have access to (if any). Yes, computing is fine (and a little better than 20 years ago) for software made and used by wealthy people. You have to slide down the curve a bit to discover the frustration with everything being terribly hungry for ever-increasing resources.
Isn't that directly contrary to GP's claim of "look at how ridiculously powerful our computers are and they're still so slow" though? Of course slow computers are slow.
Just the other day there was a thread comparing start times for LibreOffice, where 3 seconds was considered good. Three seconds on modern hardware is a damned eternity. How much does LibreOffice do that Word 5.0 doesn't, really? Yet the latter would probably start inside an emulator faster than the former does natively.
LibreOffice and its predecessors (OpenOffice and StarOffice) have always been considered bloated for as long as I could remember, which is nearly 20 years. While I'm very grateful for having a free, open source office suite, I've always found OpenOffice and LibreOffice to be rather clunky no matter what platform I'm using. This is especially true on macOS where a lot of the UI elements of LibreOffice don't fit in with regular Mac applications, although thankfully things have progressed from the days of having to run the even-slower NeoOffice on Mac OS X Tiger, which used the Java runtime (I remember it taking up to 15 seconds to load on my 1.83GHz Core Duo MacBook back in 2006). It might have to do with how LibreOffice decided to handle its UI elements. Writing cross-platform software is hard, and it often results in making compromises that affect conformance with specific platform UI guidelines and performance (consider how controversial Electron apps are with some users, for example).
But even with LibreOffice's clunkiness, I still use it at home. While I am partial to Apple Keynote for presentations, I prefer LibreOffice Writer and LibreOffice Calc to Apple Pages and Apple Numbers, respectively. And whenever I'm on a Linux or FreeBSD machine, LibreOffice is available for me to use, while iWork and Microsoft Office are not options.
> How much does LibreOffice do that Word 5.0 doesn't, really?
It has to support and parse a billion more formats, including the fact that MS Word file formats are in some cases literal dumps from Word's memory (IIRC).
Rendering things properly is also a difficult problem to solve. Look at how difficult it has been, for much of the 21st century, to get a webpage to display uniformly across all major browsers; then realise that you not only have to display the same across all major browsers, but also be 30 years backwards compatible.
On a modern OS (like anything in the last 25-odd years with demand paging), in many situations that code isn't even loaded into RAM until it's used.
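If you want to see demand paging for yourself, here's a quick Linux-only sketch (the file path and sizes are made up for the demo; code pages mapped from an executable behave the same way):

    # Demand paging demo: mapping a file does not make it resident;
    # pages only count against RSS once they are actually touched.
    import mmap, resource

    def rss_mib():
        with open("/proc/self/statm") as s:          # resident pages (Linux only)
            return int(s.read().split()[1]) * resource.getpagesize() / 2**20

    path = "/tmp/demand_paging_demo.bin"             # hypothetical test file
    with open(path, "wb") as f:
        for _ in range(64):                          # write 64 MiB of real data
            f.write(b"\0" * (1024 * 1024))

    with open(path, "rb") as f:
        m = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
        print("after mmap:  %.1f MiB resident" % rss_mib())  # barely changes
        total = sum(m[i] for i in range(0, len(m), 4096))    # touch every page
        print("after touch: %.1f MiB resident" % rss_mib())  # ~64 MiB higher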
While that is an additional load if you deal with those formats, the root of things like startup and runtime bloat lies elsewhere.
When do you want to incur the overhead? Do you want a user to wait once for 40 seconds, or wait 10 extra seconds for something to load/save because you're loading shit on the fly? It's a question of trade-offs.
I've always wondered what would happen to this sentiment if something like V8, Node, Electron or NW.js was already part of the [Windows, Linux, macOS] platforms and didn't need to be distributed with every app.
I was wondering about the exact same thing this morning! I came to the conclusion that every OS vendor would probably prefer to ship their preferred engine, and so we'd end up with... websites
Blame abstractions, and the fact that we expect a higher level of graphical polish from our programs and even just the desktop. Booting a Raspberry Pi straight into a terminal is pretty quick.
Just like when you have a backpack slightly too big for its purpose you tend to fill it with useless junk, so too do people add useless software when compute/memory is plentiful. Because you can.
That's orders of magnitude of other people's software running for other people on our phones and laptops. If it were just the software we wanted, we wouldn't need yet more software to fight the adware, ransomware, click-baiting, click-jacking, pop-ups, and such.
Noticing this with cloud capacity too. My employer throws 150 bucks of credit at me and I don't really use it to even 10% of its potential; my learning is the limitation, not compute power.
Consider the Cray-1 supercomputer of the early 1980s - 100 MFLOPS for US$10M. A modern Intel processor will do 100 GFLOPS (1000x faster) for a couple of hundred bucks.
> A modern Intel processor will do 100 GFLOPS (1000x faster) for a couple of hundred bucks.
Yes. A slight correction, though: 100 GFLOPS per CPU core.
Some new server Xeons can do 64 single precision FLOPS per cycle per core. Most other modern x86 CPUs can do up to 32 SP FLOPS per cycle per core.
AFAIK, RPi4's Cortex A72 will have to do with just 8 FLOPS/cycle/core. Similar to original Intel Core, Nehalem and Penryn. FPU wise (and probably otherwise as well), per clock cycle, RPi4 is not far from the famous Core 2 Quad Q6600. :-)
Or: one Cortex-A72 core @ 1.5 GHz has floating point performance comparable to a hypothetical Pentium 4 at 6 GHz.
(These figures count one FPU multiply-add as two FLOPS, but that seems to be the industry-standard way...)
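For anyone wanting to sanity-check those numbers, the usual peak formula is cores * clock * FLOPS per cycle per core (the clock speeds and core counts below are illustrative guesses, not measurements):

    # Peak single-precision GFLOPS, counting one FMA as two FLOPS
    def peak_gflops(cores, ghz, flops_per_cycle_per_core):
        return cores * ghz * flops_per_cycle_per_core

    # Illustrative assumptions:
    print(peak_gflops(1, 3.5, 32))  # one AVX2 x86 core @ 3.5 GHz        -> 112 GFLOPS
    print(peak_gflops(4, 1.5, 8))   # four Cortex-A72 cores @ 1.5 GHz    -> 48 GFLOPS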
With the right amount of drinks, I'd argue that architecturally a Cray-1 is "spiritually closer" to a modern video card than to a CPU... So you can get 4-5 teraflops for ~$500 that way...
Netboot on the Pi 4 is only going to be added in a future firmware update. Netboot is critical to the operation of our cloud, as it prevents customers from bricking the servers. Our dreams were shattered.
This is unfortunate; it's something I use a lot. Guess I'll wait to get a Pi 4.
I wonder if they could have used something like petitboot? That way they don't depend on Pi 4 firmware for netbooting. It does require an SD card, but it's read-only and tiny.
There is also iPXE: one solution would be to use a small iPXE ISO/lkrn image combined with a boot loader written to an SD card, and then bootstrap the system over the network from there.
I get that they're using the Pi in a production environment for small-sized discrete-hardware hosting, but given the nature of the Pi and its community, the tone of this article is confusing to me.
The Pi is great for things like DIY HTPCs, kiosk displays, IoT controllers, education, etc., but using it the way this service is using it seems wrong, or misused, for some reason. I feel like an offensive stance is being taken against the Pi 4 for not being client production ready, when it seems like the foundation's attitude is: if you want to go full client production, use the Compute Module.
The original ARM chip was designed to drive the Acorn Archimedes desktop computer. While the team was aiming for a power-efficient design, the power consumption accidentally turned out to be far lower than intended, which has been a big reason for ARM's continued success in many uses.
The team's design goal was 1 watt, but the chip ended up needing only a tenth of a watt. In fact on the original testbed they forgot to wire up the power lines to the chip, but the processor still worked, appearing to be running on no power at all! It turned out that it needed so little power that it could run on just the leakage from the data lines.
I've been running a K8s cluster on the 3B+ for over a year and it works, just barely (http://pidramble.com). The one major constraint, and almost always the cause of control plane outages, was the 1 GB of RAM on the master. Now that I can get a Pi with 4 GB, I think it will be a lot more resilient!
Note that the other Pis did just fine running typical workloads, as long as I kept the overall deployment memory constraints in check.
That seems to be the general complaint... running K8s on less than 1 GB of RAM seems to be unmanageable for the most part... with the new models, it might actually be a good idea to try a few clusters of these.
While I agree, and I'd like more open hardware (Olimex comes to mind here as a vendor trying to deliver), the reality is that no one truly hits that fully open source benchmark for general-purpose SBCs yet.
This shouldn't disqualify the RPi, in my opinion. We're moving incrementally towards a world of open firmware and open hardware from a totally closed world. It feels like we're making forward progress. We shouldn't punish incremental steps on this path, nor should we stop asking for more open hardware.
As far as I know, the TI-based stuff (Beagle<foo> and others) is all open with the possible exception of the accelerated 3D video driver. I can get volume pricing and delivery dates on all parts. I can get documentation on latencies and functional units.
That's a pretty big difference.
All this means that people serious about a product have to avoid a Raspberry Pi, and this is terribly unfortunate because it has such a wonderful community.
If you don't need any form of accelerated graphics then yeah, you don't want a Pi, and it's easier to get a more open board architecture. If you're making your own product tho, it's way more likely you're not just gonna buy someone else's board and jam it in there.
This feels like a dramatically different category of thing than what the Pi is, to the point where comparison starts to get less meaningful.
But frankly I'm not sure the RPi folks should give a damn. They're selling whole computers. That they don't fit into someone else's product pipeline is unsurprising.
It's like complaining you can't turn a MacBook Air into a blade. It's true, but it's also a bit wrong to expect you could have in the first place.
If you have the financial and engineering resources to deliver a quarter of a million of anything, then the Raspberry Pi is completely irrelevant. Broadcom (or one of their rivals) will very happily work with you to develop your product.
The whole point of Raspberry Pi is to provide a convenient SBC for people who don't have those engineering resources. If you're somewhere in the middle, there's the Compute Module.
This is a feature. The manufacturer has made an intentional choice to not cater to people trying to build products with Pis. They're an educational non-profit.
Complaining about this is akin to complaining that your VW bus can't tow a house.
Easy to say, but the reality is that you will be destroyed (or at least threatened) in the marketplace by your competitor who paid 1/4 what you did for an SBC.
Remember that boards like the RPi are either loss leaders for the SoC vendor, or a way to offload excess capacity. Their pricing does not necessarily reflect what an actual customer for an actual SBC product will pay, regardless of scale.
You making profits off a charity organization is not the intended purpose of the Raspberry Pi Foundation. Not getting 250k of those for your product is working as intended. You also shouldn't open a for-profit restaurant cooking stuff you get from the food bank.
The question isn't whether you can or can't, the question is whether your "real" product has a "real" controller board, for values of "real" that conform to the opinion of someone known as "rhinoceraptor" on Hacker News.
A competitor who doesn't worry about such details will have an advantage in many markets.
At one time, the denizens of a site called "Hacker News" might have been expected to grasp this idea intuitively.
You seem to have misunderstood rhinoceraptor's point given the context they've responded to.
The post they responded to said that the Raspberry Pi is unsuitable to all "real products" because they can't buy 250k of them - you might actually be arguing against their definition of "real product"? rhinoceraptor merely pointed out that that isn't the goal of the Raspberry Pi, and one thus should use something else if it doesn't fit the requirements. If that's your situation, your competitors will have the same problem. If it isn't, then "use something else" doesn't apply to you.
Have you looked at the cute pinkish red and white cases they sell, and the branding, and, well, everything? The raspberry pi is a toy. Just a really cool educational toy at that.
There are a couple of options available to you. You can use the Compute Module or contract Element14 to manufacture a customized Raspberry Pi for you.
But as others have said it is not a problem as far as the Raspberry Pi foundation is concerned because their primary mission isn't to produce a board for use in commercial products.
I don't know about your local distributor, but RS Components will sell you as many as you like; they've currently got 47,400 3B+ boards in stock, no maximum order and a price break at full box quantities (150).
Toradex modules need just one or two voltages. RPi compute modules need like... five or so? Much extra circuitry. Toradex modules also have availability guarantees. I've seen both reasons rule out RPi compute modules.