I cut my teeth on Sun SPARCstations. They were my first serious computer love, introduced to me at Rutgers University a good *%#+ years ago. All I had owned at home was a couple of Commodore 64s and a 286 my dad had for work.
The machines were buried in the basement of a building on campus, with huge black and white CRTs (21" or bigger) and laser mice that required a mirror-polished mouse pad. A nice upper-class student took pity on me and introduced me to UNIX after I started bothering him to share his .xinitrc file with me (his prompt was decidedly cooler than mine). Back then, Pine was how we got email, so everyone got to use a terminal a little bit, on Mac, Windows, or UNIX. That was neat, because even the least-interested students were semi-comfortable sitting down at one of the UNIX machines.
Sorry, nothing to do with the article; it just sparked a warm memory. They're why I use a Mac now - it's the prettiest UNIX (IMHO).
Every now and again I'll open Pine just for the nostalgia of it.
Let's see it vs a SPARC at the same clock rate, and/or SMP with the same number of cores. Still not apples to apples, but closer. Maybe some UltraSPARC IIIs or something.
Throw in some Alphas for good measure. Then put the critical routines in PALcode. See what happens. ;)
IMHO 'the newer Suns' (as in the pizza box things that are now two decades old) always felt slow compared to other boxes I was using at the time. The OpenLook window manager was also a bit naff compared to what else was out there.
I preferred Silicon Graphics hardware back then, the user interface was as cool and responsive as it gets rather than the juddery OpenLook effort the Sun machines had.
I stayed working with SGI machines when the PCs started to get really fast - GHz clock speeds rather than the few hundred MHz that RISC machines did back then. I felt RISC should have had the higher clock speed, but it wasn't to be, and SGI and other workstation makers died off shortly thereafter.
During those end times I spoke to an SGI engineer who tried to persuade me that it really wasn't about clock speed, which was true on SGI, as there was a lot you could do with the many hardware boards a typical SGI machine came with. For most actual real work the graphics pipeline would be maxed out while the CPUs (of which there could be many) did very little.
So my Raspberry Pi vs... would be Raspberry Pi vs SGI Onyx 'InfiniteReality' desk-side supercomputer from some time in the mid-1990s. These usually cost real money and were tricked out - I had one with 256MB of RAM and 64MB of graphics RAM. Nobody had that much memory back then!!! Yet would anyone want the 256MB Pi when there is the 512MB version?
"I preferred Silicon Graphics hardware back then, the user interface was as cool and responsive as it gets rather than the juddery OpenLook effort the Sun machines had."
Bro, Silicon Graphics was the shit! I was studying games, AI, and supercomputing back then. Their Octane workstations with dual 200MHz MIPS plus innovative graphics were totally smoking near-GHz Intel rigs. The NUMAlinks on Origin were giving 100+GB of RAM, dozens of CPUs, and GB/s bandwidth at microsecond latencies. They threw a room full of Onyx2 NUMAs together to blow our minds with [the graphics of] Final Fantasy. They later put FPGAs ("RASC") on NUMAlink in Altix machines to show what FPGAs could really do as accelerators if the PCI bottleneck wasn't there. Anything those people did was mind-blowing for its time.
Their legacy lives on in XFS, the NUMA extensions to Linux, the fact that mainstream graphics adopted a similar interconnect strategy, and that Intel's about to one-up their FPGA scheme with the Altera acquisition. Not a bad list. I want them to spin off an older NUMAlink chip as a commodity for building x86 NUMAs. I still want me an MPP system with single-system image and hundreds of GB of RAM on the cheap. It's 2015 but my 1990's dream still hasn't come true. Getting closer.
Btw, here are the two architectures they used, if you're interested in the technical aspects. It's so out of date that a FOSS GPU should be able to copy it with low patent risk. Improving it would make a nice MS or PhD project for academics.
"Yet would anyone want the 256MB Pi when there is the 512MB version?"
Based on what I've read, anyone trying to give them muscle should go for the 512MB, as the 256MB is giving many commenters hell. Just too little memory. However, people doing things that aren't memory-intensive, especially embedded, might be fine with a fraction of the Pi's resources. Remember that 8-bit CPUs still sell around $10 billion a year at prices under a few bucks each. That's a lot of applications and units.
Meanwhile, anyone wanting an SGI-style experience today will need a combo of high-end CPU's, GPU's, and FPGA's. Pico Computing's desktops seem to fit the bill. I'm saving up for one. :)
I wondered about that too. In some of those benchmarks, the Pi was only 4x faster or so. But the 700 MHz Pi clock is about 12x faster than the 60 MHz SPARC. So in a same clock rate test (e.g. a theoretical 700 MHz SuperSparc), it seems like the SPARC should be faster.
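A quick back-of-the-envelope check of that intuition, using only the rough figures quoted in this thread (700 MHz Pi, 60 MHz SuperSPARC, ~4x observed speedup - these are assumptions from the comments, not measurements):

```python
# Rough numbers from the thread above, not measured values.
pi_clock_mhz = 700      # Raspberry Pi
sparc_clock_mhz = 60    # SuperSPARC in the SS20
observed_speedup = 4    # "the Pi was only 4x faster or so"

clock_ratio = pi_clock_mhz / sparc_clock_mhz
# If the Pi needs ~11.7x the clock to go only ~4x faster, the SPARC is
# getting ~2.9x more useful work done per cycle on those benchmarks.
sparc_per_clock = clock_ratio / observed_speedup

print(f"clock ratio: {clock_ratio:.1f}x")       # clock ratio: 11.7x
print(f"SPARC per-clock: {sparc_per_clock:.1f}x")  # SPARC per-clock: 2.9x
```

Which is consistent with the point: at an equal clock, the old SuperSPARC would plausibly come out ahead per cycle, even if the Pi wins outright today.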
I have an SMP SparcStation 20 in the closet (either 2- or 4-way, can't remember). I should dig it out and run those benchmarks myself.
If you can, I'd love to see those benchmarks not just on SunOS 4.1 or Solaris, but on something like NetBSD 7.0 using a recent GCC or clang. It might make for a good comparison since the hardware is close to what the benchmark was originally run on.
And if you're in the Bay Area and looking for a home for that system… I looked into getting one as I was writing, and while Weird Stuff and MemoryX have OK prices on CPU boards and memory, I was seeing prices of about $300 for a working SS20!
That's the same range as eBay. Many of the old RISC machines hold their value well. The SunBlade workstations still go for around $1000 if loaded. The AlphaServers are holding up well across the board:
A lot goes into processor performance besides clocks. Pipelines, caches, memory bandwidth, timing of individual instructions... even the version of the same compiler. So, go ahead and give the junker a spin to see how it compares. Might surprise you.
I'd be willing to bet that even if you tilt the contest in the most favorable way toward the Sparc-20, the Pi would come out on top on a cost/performance basis.
Whoa whoa, we're talking performance, not cost. Want to compare cost and performance? Use a Leon4 quad-core SoC on the same process node so both are embedded. Otherwise, we're comparing an embedded ARM on an advanced node to SPARC servers on old nodes.
It sort of does that. I'd compare the early Acorns to the ARMs in Raspberry Pis. That would tell me how ARM has changed over 20 years. Optionally, comparisons to older RISC and CISC architectures, especially i386.
Especially the Leon3 and Leon4 embedded ones. Throw in the cost of the processor license: up to $15 mil with royalties for the ARM ISA vs $99 for the SPARC ISA, with an implementation costing a fraction of ARM's.
You have to license it to make the SoC. They only get those cheap in huge volumes. So, what people are actually saying is that a huge-volume, low-margin ARM has a better price per unit than a low-volume, premium-market alternative. No kidding lol. Even in similar markets, low licensing costs are why MIPS and PPC chips continue to sell at low prices despite lower volume. And SPARC has no such licensing cost if you do the core yourself, given it's a mostly-open ISA.
So, the comparison is apples to oranges in many ways. It would have to be on the same process node, at the same clock, with power management, in high volume, and on an embedded board. Who knows what that comparison would show. This one is meaningless because, outside the Leons, SPARC development focuses on servers with high performance for threaded and mainframe loads, at inflated prices. Case in point:
I read these and I'm always "but, but, wait!" And I recognize being on the other side of an argument I was on when the first microcomputers came out, where people with "real" computers would tell me all the reasons my microcomputer wasn't as good as a real computer and I was telling them it was good enough for what I wanted it to do.
So for starters, a lot of people are trying to push around more, larger pixels.
Some years ago I did some experiments with the OLPC XO essentially analysing this exact problem: modern software on a platform weaker than current entry-level hardware.
Analysing the overhead of printing text to a terminal, I took a process that output a great deal of text - redirected to /dev/null it took 1.5 seconds - and tried it in a couple of different terminals:
7 seconds using bitmapped fonts
30 seconds using xft
6 minutes 34 seconds using vte (Pango/Cairo)
That's a lot of overhead. You get unicode, scalable fonts and extras but it all comes with a cost. Most of that cost isn't intrinsic to the additional capabilities, but just an accumulation of minor inefficiencies because the result was ok on the machines we were using at the time.
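For anyone who wants to repeat that kind of measurement, here's a minimal sketch (the line count and width are arbitrary choices, not from the original experiment). Run it inside each terminal emulator you want to compare; the /dev/null baseline isolates the cost of producing the text, so the difference is the terminal's rendering overhead:

```python
import sys
import time

def time_write(lines, dest):
    """Time writing `lines` 80-column lines of text to the stream `dest`."""
    payload = ("x" * 80 + "\n") * lines
    t0 = time.perf_counter()
    dest.write(payload)
    dest.flush()
    return time.perf_counter() - t0

if __name__ == "__main__":
    n = 100_000
    # Baseline: raw cost of producing the text, no rendering.
    with open("/dev/null", "w") as null:
        base = time_write(n, null)
    # Same volume to the terminal; the difference is rendering overhead.
    term = time_write(n, sys.stdout)
    print(f"\n/dev/null: {base:.2f}s  terminal: {term:.2f}s", file=sys.stderr)
```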
Because the RPi doesn't have much actual working acceleration for X, it's all CPU-driven for drawing and display. That's one reason for hoping the VC4 stuff gets into the 4.5 kernel; then the KMS driver and Gallium3D should be workable on the Pis, allowing for a much saner X experience.
I've never used X11 on the Pi, but it seems like a lot of the memory is allocated as disk cache in the default distributions. Plus, if you use a crappy/huge DM/WM it will feel sluggish even on a fast computer.
Lack of memory seems to be the main issue I've hit with the Pi. Having the Graphics card use the system memory gives you the worst of both worlds with so little memory to begin with.
The problem is exacerbated on the 256MB models. IMHO those models have too little memory and are somewhat crippled right out of the box. Only the 512MB models are even somewhat reasonable.
Well, X11 in general was memory bandwidth hungry, but if you drop down to 720p (closer to the 1152 x 900 size) you do a bit better. Also, if you could get the Pi to run with a black and white frame buffer it might help.
I used to have a SPARCstation SLC, monitor and CPU in one. It was plenty fast. Apparently it was 20 MHz compared to the SPARCstation 20's 60 MHz. Now I have a Pi lying around doing nothing, but it's going to control a Nixie tube setup for some counters. Times change.
Pretty wild that this was top end in 1994. I wonder what the benchmark versus a 100MHz Pentium, also available in 1994 (I had one at the time) would look like.
You can probably see exactly that kind of benchmark in BYTE issues from 1994-1997 at archive.org, I think they did a couple benchmark roundups of PC UNIX back then.
Heheh yeah I'll look smart with my USB keyboard ;-)
On the other hand, for the time the SPARCs were pretty advanced: high-speed SCSI (whoohoo ;-)), and they had the famous RS-422 ports that were also on the Macs and allowed (synchronous) 2Mb/s.
I had a Sun 3/80 back then in 1991 or so, given to me by a Sun rep when the SPARC came out; it was already fantastic hardware to have in my bedroom! I'm still hooked on the Sun hi-dpi console font (still present in Linux).
I was more of an SGI guy, having cut my Unix teeth on RISC/os on a MIPS Magnum pizza box. Still got my O2; wish I still had my stack of Indys. But I can nevertheless understand the love for these old machines. What a pity those guys didn't compete for the pocket, eh?
The SS20 had serial and SCSI ports, numerous multimedia ports, video expansion from 8-bit 1152x900 to multi-monitor accelerated 3D 24-bit color, and hundreds of different SBus expansion cards. And from a processor perspective, you could (technically) start with a single 33MHz processor and expand to 4 x 200MHz processors (from Ross) without dumping any investment.
Very few computers had as long a useful lifespan as the SS20, and if they weren't so power hungry they'd likely still be in production all sorts of places.
That said, the SS20 was from the before time when 10Mb ethernet was standard. There were 100Mb SBus add-on cards of various configs which worked very well, and I recall SX and LX fiber 1Gb cards that I never used. I had a lot of experience with OC-3 (155Mb) ATM on the SS-20 (nice setup), and there was an OC-12 card (622Mb) that pretty much crushed all the CPU you could throw at it just to move packets. Those were really intended for the 600-series servers, so not surprising.