Being an old Slashdot user, the first incredibly dumb thought of mine was "Imagine a beowulf cluster of these!" With a pic of Natalie Portman for the watch face, of course.
But, one fun thing I could imagine doing is using it as an incredibly portable PirateBox. Or any other use of a file server hiding in plain sight.
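A PirateBox is essentially an open Wi-Fi access point plus an anonymous file server, and the serving half really is a few lines of stdlib Python. A minimal sketch (the access-point half is OS/hardware config and out of scope; the demo binds to localhost and an OS-chosen port purely for illustration, a real box would bind to the Wi-Fi interface):

```python
import http.server
import socketserver
import threading
import urllib.request

# Serve the current directory read-only over HTTP -- the software half
# of a PirateBox. Port 0 lets the OS pick a free port.
handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Fetch our own directory listing to show the server is up.
host, port = httpd.server_address
listing = urllib.request.urlopen(f"http://{host}:{port}/").read()
print(f"Serving a {len(listing)}-byte directory listing on port {port}")

httpd.shutdown()
```

On an actual device you'd just run `serve_forever()` in the foreground and let the open AP do the "hiding in plain sight" part.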
Man, I remember when those jokes were old, 15 years ago.
Speaking of Beowulf, has there ever been an evolution of the concept? The closest I've seen since has been QNX's QNet, which allows transparent management of, and communication between, processes on the nodes of the cluster. I suppose Hadoop or even Kubernetes can be seen as the continuation of the concept?
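For a tiny taste of what QNet-style transparency means, Python's stdlib `multiprocessing.managers` lets a process on one node expose an object (here a queue) over TCP and a process on another node use it as if it were local. This is just an illustrative sketch, both "nodes" run in one process on localhost here, and the names and `authkey` are made up:

```python
import queue
import threading
from multiprocessing.managers import BaseManager

# "Server node": owns the actual queue object.
task_queue = queue.Queue()

class ServerManager(BaseManager):
    pass

class ClientManager(BaseManager):
    pass

ServerManager.register('get_queue', callable=lambda: task_queue)
mgr = ServerManager(address=('127.0.0.1', 0), authkey=b'demo')  # port 0: OS picks one
server = mgr.get_server()
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Client node": attaches to the remote queue by address alone and
# gets a proxy whose put/get calls travel over the network.
ClientManager.register('get_queue')
client = ClientManager(address=server.address, authkey=b'demo')
client.connect()
remote_q = client.get_queue()

remote_q.put('hello from another node')
print(remote_q.get())
```

Hadoop and Kubernetes operate at a much coarser grain (jobs and containers rather than processes), which is part of why they feel like a different lineage than QNet.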
All of these machines are big clusters running Linux. Mostly on Intel CPUs.
But on the other hand, the idea of using commodity hardware is kind of a thing of the past. It's mostly Xeon CPUs, not desktop processors. And it's specialized network hardware. And more and more you see dedicated compute hardware like Intel Phi and Nvidia Tesla cards.
Yeah, it's pretty intense to see these clusters in person. In our data centers, we have 40G optical interlinks per rack overhead, 100G spidering across the racks to different rooms and the main network room.
And speaking of the main network room: with the number of Brocade switches in there, it's probably more expensive than the main enterprise pod in sheer super-expensive network gear alone.
We're also behind the times in a lot of our management. 80% of our servers are bare metal, with limited automation. But we also do "NOC in a box"... many of our use cases wouldn't work cleanly with tech like Docker and Kubernetes.
That's a narrow definition of "commodity" -- the special networks cost less than the same speed of Ethernet, and Intel server chips (non-Phi) aren't that different from desktop CPUs.
If you look through the archives of the beowulf mailing list, occasionally someone makes the argument you're making, and few people agree with it.
There is no 'same speed of Ethernet' for InfiniBand, Omni-Path, Aries, etc. There is more to these networks than throughput, and the switches approach a million dollars apiece.
The rest of the non-Phi/non-Tesla hardware is pretty much off the shelf, but the interconnect is one of the two distinguishing features of a supercomputing-class cluster; the other is high-performance shared storage (which of course requires the interconnect to function).
It's a shame I feel like I need to spell this out. There's no world where high-speed interconnects are as cheap as Ethernet, nor one where it's appropriate to replace them with Ethernet. Congratulations on your successes, but they're not really relevant to the accuracy of your post.
> Speaking of Beowulf, has there ever been an evolution of the concept?
I don't know if you'd call it an "evolution of the concept", but there are people who've built "low cost" clusters of Raspberry Pi boards (anywhere from four to several hundred), not so much for practical purposes, but more for learning how to set up, use, and maintain such a system without the space or power requirements of a real one.