I'm in the market for a new NAS, but I don't think this would have enough oomph to be very useful.
I understand that it's by design, and that I'm probably not the target demographic, but as others have pointed out, the 512MB of RAM, lackluster CPU and 2.5" drives are all pretty disappointing.
This is estimated to cost $168. For $250, you can get an HPE ProLiant MicroServer Gen8 Entry, which comes with 4GB of ECC RAM, a 2.3 GHz dual-core x86-64 CPU, dual gigabit that supports line-speed link aggregation, and takes 4 3.5" HDDs. It's upgradable to an i3 or Xeon CPU, 16GB of RAM, and can be modded to take another 2 or 3 2.5" drives.
I appreciate that it's fully open, which is definitely a massive appeal, but if I need to run another machine (with blobs) next to it to actually operate on the data, what's the point?
I think it depends on your performance needs. Most personal applications (streaming music, video or storing backups) aren't going to benefit much from having a 2-gigabit link.
At 10 watts, there is a good opportunity for some power savings. I have an Atom-based HP MicroServer and it lives in the area of 30-40 watts idle with 5 3.5" drives. Going down to 10 watts would mean saving around $40/year for me, not bad, but not enough to worry about.
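For reference, that $40/year figure checks out with some quick back-of-the-envelope math; the electricity rate is my assumption, so plug in your own:

```python
# Rough annual-savings estimate for dropping from ~30 W idle to ~10 W.
IDLE_NOW_W = 30        # Atom MicroServer with drives idling
IDLE_NEW_W = 10        # GnuBee's claimed draw
PRICE_PER_KWH = 0.23   # assumed rate in $/kWh; varies a lot by region

saved_kwh_per_year = (IDLE_NOW_W - IDLE_NEW_W) * 24 * 365 / 1000
saved_dollars = saved_kwh_per_year * PRICE_PER_KWH
print(f"{saved_kwh_per_year:.0f} kWh/year, about ${saved_dollars:.0f}/year")
```

At a cheaper rate (say $0.12/kWh) the savings drop to roughly $20/year, which is why it's "not enough to worry about" for most people.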
The biggest failure with this, in my mind, is not supporting 3.5" drives. Getting decent capacity in the 2.5" form factor is not really possible, and if you're using SSDs, you'd want a more performant machine like the one you describe. Then again, if you're using SSDs, why not just use an even smaller form factor?
I bought my current NAS after getting fed up with my Apple Time Capsule: I sorted by cheapest price and went down the list until I found something suitable. All I needed was storage attached to the network with decent throughput and OS X Time Machine support (=AFP). I ended up with a Synology DS216j.
The GnuBee matches the Synology in price and exceeds it in functionality (aside from the 2.5" drives - seems like an extremely odd choice).
I think the problem with this product is you have two categories of NAS buyers - the ones who want plug and play storage on their network, and the ones who want the best performance and all kinds of esoteric features.
The former will go for a commercial solution like Synology/QNAP, the latter will go for something like what you suggest or a home-made solution.
This product targets the former in specifications, but the latter are the ones who care about (or are probably even aware of) open source. Add to that that open source software usually has dreadful usability...
Nice. I've been looking for a low-power "bring-your-own Linux" board for a while. Classic DIY NAS builds consume outrageous amounts of power (35W to >100W) compared to commercial NAS (~10W), and existing single board computers lack GigE and/or >=2 SATA ports.
I just wish they'd gone for a standard uATX form factor for compatibility with a professional case (and 3.5" disks). It's ironic that the "open hardware" project has to come up with a proprietary form factor :/
I also wish they had ECC RAM and enough of it for ZFS, but I know that is technically impossible with the price and power constraints.
> Classic DIY NAS builds consume outrageous amounts of power (35W to >100W) compared to commercial NAS (~10W)
??
Commercial NAS (like non-RM Synology & friends) are not magic. They save a bit of power due to higher system integration, yes, but most power is saved simply by using low-end hardware.
Most x86 DIY NAS use either desktop hardware or low-end server hardware (usually the same thing, different labels) -- most of these have far more compute power than the small Atom C2000 or similar found in an x86 NAS.
If you want something similar to a commercial NAS, then use a low-end Mini-ITX board (<10 W TDP, usually four SATA ports and perhaps one PCIe) or a PC Engines board.
ARM boards, on the other hand, are rather weak in all regards: poor I/O, little memory, weak CPU cores (even if there are four of them), and quite a few of these boards also have stability issues. None have ECC.
Who said anything about magic? a) they don't use x86, b) they use boards with few unnecessary peripherals, and c) their vendors are perfectly fine with proprietary drivers (-> you can't easily reuse their HW).
Good luck doing a build with an overall 10W consumption target with x86 and what is essentially desktop HW, especially if it should be affordable (relative to a ~250€ commercial solution) and maintainable (no, no custom kernels with weird patchsets that rot away in 6 months, and having to recompile the world).
We have used PC Engines boards. They're fine as network appliances. But even the latest ones only have 1 SATA port (fine, 2 if you count the mSATA port).
I ran an E-350 (20 W TDP) based board for many years and without the idling drives (each about 0.5 W; only system SSD connected) it consumed about 7 W idle. That's well within your 10 W figure - and boards with lower TDPs are available as well (though a lower TDP does not generally imply lower idle power).
The replacement system (Sandy Bridge i3) pulls about 25 W in the same scenario, which is fine by me.
---
Now you're saying that 10 W with desktop hardware is not possible, and yeah, that'll be difficult. But it's a false dichotomy: the "10 W commercial NAS" doesn't have anywhere near that capable hardware. You can absolutely achieve these power consumption levels with similarly spec'd hardware.
I have a Netgear ReadyNAS NV+ rev. 1 (got it for free from work) with a 200 MHz SPARC processor. Revision 2 of the "same" product is based on ARM. Modern ReadyNAS desktop lineup though? Atoms and Pentiums!
The big question: what's the actual throughput? 2x1 Gbit/s should be 250 MB/s (or, after SMB overhead, 200 MB/s) - but even on a beefy QNAP TS-1635 I could only get 80 MB/s out of it, despite having a RAID5 of 10x WD Red disks.
This tiny thing will not go very far in terms of bandwidth. If you want bandwidth you will need a HW RAID controller.
You realize that (due to load-balancing algorithms) you probably won't see that (theoretical max) 250 MB/s to a single device, yes? From one device to the QNAP you'll still see at most 1 Gbit/s (usually -- there are some exceptions).
Of course I do, in my case the QNAP is connected to the switch using port bonding. Each port gets ~40 MB/s, with multiple clients or with a single client.
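The underlying reason for the single-flow cap is that bonding picks a member link per flow, typically by hashing the connection tuple, so every packet of one TCP stream rides the same 1 Gbit/s link. A toy sketch of the idea (not any real `xmit_hash_policy` implementation):

```python
# Toy illustration of flow-based link selection in link aggregation:
# the bond hashes (src, dst, ports) and picks one member link, so a
# single TCP stream can never exceed one link's bandwidth. Aggregate
# throughput only scales when there are multiple distinct flows.
LINKS = 2  # two bonded 1 Gbit/s ports

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # Hypothetical hash policy, just for illustration.
    return hash((src_ip, dst_ip, src_port, dst_port)) % LINKS

# One client flow: identical tuple -> identical link, every packet.
flow = ("10.0.0.5", "10.0.0.9", 51234, 445)
assert len({pick_link(*flow) for _ in range(1000)}) == 1

# Many flows from different clients spread across the member links.
links_used = {pick_link(f"10.0.0.{i}", "10.0.0.9", 51234, 445)
              for i in range(50)}
print(sorted(links_used))
```

Some bonding modes (e.g. round-robin) can exceed one link per flow, but they tend to cause packet reordering and are rarely used with switches, which matches the "there are some exceptions" caveat above.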
Agreed. Only accommodating 2.5" drives makes this significantly less useful as a NAS. I get that their target comparison is the ds416slim, but imo the sweet spot would have been the ds416play or even the ds916+. :\
They say that they plan to do a 3.5" version nebulously in the future, but, my god, that's such a backwards decision. There is much more market for the larger model.
Great idea. One of the points made in the post is "What kind of security is protecting your data? How can you audit that security?"
Yet, many people would say that they'd rather trust their data to Google/Facebook than self-host, use some random provider, or host it at home. Is this a commonly shared sentiment? Just wondering if this opinion holds among the more technical HN crowd (most of whom use Gmail, even here).
This doesn't seem powerful enough for media centers that transcode as you watch, like Plex/Kodi, but it would work fantastically well with the ahead-of-time-transcoding media server I created: splinter.com.au/gondola (ha - shameless plug)
Edit: No, it wouldn't run Gondola - not enough RAM, unfortunately.
I like the idea of an open source NAS, but I don't think this is the way to go. I would like to see a low-power Linux board with two SATA connections. That way you can have dual disks in a mirror configuration, and with more disks, you could easily add another Linux board as well. Something like the Banana Pi comes close, although I doubt their QA department.
Interesting project for low cost NAS. However, why is it open to air? It seems to me that it would gather quite a bit of dust and be susceptible to damage. Is this required for fanless operation?
As far as fanless micro computers go, they tend to use an enclosed design with the unit itself acting as a giant heat sink. However, with this many disks, maybe it needs an open design? Or possibly it's just for cost.
They say that fan cooled enclosures collect more dust. I can confirm. I have a ReadyNAS NV+ with a 92mm fan and it accumulates tons of dust extremely quickly.