Am I right in saying that the complaints with btrfs in CoreOS are specifically around its use in conjunction with Docker?
(Interested as I'm thinking about building a homebrew NAS/general-purpose server w/ btrfs; there's a lot of outdated info on btrfs, but I was getting the impression that it's now a pretty stable and usable filesystem)
I can say that ZFS has worked great for me on BSD-based home servers. I haven't used ZFS on Linux yet; it's possible to do so, it's just unpopular, partly for licensing reasons. I suspect the type of RAID you run may have greater consequences than the file system you pick, particularly if your distro is already designed for serving files on the file system you choose. Oh, and working out all the AFP/Samba bits is fun, because there's always something that surprises you.
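For what it's worth, a minimal smb.conf share definition for a ZFS dataset might look something like this (the share name, dataset mountpoint, and user are placeholders, not anything specific to this setup):

    [media]
        path = /tank/media        ; mountpoint of the ZFS dataset
        browseable = yes
        read only = no
        valid users = alice       ; restrict access to named users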
I got severely burned by ZFS on Linux running in AWS. Heavy NFS load (ZFS's NFS, not the Linux kernel's NFS) caused a kernel panic, pretty reproducibly. This was on Ubuntu 12.04 with the official ZoL PPA sources, so YMMV.
For managing volumes, ZFS on Linux works great. But for NFS, I'd definitely go with a separate NFS implementation if I wanted to use it heavily. The primary developers/users of the ZFS on Linux port are mainly using it for highly-available single-machine volume management, exporting volumes to Gluster or other clustered filesystems for use in massive HPC clusters like those at Lawrence Livermore National Labs.
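If you do want ZFS for volume management but a separate NFS server on top, a rough sketch of one way to wire that up (pool/dataset name and subnet are placeholders): turn off ZFS's built-in sharing and export the mountpoint through the kernel NFS server instead:

    # stop ZFS from managing the export itself
    zfs set sharenfs=off tank/exports

    # /etc/exports -- let the kernel NFS server handle the share
    /tank/exports  10.0.0.0/24(rw,sync,no_subtree_check)

    # pick up the new exports
    exportfs -ra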
ZFS is meant for managing local drives, and to make it performant you need to configure SSD partitions to act as an L2ARC cache. The online documentation is pretty good, so after going through the docs it should be pretty clear how to set up ZFS properly for your use case.
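Adding an L2ARC is just attaching a cache vdev to the pool, e.g. (pool name and device path are placeholders):

    # add an SSD partition to the pool as a read cache (L2ARC)
    zpool add tank cache /dev/nvme0n1p1

    # confirm the cache device shows up
    zpool status tank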
But it sounds like you're using EBS volumes? If so, I'm not sure why you'd want to use ZFS. Last I checked, ext2 or xfs was the way to go with EBS volumes on AWS. AWS does so much in the background to ensure the reliability/availability of EBS volumes that adding another layer isn't worth it, IMO, and I've seen similar kernel panics running other complicated volume managers on top of EBS.
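For comparison, the plain-filesystem route on an EBS volume is just (device name is a placeholder; check lsblk for yours):

    # format an attached EBS volume as xfs and mount it
    mkfs.xfs /dev/xvdf
    mkdir -p /data
    mount /dev/xvdf /data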
One thing you might want to check is whether you're setting a limit in the driver for the amount of memory ZFS uses for caching. By default it'll use a LOT of memory, so I usually just cap it at 2GB and don't see any issues.
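On ZFS on Linux that cap is the zfs_arc_max module parameter (value in bytes); a minimal sketch for a 2GB limit:

    # persist across reboots
    echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf

    # or apply to the running module immediately
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max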