Think of it like an EC2 instance with ephemeral storage. You need an external storage service provided by the host (like EBS and S3 are for EC2), which the instances either talk to over the network (like S3), or which the host mounts into the container (like EBS). Both are possible with the current Docker runtime.
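Roughly, the two options look like this with today's docker CLI (the image name, paths, and STORAGE_URL variable are all made up, just to illustrate):

    # Host-mounted storage (the EBS-like option): the host exposes
    # a directory inside the container at run time.
    docker run -v /mnt/block-storage:/var/lib/app/data myapp

    # Network storage (the S3-like option): nothing Docker-specific
    # needed; the app just talks to the service over the network.
    docker run -e STORAGE_URL=https://storage.example.internal myapp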
Yes. And we all know how well EBS works for consistent database performance. That's why I said "... would rule out running databases in containers ..."
I don't think so, but bind mounts make that irrelevant for most uses. If you need bare-metal device access, you have very specialised needs that apply to a tiny fringe of users (as an example, we can reach 1 GB/s reads from our SSD RAID arrays on some of our containerised database servers without resorting to raw device access).
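For the curious, the setup is roughly this (paths and password are placeholders; /mnt/ssd-raid is wherever the host mounts the array):

    # Put the database's data directory on the host's SSD RAID via a
    # bind mount; the container does plain file I/O on the host
    # filesystem, so no raw device access is needed.
    docker run -d --name db \
        -v /mnt/ssd-raid/pgdata:/var/lib/postgresql/data \
        -e POSTGRES_PASSWORD=changeme \
        postgres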
I don't know if they meant GB or Gb, but for the record, I didn't see any difference in disk performance between native and containerized apps. In my case, that was ~900 MB/s sequential reads from a RAID10 array of eight old 7,200 RPM HDDs. This is not surprising, as the code path for block I/O is the same for native and containerized processes (at least when the data lives on a bind mount or volume rather than the layered image filesystem).
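If anyone wants to reproduce this, a crude version of the comparison looks like the following (the test file path is made up; drop the page cache between runs so you measure the disks, not RAM):

    # Native read of a large test file.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    dd if=/data/testfile of=/dev/null bs=1M

    # Same file, same disks, read from inside a container via a bind mount.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    docker run --rm -v /data:/data alpine dd if=/data/testfile of=/dev/null bs=1M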
No, I don't think so. It's prevented by kernel namespaces and the default privileges of Docker's LXC driver. That doesn't mean you can't bind-mount a directory from your host into your container, though; there's also the concept of volumes.
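Concretely, the two look like this (names are placeholders, and docker volume create is newer than the LXC-driver days this thread dates from):

    # Bind mount: expose an existing host directory inside the container.
    docker run -v /srv/appdata:/data myimage

    # Named volume: storage created and managed by Docker itself.
    docker volume create appdata
    docker run -v appdata:/data myimage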