A few questions come to mind after reading the docs (minus the currently 404ing ones):
(1) is this a zonal or regional product? ie does it replicate data across zones in a region?
(2) roughly what latencies should I expect to see? eg for the following: a 1KB, 1MB, and 1GB read and write. Any info here would be helpful.
(3) does it have close-to-open consistency? or something weaker / stronger?
(4) any plans to add gcp pub/sub integration somehow? would be great to be able to subscribe to changes like with other gcp storage products.
(5) any plans to move from NFSv3 to NFSv4?
(6) backups available or planned as a feature? eg to GCS
(7) can you share anything about how it is implemented?
Further minor versions add other features that could be useful for cloud usage: NFSv4.1 (https://tools.ietf.org/html/rfc5661) brings parallel data access, sessions, and improved delegations; NFSv4.2 (https://tools.ietf.org/html/rfc7862) adds server-side clone/copy and sparse-file support.
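A client that supports these newer minor versions can ask for one explicitly at mount time; a minimal sketch, with the server address and paths as placeholders:

    # Require NFSv4.2; the mount fails if the server doesn't offer it
    sudo mount -t nfs -o vers=4.2 10.0.0.2:/export /mnt/share
    # Show which version and options were actually negotiated
    nfsstat -m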
I'm pleasantly surprised by mere NFSv3. Google doesn't use other people's standards by choice. They probably have an internal file protocol that the Greys gave them to hand over to humanity at a future date.
I work for Google and am the product manager for Cloud Filestore. I would have responded sooner, but I was busy with announcement-related events :)
If you (or anyone reading this) want to discuss any aspects of Filestore, my email is my Hacker News login with @google.com appended.
(1) Filestore is a zonal product and provides high availability (HA) within the zone. We are considering adding regional HA, but it's not entirely clear what the use case is, as there is a cost & performance tradeoff. I'd be happy to chat with you in more depth to understand what you would like to see here.
(2) It's hard to be precise about what latencies _you_ will see, as the set of benchmarks and workloads run against NFS is so varied. Anything I say here will be true for some workload, but someone is bound to find a workload where it isn't :). So, TL;DR: YMMV; best to test your own workload when the beta launches in a few weeks (sign up to be notified at https://goo.gl/forms/Hx6XkobcwNo5DoA33).
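When you do test, something like the following fio invocation is a reasonable starting point for measuring per-operation latency (the mount point, size, and duration are placeholders; tune them to your workload):

    # Small random reads/writes with direct I/O, so the client page cache
    # doesn't mask network round-trip latency
    fio --name=lat-test --directory=/mnt/filestore --rw=randrw \
        --bs=4k --size=1G --direct=1 --ioengine=libaio --iodepth=1 \
        --runtime=60 --time_based --group_reporting

Larger sequential transfers (bigger --bs, higher --iodepth) will tell you more about throughput than latency.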
(3) We support close-to-open consistency, but it's really up to the client. See this Linux NFS FAQ for details: http://nfs.sourceforge.net/#faq_a8. TL;DR: If you're running a Linux kernel ≥ 2.4.20 and haven't mounted with the 'nocto' option, then yes, you'll see CTO.
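Concretely, the two mount variants look like this (server address and mount points are placeholders):

    # Default mount: close-to-open consistency on kernels >= 2.4.20
    sudo mount -t nfs 10.0.0.2:/share /mnt/cto
    # 'nocto' skips the revalidation on open and gives up CTO semantics
    sudo mount -t nfs -o nocto 10.0.0.2:/share /mnt/nocto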
(4) We don't have any plans for pub/sub integration on the roadmap, but I'd love to talk to you about the use case (see info about my email addr above).
(5) Yes, we have NFSv4 support on the roadmap. We launched with NFSv3 because it's still widely used, and in many cases customers won't see any appreciable performance delta from NFSv4. That said, we agree that it is very important: NFSv4 can often help with metadata-heavy workloads, and it has a more extensive authentication and authorization model which some workloads require. Ultimately we made a time-to-market tradeoff.
(6) For backups, we support any of the standard commercial backup software that's certified against GCP and can back up NFSv3 shares. We don't have a native backup solution planned, but we do have snapshots on the roadmap, which in some cases are sufficient.
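For something lightweight in the meantime, a plain copy to GCS covers many cases; a rough sketch, with the mount point and bucket name as placeholders:

    # One-way sync of the share's contents into a dated GCS prefix.
    # Note: POSIX metadata (owners, modes) is not preserved.
    gsutil -m rsync -r /mnt/filestore gs://my-backup-bucket/filestore-$(date +%F)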
(7) As to implementation: sorry, no, I can't share details there.
And to answer a few more questions from the nested comments so this is all in one place:
* Snapshots are on the near-term roadmap and are a very high priority for us to support.
* SMB, extended attribute, and quota support are all on the roadmap and, like NFSv4, are high priority.
Unfortunately I can't be more precise about when to expect these features.
Thanks