That depends on what you would qualify as successful :) It’s early days, but AlgoraTV is seeing a lot of success in my opinion. I am generally happy to support folks in whatever way I can.
Tigris CEO and co-founder here. Unless it is in the tens of PB range, we don’t count it as extraordinary. And even then, the goal is not to tax for bandwidth.
I am curious why this sounds dangerous. Data is still getting persisted to storage. It’s just a different architecture where compute and storage are not colocated on the same machine.
There are valid reasons for extending and redoing some parts of the API. I will give you one example. Suppose you want to extend list objects to support ordering by last modified, or to support filtering of objects by user metadata. Right now, doing this via headers is quite clunky.
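To make that clunkiness concrete, here is a rough sketch of what such a request ends up looking like. The header names `X-Example-Query` and `X-Example-Order-By` are made up for illustration, not our actual API:

```go
// ListObjectsV2 has no slot for "order by" or "filter by user metadata",
// so an extension has to ride along in custom request headers that stock
// S3 clients and SDKs know nothing about.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Standard ListObjectsV2 request shape: bucket in the path,
	// list options as query parameters.
	endpoint := "https://example-object-store.dev/my-bucket"
	q := url.Values{}
	q.Set("list-type", "2")
	q.Set("prefix", "logs/")

	req, err := http.NewRequest(http.MethodGet, endpoint+"?"+q.Encode(), nil)
	if err != nil {
		panic(err)
	}

	// The only place left for extensions like ordering or metadata
	// filtering is custom headers (hypothetical names here), which need
	// extra plumbing to pass through an off-the-shelf S3 SDK.
	req.Header.Set("X-Example-Query", `metadata.environment = "prod"`)
	req.Header.Set("X-Example-Order-By", "last-modified desc")

	fmt.Println(req.Method, req.URL.String())
	for k, v := range req.Header {
		fmt.Println(k+":", v[0])
	}
	// Actually sending this would also require SigV4 signing, omitted here.
}
```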
S3 uses the HTTP protocol for communication between client and server. The `s3://` paths make it seem like a separate protocol, but that’s just the s3 client’s way of representing the path and differentiating it from a local file path.
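A small sketch of that point: an `s3://` URI is just client-side notation, and on the wire it becomes an ordinary HTTPS request. The mapping below assumes virtual-hosted-style addressing against the public AWS endpoint, purely for illustration:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	uri := "s3://my-bucket/path/to/object.txt"

	// Split the s3:// URI into bucket and key, just like an S3 client does.
	u, err := url.Parse(uri)
	if err != nil {
		panic(err)
	}
	bucket := u.Host
	key := strings.TrimPrefix(u.Path, "/")

	// What actually goes over the network: a plain HTTP GET (plus SigV4
	// auth headers, omitted here) to an HTTPS endpoint.
	httpURL := fmt.Sprintf("https://%s.s3.amazonaws.com/%s", bucket, key)
	fmt.Println("GET", httpURL)
}
```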
Supporting an existing API provides interoperability, which is beneficial for users: if there is a better storage service, it’s easier to adopt it. However, S3 API compatibility can be a hindrance when you want to innovate and provide additional features and functionality. In our case, providing additional features [1] [2] while continuing to be S3 API compatible has forced us to rely on custom headers.
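The interoperability side is easy to see in practice. Here is a rough sketch using the stock aws-sdk-go-v2 client: because the wire API is the same, pointing it at an S3-compatible service is just an endpoint swap. The endpoint URL is a placeholder, and this relies on the SDK’s generic custom-endpoint support (`Options.BaseEndpoint`, available in recent SDK versions), nothing service-specific:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Credentials and region come from the usual environment/config chain.
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("auto"))
	if err != nil {
		log.Fatal(err)
	}

	// The only change versus talking to AWS S3: point the client at a
	// different endpoint. Requests, signing, and tooling stay the same.
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://example-s3-compatible.dev")
	})

	out, err := client.ListBuckets(ctx, &s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		log.Println(*b.Name)
	}
}
```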
Yes, we are based on FoundationDB. We pivoted from an OLTP database to a lower-level product because we found it makes more sense to innovate at the storage infra layer. There is a lot of innovation already happening at the higher-level database layer.
And we don’t charge for egress to internet.
If you are looking for an active-active multi-region read-anywhere write-anywhere object storage service then you would choose Tigris over R2 :)