There's winden.app, a Magic Wormhole web app. It runs its own mailbox and transit relay servers, so you need to pass the right options to the wormhole CLI to interoperate with it.
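If I remember the endpoints right (double-check against Winden's docs before relying on them), something like this points the stock Python CLI at their servers:

```
# Winden runs its own mailbox and transit relay; the URLs below are from
# memory and may be stale -- verify them before use.
wormhole --relay-url wss://mailbox.mw.leastauthority.com/v1 \
         --transit-helper tcp:relay.mw.leastauthority.com:4001 \
         send somefile.txt
```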
When operations complete in 200ns instead of blocking for microseconds or milliseconds on fsync, you avoid thread pool exhaustion and connection queueing. Each synchronous operation blocks its thread until the disk confirms the write - tying up memory and connection slots, and causing tail latency spikes.
With FeOxDB's write-behind approach:
- Operations return immediately, so threads stay available
- Background workers batch writes, amortizing sync costs across many operations
- The same hardware can handle roughly 100x more concurrent requests
- Lower cloud bills, since you need fewer instances
For desktop apps, this means your KV store doesn't tie up threads that the UI needs. For servers, it means handling more users without scaling up.
The durability tradeoff makes sense once you realize that most KV workloads hold derived data that can be rebuilt. Why block threads and exhaust IOPS for fsync-level durability on data that doesn't need it?
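A minimal sketch of the write-behind pattern in Python (this is not FeOxDB's code or API, just the general shape): puts land in memory and return immediately, while a background thread drains a queue and amortizes one fsync over a whole batch.

```python
import os
import queue
import threading

class WriteBehindKV:
    """Toy write-behind store: in-memory map plus a background batch flusher."""

    def __init__(self, path, batch_size=256, flush_timeout=0.05):
        self.mem = {}                      # reads and writes served from memory
        self.log = open(path, "ab")
        self.pending = queue.Queue()
        self.batch_size = batch_size
        self.flush_timeout = flush_timeout
        threading.Thread(target=self._flusher, daemon=True).start()

    def put(self, key, value):
        self.mem[key] = value              # returns immediately, no fsync
        self.pending.put((key, value))

    def get(self, key):
        return self.mem.get(key)

    def _flusher(self):
        while True:
            batch = [self.pending.get()]   # block until there is work
            try:
                while len(batch) < self.batch_size:
                    batch.append(self.pending.get(timeout=self.flush_timeout))
            except queue.Empty:
                pass
            for key, value in batch:
                self.log.write(f"{key}={value}\n".encode())
            self.log.flush()
            os.fsync(self.log.fileno())    # one sync amortized over the batch

kv = WriteBehindKV("data.log")
kv.put("user:1", "alice")                  # returns without touching disk
print(kv.get("user:1"))
```

A crash loses whatever is still sitting in the queue, which is exactly the durability tradeoff described above.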
Is there some high-level overview of this "cascade of Ribbon filters" data structure? I understand Bloom filters, but couldn't get any intuition for this one from FB's blog post.
What does the verifiable program do, though? With a VPN, what I'm concerned about is my traffic not being sniffed and analyzed. This code seems to have something to do with keys, but it's not clear how that helps...?
This is the server-side part of things. It receives encrypted traffic from your (and other customers') devices, and routes it to the Internet.
This guarantees that your traffic isn't linked to you, and that it's mixed up with other users' traffic in a way that makes it difficult to attribute to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extensions, etc.).
> This guarantees that your traffic isn't linked to you, and that it's mixed up with other users' traffic in a way that makes it difficult to attribute to you
What would prevent you (or someone who has gained access to your infrastructure) from routing each connection to a unique instance of the server software and tracking what traffic goes in/out of each instance?
I have not inspected whether the procedure suggested for verifying the enclave contents is correct. It's beside the point anyway: what you would actually need to prove is that the decrypted traffic, while still being associated with your identity, goes ONLY into the enclave and is not also sent to, let's say, the KGB via a separate channel.
(First off, duskwuff's attack is pretty epic. I do feel like there might be a way to ensure there is exactly one giant server (not that that would scale well), but it also sounds like you didn't deal with it ;P. The rest of my comment assumes you only have a single instance.)
A packet goes into your server and a packet comes out of your server: the code managing the enclave can simply track this (and someone not even on the same server can figure it out almost perfectly just by timing analysis). What, then, are you actually mixing up in the middle?
You can add some kind of delay, probably a small one (as otherwise TCP will start to collapse), but that doesn't really help: people send a lot of packets from their one source to the same destination, so the delay you add is going to follow some distribution that I can statistics out.
You can add a ton of cover traffic to the server, but each interesting output packet can still be correlated with one input packet, and the extra input packets aren't really going to change that. I'd want to see serious statistics showing you actually obfuscated something real.
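To put a number on the timing claim, here's a toy simulation (mine, with made-up parameters): two users send packet streams through a mixer that adds a random delay to each packet, and a naive attacker who matches each output to the oldest plausible unmatched input still attributes the large majority of packets correctly.

```python
import random

random.seed(1)
DELAY_MAX = 0.005                 # mixer adds up to 5 ms of random delay

# Two users each send a stream of packets into the mixer.
inputs = []                       # (arrival_time, user)
for user in ("alice", "bob"):
    t = 0.0
    for _ in range(1000):
        t += random.expovariate(100)           # ~100 packets/sec per user
        inputs.append((t, user))
inputs.sort()

# The mixer delays each packet by a random amount before emitting it.
outputs = sorted((t + random.uniform(0, DELAY_MAX), u) for t, u in inputs)

# Attacker: attribute each output to the oldest unmatched input packet
# that could plausibly have produced it.
unmatched = list(inputs)
correct = 0
for out_t, true_user in outputs:
    candidates = [pkt for pkt in unmatched if 0 <= out_t - pkt[0] <= DELAY_MAX]
    if candidates:
        guess = candidates[0]
        unmatched.remove(guess)
        correct += guess[1] == true_user

print(f"attributed {correct} of {len(outputs)} packets correctly")
```

Cranking DELAY_MAX up improves the mixing, but only at the cost of the latency collapse mentioned above.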
The only thing you can trivially do is prove that you don't know which valid paying user is sending you the packets (which could be of value even if you did have a separate copy of the server running for every user that connected, as it still hides something from you)...
...but SGX is, frankly, a dumb way to do that, as we have ways to do it that are actually cryptographically secure -- namely blinded tokens (the mechanism used in Privacy Pass for IP reputation and in Brave for its ad rewards) -- instead of relying on SGX (which is not only, at best, something we have to trust Intel on, but also something that is routinely broken).
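For the flavor of it, here's a textbook RSA blind signature in Python -- toy key sizes, purely to show why the issuer can't link issuance to redemption. (Privacy Pass itself uses a VOPRF construction rather than RSA blinding, but the unlinkability idea is the same.)

```python
import hashlib
import math
import secrets

# Toy RSA key with tiny primes -- hopelessly insecure, illustration only.
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    """Hash a token to an integer mod n (toy full-domain hash)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- Client: create a random token and blind it before sending.
token = secrets.token_bytes(16)
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2       # blinding factor, coprime to n
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n           # the issuer only ever sees this

# --- Issuer: signs the blinded value when the user pays/authenticates.
# It learns nothing about `token` itself.
blind_sig = pow(blinded, d, n)

# --- Client: strips the blinding to get an ordinary signature on h(token).
sig = (blind_sig * pow(r, -1, n)) % n

# --- Redemption: the issuer can verify the signature, but cannot tie
# (token, sig) back to any particular signing request.
assert pow(sig, e, n) == m
print("token verifies; issuance and redemption are unlinkable")
```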
That's already a domain name and a more complicated setup without a public static IP in home environments, and in corporate environments you're now dealing with a whole approval process that might be easier to get around by... paying for GitHub LFS.
I think it is a much bigger barrier than SSH, and I have seen it be one on short-timeline projects where it's being set up for the first time: teams just end up paying GitHub's crazy per-GB costs, or building rat's nests of tunnels and VPN configurations for different repos to keep encrypted remote access, with a whole lot more trouble than just an SSH path.