Hacker News | evbogue's comments

The signed databases can be decentralized, but the index is mostly controlled by Bluesky, and most third-party apps depend on Bluesky API calls. Those API calls do not currently apply the tougher filters that the Bluesky social-app applies to its feeds.

Does this article mention anywhere that Dan is a former employee of Bluesky and I just missed that disclosure?


wait what? where does he work now?


I was there as the assistant head of river kayak safety, and it was a historic moment. if you were watching, i was the kayaker wrestling the malfunctioning buoybot back into place under state street. why they couldn't use old-fashioned buoys instead of ai buoys is beyond me.

i've also accidentally fallen in the river a couple of times over the past two years, and i will confirm that the water is safe and getting colder every day this time of year.


> i've also accidentally fallen in the river a couple of times over the past two years, and i will confirm that the water is safe

I believe for open water swimming, the definition of 'safety' they're aiming for includes checking the water for human faeces and bacteria like E. coli.

A one-in-250 chance of getting diarrhoea the next day is no problem for clumsy drunks, who'll just be glad to make it out - but for a health/fitness event it's undesirable.


Huh, what is a buoybot? First time hearing about that.

https://i0.wp.com/www.chicagotribune.com/wp-content/uploads/...

In this image from the Chicago Tribune you can see two of the bots. one is orange, the other yellow. they should have been in a straight line since they were being used to guide the swimmers down the course.

i was told they position via gps and their gps just didn't work downtown.


GPS can be fiddly when you're in a pit or surrounded by lots of tall things that block RF or make the signals bounce around. Clearly the event needed a local GPS augmentation signal :)

Also known as the "kiddy pool" that SpaceX slaps cameras and a Starlink dish on top of to record landings in the middle of the ocean.

I was going to say that I'm building apps in Deno, outside the npm ecosystem -- the issue is that Deno supports npm deps these days, so you risk someone in your supply chain pulling from npm and suddenly you're exposed again.

I just try to stick with the Deno Standard Library since that is self-contained.
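For what it's worth, one way that can look in practice is a `deno.json` import map that pins only JSR Standard Library modules (the module names and versions below are illustrative), so any `npm:` specifier creeping into the dependency graph is easy to spot in review:

```jsonc
// deno.json -- illustrative import map pinning only Standard Library
// modules from JSR; an `npm:` specifier showing up here (or in a
// transitive dependency) is the supply-chain exposure described above.
{
  "imports": {
    "@std/path": "jsr:@std/path@^1.0.0",
    "@std/assert": "jsr:@std/assert@^1.0.0"
  }
}
```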


I immediately thought of the Oakland loft disaster in relation to this startup name -- but I'll admit I've been an on-and-off-again loft occupant (not the Ghost Ship loft, but other lofts) and East Bay/SF resident, so that's probably skewing my view of this naming choice.


a colleague of mine recently insisted that bittorrent be included in web3 techs since it uses hashing. if we toss torrents under the title of web3 then it's been a huge success.

atproto, however, could learn a lot from torrents and all of the other protocols since then. see, for example, their recent push to centralize bookmarks.


> for example, their recent push to centralize bookmarks.

It's worth noting that the current bookmarks implementation is "temporary" and will be moved to the PDS as soon as private data repos are supported on the PDS.

The approach they are taking (and recommending to other devs in the space) is: "develop your private data as if it were stored on the PDS, but store it on the appview; then migrate to the PDS once private data launches." If you maintain the same lexicon/schema under the hood, you can just copy the data to migrate it.

Private data is unfortunately really hard to do right, so this "start workshopping it and get ready for us to roll support out" approach seems like the best temporary solution.
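To illustrate the idea with a hypothetical sketch (this is not Bluesky's actual API -- the lexicon id and the store interfaces here are made up): if the appview-hosted records and the eventual PDS-hosted records share one lexicon, "migration" reduces to copying records across stores.

```typescript
// Hypothetical sketch: keep records under one lexicon so an
// appview-to-PDS migration is a straight copy. The lexicon id
// and stores below are illustrative, not Bluesky's actual API.
type BookmarkRecord = {
  $type: "app.example.bookmark"; // hypothetical lexicon id
  subject: string;               // at:// URI of the bookmarked post
  createdAt: string;
};

interface RecordStore {
  put(rkey: string, record: BookmarkRecord): void;
  list(): Array<[string, BookmarkRecord]>;
}

class MemoryStore implements RecordStore {
  private records = new Map<string, BookmarkRecord>();
  put(rkey: string, record: BookmarkRecord) { this.records.set(rkey, record); }
  list() { return [...this.records.entries()]; }
}

// Because both sides speak the same lexicon, migrating is just
// copying each record, keyed by the same rkey, to the new store.
function migrate(from: RecordStore, to: RecordStore): number {
  let n = 0;
  for (const [rkey, record] of from.list()) { to.put(rkey, record); n++; }
  return n;
}
```

The point is that no schema translation step exists: the record bytes written to the appview are the record bytes written to the PDS later.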


> a colleague of mine recently insisted that bittorrent be included in web3 techs since it uses hashing

Related pet-peeve: Folks who say "there's still great promise in private blockchains." This is equivalent to saying that self-balancing Segway-style devices can still become the dominant mode of transportation if we juuuuust make them bigger, enclosed, and add another pair of wheels so that they don't have to self-balance anymore.


Yea, the bookmarks thing is interesting, and wearing my conspiracy theory hat (it's not the red one), I wonder if it's not an experiment to see what people will tolerate or how they will respond. It could equally be attributed to an early win for the new product manager, finally delivering a long-requested feature in a POC form factor, while private data gets figured out.

I'm highly involved in the private data work, and we'll have better bookmarks in the long run.

I like that atproto sits in the middle of web 2 & 3: ideas from both, without being beholden to either.


I'm pretty sure the private data model is going to be clearly defined by EOY, between the community group work and bluesky internal work. We also made sure that bookmarks will migrate cleanly to protocol private data when it lands, so no need for the tinfoil hat. Your other explanation is pretty close to the mark.

EDIT: also fwiw, I got my start in 2012 working on ssb and then in 2013 on dat, both of which were conceptualized as bittorrent variants. Atproto has a pretty clear lineage from that p2p work. You just can't do large scale social computing on user devices.


yes, and we are using computers in our pockets now that are way more powerful than the ones we had when we were working on ssb. local indexes are possible, you and dom showed that back then.

ssb had private bookmarks, let's dig into how those were enabled


agreed! we should continue this discussion over on atproto


one thing I would like in atproto is some form of smart contracts

(transactional semantics over accounts and xrpc calls)


I'd also wonder where this shared encryption key for message "backups" is stored. If it's available on all of my devices, I suspect it would be available on other devices as well?


The article says it is generated on your device and they don't have a copy. Sounds like a public-private keypair where you are responsible for managing the private key.


got it. doesn't Signal already have on-device keys with a session ratchet? why not back those keys up so one can decrypt the entire history on any device?


afaik the key material is regenerated for every message. new keys can be derived for each subsequent message you send, but only until you get a reply; then a new key exchange takes place. And the key material for message m1 cannot derive keys for the messages that came before m1. If the old key material gets properly deleted, then there is only a very small window of compromise. Backing up those keys would defeat the purpose of the ratchet.
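For illustration, here is a minimal symmetric-key ratchet sketch (not Signal's actual implementation -- Signal's Double Ratchet also mixes in Diffie-Hellman exchanges on replies, which this omits). Because HMAC is one-way, holding chain key n lets you derive the keys that come after it but never the ones before it, which is the small window of compromise described above.

```typescript
import { createHmac } from "node:crypto";

// Minimal symmetric-key (hash) ratchet sketch, not Signal's code:
// each step derives a per-message key and advances the chain key.
function ratchetStep(chainKey: Buffer): { messageKey: Buffer; nextChainKey: Buffer } {
  const messageKey = createHmac("sha256", chainKey).update(Buffer.from([0x01])).digest();
  const nextChainKey = createHmac("sha256", chainKey).update(Buffer.from([0x02])).digest();
  return { messageKey, nextChainKey };
}

// Walk the chain forward n steps from an initial chain key.
function deriveMessageKeys(initial: Buffer, n: number): Buffer[] {
  const keys: Buffer[] = [];
  let ck = initial;
  for (let i = 0; i < n; i++) {
    const { messageKey, nextChainKey } = ratchetStep(ck);
    keys.push(messageKey);
    ck = nextChainKey; // a real implementation erases the old ck here
  }
  return keys;
}
```

Backing up the per-message keys (instead of deleting them) is exactly what turns this forward-secret chain back into a pile of long-lived secrets.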


yes, agreed, and isn't this feature re-encrypting all of the material without a ratchet or asymmetrical boxing?


Yes, it undoes all of the security features of Signal's encryption protocol.


I mean, it says so right in the blog post:

> At the core of secure backups is a 64-character recovery key that is generated on your device. This key is yours and yours alone; it is never shared with Signal’s servers. Your recovery key is the only way to “unlock” your backup when you need to restore access to your messages. Losing it means losing access to your backup permanently, and Signal cannot help you recover it. You can generate a new key if you choose. We recommend storing this key securely (writing it down in a notebook or a secure password manager, for example).


i missed that paragraph, thanks for pointing it out. i wonder what algorithm they're using here, and if we could use third party tooling to decrypt these messages on a local computer? it might be a pathway to some cool experimental third-party Signal apps
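The post doesn't name the algorithm, so here is only a generic sketch of how a recovery-key scheme might look -- the alphabet, salt, info string, and KDF choice are all assumptions, not Signal's actual design: generate 64 random characters on-device, then stretch them into a fixed-size encryption key that third-party tooling could in principle reproduce from the same recovery key.

```typescript
import { randomBytes, hkdfSync } from "node:crypto";

// Generic sketch only: the blog post doesn't publish Signal's exact
// scheme, so every constant here is an assumption for illustration.
const ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789";

// 64 characters from a 36-symbol alphabet (~330 bits of entropy).
// The modulo bias here is acceptable for a sketch, not for production.
function generateRecoveryKey(): string {
  const bytes = randomBytes(64);
  let key = "";
  for (const b of bytes) key += ALPHABET[b % ALPHABET.length];
  return key;
}

// Stretch the human-readable key into a 32-byte encryption key via
// HKDF; anyone holding the same recovery key derives the same key.
function deriveBackupKey(recoveryKey: string): Buffer {
  return Buffer.from(
    hkdfSync("sha256", Buffer.from(recoveryKey), Buffer.alloc(32), Buffer.from("backup-key"), 32),
  );
}
```

If Signal's real derivation is similarly deterministic from the recovery key, a local decryption tool would only need the key, the KDF parameters, and the backup format.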


I use Trystero as one of the transfer methods on wiredove. it's super cool. it doesn't always work, because punching through NAT is a pain, but when it does work it's awesome. Trystero is also cool if you want to hook up a webcam or a video meeting with a minimal amount of code.


but Bluesky runs the API that all of these tools rely on


No it does not. That is the trick.

The client/frontend calls out to a set of XRPC endpoints on the user's PDS. The user can use any PDS they want, but yes, most users are on the Bluesky "mushroom" PDSes. There are plenty of open-enrollment PDSes nowadays if you care to look around and want to switch away.

The appview has no ability to interact with the user directly, so if you use any non-Bluesky PDS and a non-Bluesky client/frontend (both relatively trivial to do), then the appview is basically a (near) stateless view of the network, which you can substitute with any appview you want (the client can choose the appview to proxy to with an HTTP header) without ever touching Bluesky the company.

And of course there are multiple appview hosts, as well as relay hosts (which the appviews depend on, but the user/client does not).

There are plenty of ways to go about using Bluesky without you or the services you use ever touching Bluesky the company's infrastructure.
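As I understand it, the HTTP header in question is `atproto-proxy`: the client tells the PDS which service (identified by a DID plus a service fragment) should handle the forwarded XRPC call. A small sketch, with illustrative hostnames and DIDs:

```typescript
// Sketch of header-based appview selection in atproto: the client
// sends the XRPC request to its own PDS and names the target service
// in the `atproto-proxy` header. Hostnames/DIDs are illustrative.
function xrpcRequest(pdsHost: string, nsid: string, appviewDid: string): Request {
  return new Request(`https://${pdsHost}/xrpc/${nsid}`, {
    headers: {
      // e.g. "did:web:api.bsky.app#bsky_appview" for the default appview
      "atproto-proxy": appviewDid,
    },
  });
}

// Point the same client at a different appview by changing one string:
const req = xrpcRequest(
  "pds.example.com",
  "app.bsky.feed.getTimeline",
  "did:web:appview.example.com#bsky_appview",
);
```

Swapping appviews is then a client-side configuration change, not a migration.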


so basically you can run a cache for them and they have the final say on all accounts/ids because nobody will see any federated content anyway.

you've just made the grandparent comment's point, with a lot more words.


No? I'm not sure how you got that out of anything I said.


Where does the firehose stream originate? From individual PDSes, or from the Bluesky relay that aggregates their repo events?


How do I do this then?


Everything but the relay (but you'd realistically only need the PDS): https://alice.bsky.sh/post/3laega7icmi2q

The relay: https://whtwnd.com/bnewbold.net/3lo7a2a4qxg2l


Edit: I mistook the bsky.sh domain, my bad. Can't get strikethrough to work for the life of me. I give up.

~~Bluesky blocked in Mississippi, try to work around it, only for the resource that tells you how to do this to be hosted on Bluesky, which is blocked. That's... suboptimal~~.

I can't help but feel like Bluesky is just three corporations in a trenchcoat pretending to be an open federated ecosystem.


Bluesky is just one corporation in a trenchcoat.


I'm so confused. Isn't this the same administration that launched Operation Warp Speed to develop and distribute these vaccines back in 2020?


Trump never knew what to do: he was aware the manipulable antivax crowd was in his base, but he also thought the vaccine might be released early enough to help him win the election, so he didn't want to come out against it either.


The first trump administration still had a mostly intact bureaucratic apparatus with a cadre of career technocrats in positions of authority and influence. The second is oriented very differently.

Also remember that the politicization of masking took a while to spin up, and didn't fuse with and evolve into the current antivax movement until later, when the vaccine was actually in sight. It wasn't a bipartisan wonderland or anything, but the early covid days were an extremely different political environment from now.


Why is this downvoted? Objectively true.


Agree

