
Being a teen in the 90s, sneakernet was sometimes even more fascinating and exciting to me than internet connectivity. It had an air of secrecy to it, little bundles of precious data carried between a select group of people on various, often cumbersome and/or expensive storage media. When everything is connected by a bunch of wires, it’s just too easy.

Sometimes I wish for a return to that feeling of preciousness, instead of incomprehensible amounts of data carelessly shoved down wider and wider tubes.

For my own sneakernet-like uses, I turn to git-annex. See use case “The Nomad” here: https://git-annex.branchable.com/
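
A minimal sketch of that workflow, assuming a clone of the repo lives on the drive and is configured as a remote named "usbdrive" (the names here are mine, not from the walkthrough):

    # on the laptop: check the file into the annex
    git annex add party-video.mp4
    git commit -m 'add video'

    # push the content onto the USB drive
    git annex copy party-video.mp4 --to usbdrive
    git annex sync

    # on the destination machine, with the drive plugged in
    git annex sync
    git annex get party-video.mp4 --from usbdrive

The nice part is that git-annex tracks which repository holds which content, so the drive really does act like a little courier with a manifest.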



I find it funny how sending files securely still isn't a solved problem.

Record your niece blowing out the candles at her birthday party, and the next day her mom asks you for the original (high-quality) video file so she can create a party video... the file is shot on a phone in 4K and uses 300MB of space.

Mail? Nope, too big. Chat platforms? Too big. Cloud upload? You have to share a link, and some services (ahem, Skype) actually open those links when you send them... who knows, they might even save the files there. Also, do you really want personal videos in the cloud? HTTP/(S)FTP servers are too much of a pain to set up for a one-time need. We at least had DCC on IRC; now not even that.

So a USB flash drive it is... and a car ride.


The problem is receiving files, not sending files. You can mail someone an SD card and they will receive it at their house's mailbox. But if their mailbox fills up, the letter will be returned to you. And there is a maximum size for mail that fits in a mailbox. Similarly, people need a digital mailbox you can send files to, but they need to extract their mail from it and empty it out or it will fill up.

We actually made a great solution for this in the 90s: mailservs, inspired by Usenet. We made AOL bots that would take a file, chunk it into 1.5MB sections, and send each as an email attachment. Then a client-side program would download all the attachments and reassemble them. We downloaded entire movies this way. After download, the mailbox was emptied; if it wasn't, the next transfer would fail.

So people just need mailserv programs to send files, and download their mail regularly.
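
The chunk/reassemble half is trivial to recreate; a rough Python sketch (the 1.5MB size matches the old attachment limit, file names are placeholders):

    CHUNK = 1_500_000  # ~1.5MB, the old AOL attachment ceiling

    def split(path, chunk=CHUNK):
        """Write path.000, path.001, ... and return the part names."""
        parts = []
        with open(path, 'rb') as src:
            i = 0
            while True:
                data = src.read(chunk)
                if not data:
                    break
                name = f'{path}.{i:03d}'
                with open(name, 'wb') as dst:
                    dst.write(data)
                parts.append(name)
                i += 1
        return parts

    def join(parts, out):
        """Reassemble the parts, in order, into one file."""
        with open(out, 'wb') as dst:
            for name in sorted(parts):
                with open(name, 'rb') as src:
                    dst.write(src.read())

The hard part back then wasn't this; it was driving the AOL mail client to actually send and fetch the attachments.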


For a while now I've thought we've been lacking an enabling shareable storage and auth abstraction to build cloud apps on. From this standpoint, email is a little bit of shared storage: owner read, public write (with email-specific secure acceptance algorithms); group chat: account read, account write; note app: owner read, owner write.
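
To make those three patterns concrete, a toy Python model (all names invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class Store:
        owner: str
        readers: set = field(default_factory=set)
        writers: set = field(default_factory=set)
        items: list = field(default_factory=list)

        def read(self, who):
            if who != self.owner and who not in self.readers and 'public' not in self.readers:
                raise PermissionError(who)
            return list(self.items)

        def write(self, who, item):
            if who != self.owner and who not in self.writers and 'public' not in self.writers:
                raise PermissionError(who)
            self.items.append(item)

    inbox = Store(owner='alice', writers={'public'})        # email: anyone writes, owner reads
    chat = Store(owner='group', readers={'alice', 'bob'},   # group chat: members read and write
                 writers={'alice', 'bob'})
    notes = Store(owner='alice')                            # note app: owner read, owner write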


I'm sure you can come up with abstractions, but without first doing a whole system design (FRs & NFRs, mapping dependencies, customer requirements, etc) you'll have to scrap those abstractions when they don't match up with how the system ends up needing to operate. So I would recommend not even thinking about technical abstractions until you have a very large multi-layered system visualization.

For example: what is storage? A router stores packets temporarily; is that storage? A DNS resolver caches records; is that storage? A browser stores cookies; is that storage? A chat or email server & client may both store messages in different states for different purposes. What storage should be shared and shouldn't be, in what specific circumstances? Who is allowed to read and write to which thing at which time in which circumstance? All of that will affect your abstraction.


These are fair suggestions for the low-level detail of my comment. I think the core functionality is a web-friendly storage access API that provides for user control and syncing between multiple nodes. It's been done before, but not recently with lighter, more modern protocols. Apps like the classic sample todo or note-taking apps frequently shown on HN should really just be able to connect to user-owned storage: get authorized for a storage allocation that either fronts local storage or a node that's easy to spin up on a cloud-hosted VM.

It's sort of a shame that so much of the new web3 storage work intrinsically links storage to a blockchain, instead of providing blockchain integration as one of many authentication/provisioning models.


> Apps like the classic sample todo or note-taking apps frequently shown on HN should really just be able to connect to user-owned storage: get authorized for a storage allocation that either fronts local storage or a node that's easy to spin up on a cloud-hosted VM.

Let's think of the simplest possible implementation of that.

A todo app wants to write to 'your storage'. What are your options for storage? If it's something like Google Drive, that is a proprietary interface, so right off the bat you are implementing vendor-specific things, and an abstraction is not worth much. You could make some kind of "JavaScript Framework For Storage", i.e. a JS library that has 50 different proprietary implementations behind one interface, but that would only work for JavaScript apps. Which, if you only code in JavaScript, is fine, but if you want to support an application written in another language, now somebody has to maintain those 50 proprietary implementations in that language too. That's just not sustainable.
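
To illustrate the shape of that library (sketched in Python rather than JS; the backend classes are stand-ins, not real adapters):

    from abc import ABC, abstractmethod
    from pathlib import Path

    class Storage(ABC):
        """The neutral interface every app would code against."""
        @abstractmethod
        def put(self, name: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, name: str) -> bytes: ...

    class LocalStorage(Storage):
        """The one easy backend: a directory on disk."""
        def __init__(self, root: str = '.'):
            self.root = Path(root)
        def put(self, name, data):
            (self.root / name).write_bytes(data)
        def get(self, name):
            return (self.root / name).read_bytes()

    class GoogleDriveStorage(Storage):
        """Would wrap the proprietary Drive API; one of ~50 such
        adapters, each re-implemented in every language apps use."""
        def put(self, name, data): raise NotImplementedError
        def get(self, name): raise NotImplementedError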

If instead you want to create one standard web API to access storage through, even if you could define exactly what it should do, you now need to get Google to implement that one standard web API. But why would they? They already have their own Google Drive API, which works perfectly well for their own purposes. You would have to show them some significant business advantage to justify throwing away all the money and code they've sunk into their own API (not to mention what their customers and those customers' apps have sunk into it) to build and adopt this new one. (That assumes you could even get them to agree to whatever standard API you created, as they may want a half dozen extra features that have nothing to do with storage.)

By modeling the whole system, you can quickly see all the weird quirky problems that you will run into trying to make this API and get it adopted. It's not impossible, but it's a much larger problem space than you imagine at first.


This sounds like Tim Berners-Lee's new project. https://solidproject.org/


Maybe? Skimming the protocol, there are aspects of the spec I may not be understanding that superficially seem at odds with a storage container with fairly simple interfaces. Though the security aspects seem to be at about the complexity they have to be. I'm probably missing the full aims of the protocol, though.


In my opinion TBL lost all credibility after the Encrypted Media Extensions debacle. This pays lip service to privacy, but given that he has put corporate interests ahead of individuals before, I would not trust it. I suspect it is some kind of poisoned chalice.


YoU'vE g0t mE nOstAlgiC for AOL. I wrote my own chat file servers and apps for downloading (which amounted to opening a span of emails and adding them to a download manager). The chat servers I made/used forwarded the uploaded emails, first made with a tool like the one you're talking about if one wasn't included, but then you'd typically unpack it yourself after download. AOL has a bad rep with tech people, but it was sure nice having them host all this rather than whatever you might cobble together elsewhere, and it was a fun platform to play on.

Tasks like this, or even punters (an HTML-heading-tag denial of service through instant message, lol), were what got me interested in programming in middle school: no training or education, just hacking around trying to figure things out in a pirated VB 3.0. The community was great, with private chatrooms full of users testing and helping others and trying out tools. People shared libraries to learn from, use, and adapt. Anyone remember the popular, old, and terribly named genocide.bas? I'd like to get my hands on some of these old proggies and code if they're still out there... Or even just the cartoon sheep program which would run around the desktop interacting with window borders.


A passworded zip file would probably be my choice; it's easy enough to open at the other end even for technologically illiterate people, and doesn't take anything fancy to make.
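
A sketch using the third-party pyzipper library, since the stdlib zipfile can read but not create encrypted archives (file name and password are placeholders):

    import pyzipper

    with pyzipper.AESZipFile('party.zip', 'w',
                             compression=pyzipper.ZIP_STORED,  # the video is already compressed
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(b'correct horse battery staple')
        zf.write('party-video.mp4')

One caveat: AES-encrypted zips need something like 7-Zip on the other end; classic `zip -e` ZipCrypto opens everywhere but is cryptographically weak.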


The video is already compressed, a ZIP will not save any significant amount of space in that case.


Pretty sure they're zipping for the encryption, not the compression.

edit: the way I'd solve the rest is just by using a torrent magnet link. The problem with that is that you might have to teach someone how to open a port for UDP traffic, but after they've done it, sending files of any size to them becomes trivial.


If P2P weren't so awful, we could just download from computer to computer. But alas, NAT and IPv4 make that difficult.


I also opt for P2P whenever I can, but it does have drawbacks besides NAT. First, both parties have to be online at the same time. Second, it seems many connections are still not symmetric, so sending a file is limited by the upload speed of the sender, which together with the first problem makes the downloader's experience suffer because of the uploader.

With that said, better infrastructure (IPv6 + better connections) can hopefully make P2P very feasible in the future. Or software that defaults to local connections when possible (so if we can find the device via a private IP, use that connection instead).
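
That local-first fallback is cheap to implement; a Python sketch (addresses and port are placeholders):

    import socket

    def connect(lan_addr, wan_addr, port=9000, timeout=2.0):
        """Try the peer's private address first, then fall back to the public one."""
        for host in (lan_addr, wan_addr):
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue  # this path is unreachable; try the next
        raise ConnectionError('peer unreachable on both addresses')

    # sock = connect('192.168.1.23', '203.0.113.5')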


The difficulty created by NAT and IPv4 is drastically overstated. Even if users' devices generally had public IPs, users still wouldn't want to install server software nor leave their computer on. There is no money to be made pushing solutions that cut out the middlemen, so no advertising continually telling people "Try FooTransfer", and thus no network effects. Instead, one user goes "I can send you this using FooTransfer" and the second user goes "that sounds scary and hard".

And doesn't Dropbox work for the given example? That's the mass-market productized/paid/marketed/surveilled solution.


Is P2P awful? I've been using Resilio Sync (née BitTorrent Sync) for a decade now; if there is a reliable piece of software in my stack, this is it! Sharing files is a matter of passing someone a code. It works so well I really don't think about this as a problem.



There seems to be a never-ending stream of these services, each very similar to the last, but none seem to survive more than a few months, a few years at the outside.

I think GP's point is that this churn indicates that it's not a "solved problem". It's clearly solvable, but whoever runs these services doesn't find a revenue stream. It should probably be provided by your ISP just like email and Usenet once were, but the cultural expectation isn't there. Maybe they get shut down because people use them for bad things? I don't know.


This has been around for years. There's 'churn' because there's almost nothing to hosting this service. None of the data is sent to or through a cloud service; it's all P2P connections. There only needs to be a central server for brokering the initial connection and handshake between clients. Anyone can set up and run a service like this with a few-dollar domain name and a couple of bucks a month of VPS hosting.

I think you're really just wanting some big name or trustworthy player like Dropbox to come out with a similar service. It's not going to happen; like I said, by design wormhole and similar tech doesn't send or store data on any central service. There's no value for some big company in giving people this service; they're literally just burning bandwidth to shuttle invisible (to them) bits. They can't extract anything like advertising targeting or revenue from the service. It will always exist as some weird self-hosted thing.
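
For reference, the magic-wormhole flow looks like this (the code is generated fresh for each transfer; output abbreviated from memory):

    # sender
    $ wormhole send party-video.mp4
    Wormhole code is: 7-crossover-clockwork

    # receiver, on any machine
    $ wormhole receive 7-crossover-clockwork

It attempts a direct P2P connection and falls back to a relay only when NAT gets in the way, which matches the "nothing to host" point above.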


I use a personal NAS which solves that problem nicely. The other option that’s much closer to sneaker net would be use of a direct file transfer protocol like Airdrop. There’s also stuff like file.pizza, but that’s not always reliable IME.


A NAS with DynDNS you have to pay for because IPv4 addresses are scarce, I take it? I remember when I could fire up http.server in Python to host some local file, and it would be accessible across the world for as long as I wanted.
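
For anyone who misses it, the one-liner still works on the LAN even when NAT hides you from the wider world:

    $ python3 -m http.server 8000
    # then fetch http://<your LAN IP>:8000/party-video.mp4 from the other machine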


I point my own custom domain at it, so that part isn’t free. I have my NAS configured to automatically update the DNS record when/if it changes.


The difficulty in sending files isn't a technical problem, it's a political problem. In particular, any efficient file-transfer system will immediately be used by people to make copies of copyrighted material. This will ultimately get them in trouble with the law and get the system shut down. So we can only have file-transfer systems with obnoxious limitations.


Google Drive or Dropbox link shared to their email address isn't secure enough for a video of your niece's birthday party??


What is your take on this: encrypt your large file and send it via some common transfer service?
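
That seems workable; a sketch with the third-party cryptography package (Fernet for brevity; it reads the whole file into memory, which is fine at 300MB, and the key still has to travel out of band):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # share this via a *different* channel than the file
    f = Fernet(key)

    with open('party-video.mp4', 'rb') as src:
        token = f.encrypt(src.read())
    with open('party-video.mp4.enc', 'wb') as dst:
        dst.write(token)

    # recipient:
    # plain = Fernet(key).decrypt(open('party-video.mp4.enc', 'rb').read())

The weak spot is exactly the one the thread keeps circling: now you need a secure way to send the key, which is the original problem in miniature.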


Isn't this the exact use case of all the WebRTC-based file-sharing tools?


Forever memories of high school around 1990, passing around backpacks full of floppies in front of our uncomprehending classmates...

And of course a lot of time spent copying - that second floppy drive made my life so much easier!


It’s always the second-last floppy in a series of dozens that is corrupted.


I can almost hear the sound of the FDD trying - and failing - to read that one bad sector.


Rar and par files, though!

Shame it arrived so late in the game, because it was really an ideal match for floppies. Five disks of data, two... three... four? disks of parity, use 'em only if you need 'em.
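
For the curious, a par2 session over a split archive looks roughly like this (file names and the 40% redundancy are arbitrary):

    $ par2 create -r40 video.par2 video.part*.rar
    # after losing a disk or two:
    $ par2 verify video.par2
    $ par2 repair video.par2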


High speed dubbing for the win


Some good times as a student in a big university and student housing system around 2005 - file sharing was banned with the outside world, but inside the whole student network, p2p networks like Direct Connect were very much alive. Some of the best file sharing ever, so many good music collections to browse through.


I feel you. In the 90s I came into possession of a strange floppy disk with this on the label:

    Dear Friend,

    Please tell my story.

    Sincerely,
    Dave Koresh
After doing a thorough virus scan, I examined the contents of the disk. It contained a bunch of stories and essays by various authors in text-file format, nothing dealing directly with David Koresh or the Branch Davidians.

But the FBI raid of the Branch Davidian compound in 1993 caused... unrest in certain pockets of the American politisphere, primarily on the right, but groups like the ACLU also got involved, and Janet Reno was mocked on SNL for her role in the incident. Nobody liked the Branch Davidians; they were a weird-ass cult preparing for an apocalypse that never came except maybe in a parrot sense[0]. But there were concerns that the FBI (and potentially the ATF before) acted too aggressively, overstepping legitimate law-enforcement bounds and violating the Davidians' rights, and that caused a lot of political introspection: on our status as a freedom-respecting republic, the competence of our federal law-enforcement apparatus, and the threshold beyond which the government legitimately could or should take action against weird-ass prepper cults. So there was a lot of philosophy and politics stuff in there, often in the form of USENET postings and other 90s internet copypasta, that dealt tangentially with those issues.

It felt... kind of weird and awesome to have and to read that stuff. Cyberpunk. Here were thoughts that people felt not quite safe transmitting or discussing openly, so they were copied and distributed as floppy-disk samizdat. It was a mysterious object, alarming on sight because it suggested that Koresh might still be alive somewhere (and the shortening of "David" to "Dave" made him seem... humbled?), whose contents held even deeper and more thought-provoking mysteries.

[0] "I heard tell once of a Jefferson City lawyer who had a parrot that would wake him each morning crying out 'today's the day the world shall end as scripture has foretold'. And one day, the lawyer shot him for the sake of peace and quiet I presume, thus fulfilling, for the bird at least, his prophecy." -- Daniel Day Lewis as Lincoln in Lincoln (2012).


Was transferring a 600KB (KiloByte) file to a friend. After 2 hours it failed. Tried again. Failed. Realized both times it ran out of hard drive space. I was 52 Bytes short.

I proceeded to walk a floppy over to his house 2 miles away.


git-annex by default will consider your data dead when it can't be checked in real time, but you can override that and trust that the data will survive offline forever.

I'd love for it to have some kind of expiry on it after which I need to fsck it.


> I'd love for it to have some kind of expiry on it after which I need to fsck it.

There is "git annex expire" (https://git-annex.branchable.com/git-annex-expire/).



