> I can get 1 Gb down but only 50 Mb upload. Certain tasks (like uploading a Docker image) I can't do at all from my personal computer.
As someone who used to work with LLMs, I feel this pain. It would take days for me to upload models. Other community members rent GPU servers to do their training on, just so their data is already in the cloud, but that's not really a sustainable solution for me since I like tinkering at home.
I have around the same speeds, btw: 1 Gb down and barely 40 Mb up. A factor of 25!
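For a sense of scale, here's a quick back-of-the-envelope on those numbers. The 140 GB model size below is a made-up example (roughly a 70B-parameter model in fp16), not something from the thread:

```python
# Back-of-the-envelope: transfer times on an asymmetric residential link.
# Link speeds are the ones quoted above; the model size is a made-up example.

def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to move size_gb gigabytes over a link of link_mbps megabits per second."""
    return size_gb * 8e3 / link_mbps / 3600

down_mbps, up_mbps = 1000, 40
model_gb = 140

print(f"asymmetry: {down_mbps / up_mbps:.0f}x")                    # 25x
print(f"download:  {transfer_hours(model_gb, down_mbps):.1f} h")   # ~0.3 h
print(f"upload:    {transfer_hours(model_gb, up_mbps):.1f} h")     # ~7.8 h per copy
```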
I feel your pain. I haven't been in the ML world directly for a few years now, but I've gone through the same exercise multiple times.
The worst part is that block compression doesn't actually help unless it does a good enough job at both compression AND decompression. My use case required deploying the models immediately across a few nodes in a live environment at customer sites. Cloud wasn't an option for us, and fiber was often unavailable as well.
The fastest transport protocol was someone's car and a workday of wages.
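To make the compression point concrete, here's a rough break-even sketch: compressing only pays off when compress + send(compressed) + decompress beats sending the raw bytes. The ratio and throughput figures below are assumptions for illustration, not measurements:

```python
# Break-even for pre-transfer compression: it only wins when
#   compress + send(compressed) + decompress  <  send(raw).
# Ratio and throughput figures are illustrative assumptions, not measurements.

def send_s(size_gb: float, link_mbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link_mbps megabit/s link."""
    return size_gb * 8e3 / link_mbps

def compressed_path_s(size_gb: float, link_mbps: float, ratio: float,
                      comp_gbps: float, decomp_gbps: float) -> float:
    """Compress on the sender, send the smaller payload, decompress on the target."""
    return (size_gb / comp_gbps                   # compress before sending
            + send_s(size_gb / ratio, link_mbps)  # ship the compressed payload
            + size_gb / decomp_gbps)              # decompress before the node can serve

size_gb, link_mbps = 140, 40   # assumed model size and the uplink from upthread
raw = send_s(size_gb, link_mbps)
# Model weights are mostly high-entropy floats, so the ratio tends to be modest (~1.1 assumed).
packed = compressed_path_s(size_gb, link_mbps, ratio=1.1, comp_gbps=0.5, decomp_gbps=1.5)

print(f"raw transfer:    {raw / 3600:.1f} h")      # ~7.8 h
print(f"compressed path: {packed / 3600:.1f} h")   # ~7.2 h -- barely worth it
```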
> The fastest transport protocol was someone's car and a workday of wages.
This is actually the entire premise of AWS Snowball: send someone a bunch of storage space, have them copy their data to that storage, then just ship the storage back with the data on it. It can be several orders of magnitude faster and easier than an internet transfer.
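The "station wagon full of tapes" math behind that is easy to sketch; the device capacity and courier time below are assumptions for illustration, not actual Snowball specs:

```python
# Effective bandwidth of shipping storage: capacity divided by transit time.
# Capacity and courier time are assumptions for illustration, not Snowball specs.

def shipped_gbps(capacity_tb: float, transit_hours: float) -> float:
    """Effective throughput (gigabits/s) of physically shipping a storage device."""
    return capacity_tb * 8e3 / (transit_hours * 3600)

device = shipped_gbps(capacity_tb=80, transit_hours=48)   # 80 TB, two-day courier (assumed)
uplink = 0.04                                             # the 40 Mb/s uplink from upthread, in Gb/s

print(f"shipping:  ~{device:.1f} Gb/s effective")   # ~3.7 Gb/s
print(f"uploading:  {uplink} Gb/s")
print(f"speedup:   ~{device / uplink:.0f}x")        # ~93x, and it scales with capacity
```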
It would be totally cyberpunk to have a data cafe where you bring your hard drive to upload to the cloud and pay by the terabyte per second. Have all day? Cheap. Need it done in 30 minutes? Pay up.