You're not the right person to ask about this, but if Drew is around I would love to hear high-level details about how this setup works and what the average monthly cost is.
I can see that it's open source[0], and I'm very tempted to copy it. I'm already in the midst of migrating all of my video hosting to PeerTube, but I don't have a solution I'm confident in for livestreaming other than Twitch -- especially because in the rare instances where I do stream coding sessions, they can run 5 or 6 hours, at which point archiving and storing that video starts to look a lot more costly.
I don't use it very often. I just threw it up on a Linode with minimal effort so I could have a working live streaming setup. You'll note from the readme:
>This is the website for my self-hosted livestreaming platform (aka bag of hacks dumped into a server).
PeerTube is nice in theory but in practice it's been really really unreliable for me.
Archival/storage of video shouldn't be that costly.
With 5400rpm drives (better for archival than more or less any other type of storage media, including faster hard drives), the going rate looks to be about one US cent per gigabyte, and about two cents for 7200rpm drives from the manufacturers that seem to produce the most reliable consumer drives on the market.
A setup that could survive through a reasonable amount of drive failure, then, seems to be relatively inexpensive, so long as you're not trying to archive your video In The Cloud®.*
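Back of the envelope, assuming a ~6 Mbit/s archive bitrate for a 6-hour stream and the one-to-two cents per gigabyte drive prices above (all assumptions; plug in your own numbers):

    # Rough cost of archiving one long stream on spinning disks.
    # Assumed: 6 Mbit/s archive bitrate, 6-hour stream, $0.01-$0.02 per GB of disk.
    bitrate_mbps = 6
    hours = 6
    gigabytes = bitrate_mbps / 8 * 3600 * hours / 1000   # ~16 GB
    print(f"{gigabytes:.0f} GB per stream")
    print(f"${gigabytes * 0.01:.2f}-${gigabytes * 0.02:.2f} of raw disk, before redundancy")

Even with threefold replication that's well under a dollar per 6-hour stream.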
Delivery being costly is a myth propagated by Big Cloud®. Any dollar-store VPS that isn't DO will have more than enough bandwidth for streaming video all day, every day.
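For a sense of scale, a quick sketch assuming a VPS with a 1 Gbit/s port, viewers pulling a 5 Mbit/s rendition, and ~70% of the port being usable in practice (all assumed numbers, not any particular provider's offer):

    # How many concurrent viewers one small VPS port can feed, in theory.
    port_mbps = 1000          # assumed port speed
    per_viewer_mbps = 5       # assumed stream bitrate
    usable_fraction = 0.7     # assumed headroom for protocol overhead and bursts
    print(int(port_mbps * usable_fraction / per_viewer_mbps), "concurrent viewers")  # ~140

A small independent creator's audience fits comfortably inside that.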
That's irrelevant, though, given the person's question was about storage and archival.
It all depends on the use case and context. Hosting a single video with few concurrent viewers is cheaper to do on a VPS. Hosting videos with short response times in every region, with high resiliency, etc., is likely cheaper to do on a CDN. It's not a myth; it's "general advice may not work for you".
Not sure what you mean by false advertisement. An Australia-Netherlands (a common European PoP) connection is often >300ms from a home connection. Home in Australia to a Sydney PoP is likely <10ms. It makes a massive difference with many small resources, or with restarted transfers. That's just physics at some point.
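An illustration of why round trips dominate with many small resources (ignoring handshakes and parallel connections, which shift the numbers but not the ratio):

    # Lower bound on load time when small assets are fetched one after another:
    # each request costs at least one round trip, no matter how much bandwidth you have.
    resources = 30  # assumed number of small assets
    for rtt_ms, path in [(300, "Australia -> Netherlands PoP"), (10, "Australia -> Sydney PoP")]:
        print(path, resources * rtt_ms / 1000, "s minimum")
    # 9.0 s vs 0.3 s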
But does it really? I can see why a large company wants to squeeze milliseconds out of asset delivery, but as a watcher of a small independent creator I would have no problem waiting a second for the video to start playing.
Latency directly impacts bandwidth, which impacts quality, since all current-gen user-facing live streaming protocols that matter (HLS, DASH) are layered on top of HTTP (on top of TCP), and that's already the best trade-off for end-user delivery today.
For VOD it's less of an issue since you can just keep a larger buffer, but with live that buffer trades off against staying close to the live edge, or against dropping to a poorer quality. It works OK for some cases and badly for others (like sports, or when letters on the screen become illegible due to compression artifacts).
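The mechanics, roughly: a single TCP connection can move at most one window of data per round trip, so throughput is bounded by window/RTT, and by the well-known Mathis approximation MSS/RTT * 1.22/sqrt(loss) once there's packet loss. A sketch with illustrative numbers (the window size and loss rate are assumptions, not measurements):

    from math import sqrt

    # Per-connection TCP throughput ceilings: window-limited and loss-limited (Mathis et al.).
    def window_limited_mbps(window_bytes, rtt_s):
        return window_bytes * 8 / rtt_s / 1e6

    def mathis_mbps(mss_bytes, rtt_s, loss):
        return mss_bytes * 8 * 1.22 / (rtt_s * sqrt(loss)) / 1e6

    for rtt_ms in (10, 100, 300):   # nearby PoP vs. intercontinental paths
        rtt = rtt_ms / 1000
        print(f"{rtt_ms:3d} ms: "
              f"{window_limited_mbps(256 * 1024, rtt):6.1f} Mbit/s (256 KiB window), "
              f"{mathis_mbps(1460, rtt, 1e-4):6.1f} Mbit/s (0.01% loss)")

At 300 ms with even a little loss, one connection struggles to hold a high-bitrate rendition, which is exactly the quality/live-edge trade-off above.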
Building your own CDN out of el cheapo VPSs is theoretically viable. The beauty of HLS and DASH is that they're 100% plain old HTTP, so just drop in Varnish, add GeoDNS on Route 53, and off you go. Actually, I'd love to have the time to try that :)
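To make the "plain old HTTP" point concrete: an HLS stream is just a playlist plus segment files on disk, so any web server or cache can deliver it. A minimal sketch, assuming the packager (ffmpeg or whatever) is already writing segments into an hls/ directory (the directory name and port are made up for illustration):

    # Serve an HLS output directory (.m3u8 playlist + segments) as static files.
    # Any cache -- Varnish, nginx, a CDN edge -- can sit in front of this unchanged.
    # A real setup would also want correct m3u8/ts MIME types and CORS headers.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="hls")
    HTTPServer(("", 8080), handler).serve_forever()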
Here, the round-trip latency is ~14ms within the country (e.g. from here to the capital city), and 40ms to the closest AWS or GCP datacenters (both are in Frankfurt).