
I’d like to point out that this still has deficiencies compared to Vimeo or YouTube, who transcode the source file into multiple bitrates and adaptively serve an appropriate quality based on the viewer’s available bandwidth and screen size.

This manifests as buffering on slower connections, where YouTube or Vimeo would just downgrade the user to a lower bitrate transparently (or nearly transparently).

An end user today will expect that behavior and have very little tolerance for buffering if their connection is unable to smoothly play the one bitrate the creator published (no matter how fast the CDN is).

Edit: I’m aware that it’s possible to do adaptive bitrate streaming outside of Vimeo or YouTube (as several commenters have explained below); however, this isn’t what TFA describes, and I think it’s important to note this deficiency in the author’s described approach to serving their own video.




If you want this functionality you can encode the video into chunks at multiple quality levels using video2hls[0] and host them anywhere you like[1].

[0]: https://github.com/vincentbernat/video2hls

[1]: https://ryanparman.com/posts/2018/serving-bandwidth-friendly...


I used MPEG-DASH to build a netflix-party clone when the pandemic started, mostly following this tutorial: https://www.isrv.pw/html5-live-streaming-with-mpeg-dash. It could easily be adapted to on-demand video streaming.

My solution was pretty janky, but it's an interesting problem space to explore.


You can also use ffmpeg to do this, but figuring out the correct command line options is not easy.
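
For reference, here's a rough sketch of a multi-rendition HLS encode with ffmpeg (filenames, resolutions and bitrates are placeholders; exact flags can vary between ffmpeg versions):

    # two video renditions (720p/480p) plus audio, split into 6-second
    # segments and tied together by a generated master playlist
    ffmpeg -i input.mp4 \
      -filter_complex "[0:v]split=2[v1][v2];[v1]scale=-2:720[v720];[v2]scale=-2:480[v480]" \
      -map "[v720]" -c:v:0 libx264 -b:v:0 3000k \
      -map "[v480]" -c:v:1 libx264 -b:v:1 1200k \
      -map 0:a -c:a:0 aac -b:a:0 128k \
      -map 0:a -c:a:1 aac -b:a:1 96k \
      -f hls -hls_time 6 -hls_playlist_type vod \
      -hls_segment_filename "stream_%v_%03d.ts" \
      -master_pl_name master.m3u8 \
      -var_stream_map "v:0,a:0 v:1,a:1" stream_%v.m3u8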


video2hls looks awesome, thanks for sharing that! I usually use AWS MediaConvert[0] to do the conversion for me, but nice to see an open-source tool to easily do the same.

[0]: https://blog.metamorphium.com/2020/07/20/diy-video-streaming...


HLS is half of the answer; that will only work on half of all devices. You need HLS + MPEG-DASH.


For browsers that do not support HLS you can use a JavaScript HLS client implementation[0] using Media Source Extensions[1].

[0]: https://github.com/video-dev/hls.js/

[1]: https://developer.mozilla.org/en-US/docs/Web/API/Media_Sourc...
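
A minimal sketch of wiring that up (the playlist URL is a placeholder; isSupported, loadSource and attachMedia are the hls.js entry points):

    const video = document.querySelector('video');
    const src = '/streams/master.m3u8';  // placeholder playlist URL
    if (window.Hls && Hls.isSupported()) {
      // browsers without native HLS but with Media Source Extensions
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video);
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      // Safari and iOS play HLS natively
      video.src = src;
    }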


Isn't this super battery intensive?


I can't think of a device that would support MPEG-DASH that wouldn't support HLS. What are some? HLS has been around for a decade.

I think you're thinking of certain DRM mechanisms. HLS with FairPlay wouldn't work on an Android device running Chrome, just as iOS Safari doesn't support Widevine. But for most of us, we don't need DRM.


What? That's the opposite of the truth.

Firefox, Chrome, Edge, and IE do not support HLS.

https://caniuse.com/#feat=http-live-streaming


Sorry, I didn't think we were talking about native support. I'm just so used to shipping something like videojs.

In any event, nothing natively supports MPEG-DASH. https://caniuse.com/#feat=mpeg-dash Though there is a disclaimer that "DASH can be used with a JavaScript library in browsers that doesn't support it natively as long as they support Media Source Extensions", the same is true for HLS, without the requirement of MSE.


What you're referring to is HLS (HTTP Live Streaming), which is HTTP-based adaptive bitrate streaming.

BunnyCDN, which the author is recommending, does support the HLS protocol. What isn't stated is whether there are any specific steps needed to achieve that.

You're right though, the author didn't factor this in as a requirement for their needs.


Just want to point out that all CDNs support HLS for on-demand assets. It's just static HTTP files. With HLS you encode your file into multiple different bitrate files, which are then split into small 5-10 second chunks and wrapped in a text playlist for consumption. The protocol is still just static assets over HTTP though.
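
To make that concrete, the master playlist is just a few lines of text pointing at per-bitrate playlists (names and bandwidth numbers below are made up), and each of those in turn lists the .ts chunks:

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=3300000,RESOLUTION=1280x720
    720p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=1300000,RESOLUTION=854x480
    480p/index.m3u8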


Also of note: HLS supports byte range addressing, so you can create the various stream files as single .ts files rather than a collection of segments per stream. A client can use the Range HTTP header to select the window of bytes it wants for the stream/bandwidth slot it wants. This mode is supported from something like iOS 5 and up, and ffmpeg has flags to produce such streams.
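
In ffmpeg that's the single_file flag; a sketch (input/output names are placeholders, other options as in a normal HLS encode):

    # writes one .ts per rendition; the playlist then addresses segments
    # with #EXT-X-BYTERANGE offsets instead of separate chunk files
    ffmpeg -i input.mp4 -c:v libx264 -c:a aac \
      -f hls -hls_time 6 -hls_playlist_type vod \
      -hls_flags single_file stream.m3u8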


If you want good cache hit ratios, you probably want to use segments as files rather than segments as byte ranges. Off-the-shelf HTTP caching software tends to have file size limits, may not cache large files in RAM or even on disk if they're large enough, and may do unexpected things with range requests, like requesting the whole file and then serving the range to the client. CDNs may be running totally custom HTTP stacks, off-the-shelf stacks with tuned configs, or somewhere in between, but if you intend to use one, it makes sense to build your content so it'll be easily cached.


All CDNs use byte-range caching too.


Everyone is replying to you mentioning HLS, which (as your edit mentions) isn't described in the article.

Which is... a bit shocking. From the title of the article, I assumed the content would be related to hosting video. HLS was the first thing that came to mind and I assumed it'd be mentioned up top. I clicked to see if it listed any HTTP-related considerations I wasn't aware of in addition to HLS.

Turns out it's just an advertisement for a (very non-video-specific) CDN service, with some lines tacked on at the end telling you to use Handbrake (oddly, the CLI, rather than ffmpeg???)


It is a tutorial for stringing together a bunch of tools to transcode a video to multiple formats, upload it to S3, and pull that to a cheap CDN (that does get complimented heavily).

And it spawned a great discussion about self-hosting.


Handbrake CLI is great and much more user-friendly than ffmpeg.


Handbrake crashed on me twice, while ffmpeg did several tasks without even locking up.


You can use HLS with the described CDN, since it's just http.


I wonder if that's really what users expect? I hate auto-downgrading; I always set the quality to the highest option and I'm happy to do other work while the video is downloading and buffering.


If I'm watching a movie I probably want the quality to remain reasonably high; if I'm watching a YouTube video of people talking or something where visuals don't really matter, I'll take 480p over buffering.

YouTube is pretty decent for that: you can either let it figure out what format to use, or force the resolution. That's a good compromise IMO.


Video quality is more important for coding instruction than movies, imo. If you can't read the code or it gets blurry, the feed becomes nearly worthless.

I agree that auto-bandwidth adjustment is the wrong answer here and would not be appreciated by the author's paying customers.


I've pretty much switched over to youtube-dl for video viewing, as I find it to be superior to the web interface in most ways. I get the highest-quality file every time, instead of what their AI thinks I want. I can easily request audio-only or video-only. Download once, watch over and over without using more bandwidth each view. No suggested videos. No recommendations. No comments.
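
For anyone curious, the relevant bits are youtube-dl's format selectors (the URL is a placeholder):

    # best video+audio merged into one file
    youtube-dl -f "bestvideo+bestaudio/best" "$URL"
    # audio only
    youtube-dl -f bestaudio "$URL"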


It can be. Especially for users on mobile connections, which can have quite low bandwidth caps.


Adaptive delivery could easily be added by converting the files with a video encoding service. Transloadit's community plan is free and should have enough traffic included for these use cases. Full disclosure: I'm a Transloadit founder.


I hope you can enable the reverse some day, i.e. HLS/DASH to single-file containers like MP4. It seems that you can't pull segments from remote URLs in an M3U file for the time being.


Convert HLS/DASH to a single file: what's the use case? Most of the time you'd still have the original after creating the segments from it, you'd think?


You can input an HLS stream and output whatever you want as a single file with ffmpeg or even vlc.
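
Something like this works for that (the URL is a placeholder; -c copy remuxes without re-encoding, drop it if you need to transcode):

    ffmpeg -i "https://example.com/vod/master.m3u8" -c copy output.mp4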


I'd also like to point out that if you're serving any video with text in it, i.e. programming, this is a worthless feature to worry about.

I've turned off too many 720p-only talks because the text was unreadable.


> I’d like to point out that this still has deficiencies compared to Vimeo or YouTube, who transcode the source file into multiple bitrates

This may be generally true, but for any screencast (including the author's use cases), automatically downgrading to a lower quality video based on connection speed is a bug, not a feature.


Would Cloudflare Stream solve this problem though?


Looks like it would!


>This manifests as buffering on slower connections, where YouTube or Vimeo would just downgrade the user to a lower bitrate transparently (or nearly transparently).

As a user, I _never_ want to watch shit quality content and _MUCH_ prefer buffering to it.


I admit I too wish I could buffer video.

Back in the day, you could open a tab, hit pause, go back to whatever you were reading, and by the time you finished that article and maybe grabbed a cup of coffee, the video was fully loaded and you could watch it at your leisure.

These days, all the websites are so smart that they realise you are not on the tab and load nothing, so even though the page has been open for 20 minutes, you still have the lovely privilege of sitting there staring at the throbber every so often, thanks to your work insisting on full VPN tunnelling.


> thanks to your work insisting on full VPN tunnelling

Do work in a VM on your main machine. That way you can have full VPN tunnelling in the VM while the host (and other VMs, if you use them) uses your normal direct connectivity.

Of course, by taking technical measures to circumvent policy you might be breaking rules in a way that invites discipline, especially if you're working in a highly regulated environment where such measures are essentially dictated by your work's clients, so take care.


Or just get an iPad, Chromebook or the like for your media consumption.


Looks like we can assume a few things about the implied audience.

Highly (or highishly) technical people ready to pay for video courses. One can infer from that that they have a good enough connection.

It's not like everybody has to be Facebook and have their video content available for all devices and bitrates.


This can be easily solved by presenting an option set like "slow/fast/HD" to the user, then streaming accordingly. Open source has always been about lots of choices anyway!
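
A rough sketch of what that could look like with a plain <video> tag and separately encoded files (filenames are made up):

    <select id="quality">
      <option value="talk-480p.mp4">slow</option>
      <option value="talk-720p.mp4">fast</option>
      <option value="talk-1080p.mp4" selected>HD</option>
    </select>
    <video id="player" controls src="talk-1080p.mp4"></video>
    <script>
      const player = document.getElementById('player');
      document.getElementById('quality').addEventListener('change', (e) => {
        const pos = player.currentTime;   // keep the playback position
        player.src = e.target.value;      // swap to the chosen rendition
        player.addEventListener('loadedmetadata', () => {
          player.currentTime = pos;
          player.play();
        }, { once: true });
      });
    </script>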


Just my anecdote, but I find it annoying when a non-live video service uses adaptive bitrate. I prefer a waiting spinner, or degrading the quality manually, over automatically downgrading to awful quality.


> An end user today will expect that behavior

The average end user, maybe. Not all end users - for most videos I would rather wait than get a lower quality.


Maybe I don’t understand this, but unless something is encrypted, why does the downsampling have to happen at the source?

Alternatively, is there a way to encrypt something, such that you can apply a transform to the encrypted data to downsample it without knowing the contents of the file?



