If you're reading this, FFmpeg developers, please accept my thanks for your work. You have become a "Category Killer"[1] in command-line video tomfoolery.
What is the current relationship between Libav and FFmpeg? I could never figure out which one I should support when there was a falling out, and things might even have changed since then.
When I say libav*, I and most people mean libavformat, libavcodec, libswscale, etc.; the C libraries that form the basis of the command line tool and are widely used elsewhere.
libav (no wildcard) is a fork of FFmpeg that is broadly focused on reducing bloat and cleaning up the API. As such, libav tends to add features more slowly, while FFmpeg generally follows all of libav’s new features and bug fixes as well as its own [1]. Debian was on libav for a while but went back to FFmpeg in 2015 [2].
> When I say libav*, I and most people mean libavformat, libavcodec, libswscale, etc.; the C libraries that form the basis of the command line tool and are widely used elsewhere.
Oh, so were those things called "libav" even before the fork, and were they perhaps the origin of the fork's name?
This is a release and the Changelog is reporting changes relative to the last release (3.4 series). HEVC NVDEC was added to the tree in Nov '17 and 3.4 was branched off in Oct.
I was trying to do something the other day and couldn’t figure it out, if anyone has any ideas.
The end goal is to take a set of video files, with time stamps for each, and splice them into one file while removing the parts I don’t want.
That is straightforward enough, as long as you’re willing to re-encode the whole file. Otherwise, it seems like ffmpeg is restricted to making cuts at key frames.
It’s rare for a key frame to be placed at the exact spot where I would want to make a cut, so the section of the video around the cut would need to be re-encoded. Ideally that would be the only part that is re-encoded - everything else would be a straight stream copy from key frame to key frame.
I believe this is called ‘smart rendering’, and the pages I could find in the past said ffmpeg isn’t really suited for it, or it’s very difficult.
Does anyone know if that has changed recently, or has anyone found a way to do it?
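For context, here's as close as I've gotten by combining the two non-smart approaches (paths and timestamps are placeholders):

    # frame-accurate cut, but the whole piece gets re-encoded:
    ffmpeg -i in.mp4 -ss 00:01:23.0 -to 00:05:00.0 -c:v libx264 -c:a aac cut1.mp4

    # stream copy, no re-encode, but the start snaps back to the previous key frame:
    ffmpeg -ss 00:05:00.0 -i in.mp4 -t 00:02:00.0 -c copy cut2.mp4

    # splice the pieces together (list.txt: one "file 'cutN.mp4'" line per piece):
    ffmpeg -f concat -safe 0 -i list.txt -c copy spliced.mp4

    # mixing re-encoded and stream-copied pieces like this is exactly where it
    # falls apart, since the concat demuxer wants matching stream parameters.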
Depending on the container format, you may not need to re-encode anything. .mp4 supports "edit lists". You can create a .mp4 file that starts at the latest key frame <= the starting timestamp of interest, runs through the ending timestamp of interest, and has an edit list that tells the player to skip the unwanted prefix. You can have arbitrarily many of these in one file. I do this as part of a larger program (a security camera NVR), although I write the .mp4 directly rather than instructing ffmpeg to do so.
Afraid I don't know how to do what you want with the ffmpeg commandline tool, though, either by partial re-encoding or by edit lists.
Yes, this is possible, depending on the codec and container. I have done similar operations with h264+mp4.
It's good to be able to edit video without losing quality.
Are you sure you need sub-keyframe precision? In h264+aac+mp4, for example, if it's not keyframe aligned, the result is usually a stalled video frame for a split second, but since the audio continues smoothly, it's not that noticeable.
If you know the exact codec settings that were used to encode the video, you can create new pieces to be fit losslessly together. Otherwise, it is more difficult.
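A minimal sketch of that approach, assuming h264+mp4 and that you can reproduce the original encoder settings (the timestamps and x264 options here are placeholders):

    # re-encode only the ragged head, from the desired cut point up to the next
    # key frame, matching the source's encoding parameters as closely as you can:
    ffmpeg -i in.mp4 -ss 00:01:23.0 -to 00:01:25.5 -c:v libx264 -c:a copy head.mp4

    # stream-copy the key-frame-aligned remainder:
    ffmpeg -ss 00:01:25.5 -i in.mp4 -c copy body.mp4

    # join them (list.txt: file 'head.mp4' / file 'body.mp4'); this only plays
    # back cleanly if the stream parameters of the pieces really do match:
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4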
Contact me on twitter at @downpoured and I can describe more.
Does anyone follow https://libav.org development? I was under the impression they merged back with FFmpeg when Michael Niedermayer resigned as leader. Now I see they still make their own releases. So the merge ultimately did not happen?
I hope Ubuntu gets better at updating FFmpeg by moving it out of the "universe" category of unsupported packages. Or, second-best option, stops shipping it.
Just this week there was an update showing that they had a nearly year-long window of vulnerability due to an out-of-date version [1].
A media-format Christmas tree like this really has a lot of vulnerabilities and exposes the user to them fairly directly through media files.
FFmpeg has been an amazing tool. I don't know if this is helpful, but using statically linked builds has been a big time saver for me. Patent issues can make it tough to get a feature-complete install. The ones below have worked amazingly well.
Depends on what you classify as relevant targets. All the big hardware companies have been on board since the beginning and probably already have prototypes of fixed-function decoders. Chances are we'll have consumer hardware with such decoders sometime next year.
If you actually go on the AV1 spec issue tracker, there are issues (both closed and open) from people at Nvidia, ARM's hardware team, Google and Netflix.
Unfortunately it seems that Roman Arutyunyan has not been able (or willing) to keep up development of nginx-rtmp-module. Thankfully, Mr. Sergey Dryabzhinsky has a fork [0] that has added a lot of nice new features (EXT-X-PROGRAM-DATE-TIME!) and some bug fixes.
Sounds great. Is there any benefit for Linux computers that don't support aptX? Also, I am wondering how it is possible to include the aptX codec, since its license terms seem to conflict with the GPL.
Thank you, ffmpeg contributors. I want to let you know that the famous Xzibit entrances video (https://youtu.be/2dkN0YIBjEM) was made in no small part thanks to ffmpeg.
That issue is about supporting Vulkan on macOS via MoltenVK, but mpv already supports Vulkan on Windows and Linux.
The problem that I think the parent post is referring to is that mpv 0.28.0, which introduced Vulkan support, also introduced a hard dependency on FFmpeg APIs that weren't in a release until now (4.0). Linux distros prefer to use stable versions of packages, so most of them have been packaging FFmpeg 3.x and mpv 0.27.0. They can only upgrade to mpv 0.28.0 (with Vulkan support) now that FFmpeg 4.0 has been released.
Found the reason, sigh: "After thorough deliberation, we're announcing that we're about to drop the ffserver program from the project starting with the next release. ffserver has been a problematic program to maintain due to its use of internal APIs, which complicated the recent cleanups to the libavformat library, and block further cleanups and improvements which are desired by API users and will be easier to maintain. Furthermore the program has been hard for users to deploy and run due to reliability issues, lack of knowledgable people to help and confusing configuration file syntax. Current users and members of the community are invited to write a replacement program to fill the same niche that ffserver did using the new APIs and to contact us so we may point users to test and contribute to its development."
Thank you for pointing me to gource, but I wanted to understand the general approach - which would be better, building it via libffmpeg or OpenGL?
I'm keen on building something and extending it to other use cases, like embedding photographs, milestones, and other major events involving our business unit.
Generates a histogram of pixel values in a frame and then:

- in normal mode, calculates a (weighted) measure of the variance in pixel values;

- in diff mode, calculates a (weighted) measure of the variance in the differences of pixel counts between two neighbouring values (if 800 pixels have value 112 and 1400 pixels have value 113, then the (abs) difference is 600).
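If this is describing the new entropy filter from this release, a quick way to see the per-frame numbers might be something like the following (the filter name and options are my assumption from the docs):

    # print each frame's measurement as frame metadata, discarding the decoded video
    ffmpeg -i in.mp4 -vf entropy=mode=diff,metadata=mode=print -f null -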
Has anyone ever written an ffmpeg script that could break a video apart into interesting cuts?
Someone posted a brilliant script in one of these ffmpeg posts but I can't find it for the life of me. I used it to create "trailers" of my media collection.
There are actually several scripts: burn the subtitles into the movie as hard-subs, extend the subtitles by 1 second, make clips of each subtitle, make headings, and combine the clips with the headings.
These are my rough notes I made at the time (you could skip the Pingtype steps if you're not trying to make bilingual language learning material).
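The core invocations look roughly like this (file names and timestamps are placeholders; the subtitles filter needs a build with libass):

    # burn the subtitles into the movie as hard-subs:
    ffmpeg -i movie.mp4 -vf subtitles=movie.srt -c:a copy hardsub.mp4

    # cut one clip per (extended) subtitle cue, using times taken from the .srt:
    ffmpeg -ss 00:01:23.4 -i hardsub.mp4 -t 00:00:04.5 -c:v libx264 -c:a aac clip001.mp4

    # combine the clips and heading cards with the concat demuxer
    # (clips.txt: one "file 'clipNNN.mp4'" line per piece):
    ffmpeg -f concat -safe 0 -i clips.txt -c copy combined.mp4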
Here's my attempt at building something for language learning since my listening skills trail so far behind my reading skills: https://www.danneu.com/slow-spanish/
Unfortunately it's really hard to generate the source material (timestamping a transcript).
So my idea was to upload some slow-speaking audio to YouTube and let it auto-generate its .srt subtitle files. The subtitles don't come out perfectly, but it's the timestamp data I'm after, since the goal is a UI that makes it easy to replay and scrub around spoken audio.
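If it helps, one way to pull those auto-generated captions back down is youtube-dl (which itself calls ffmpeg for the subtitle conversion):

    # fetch YouTube's auto-generated Spanish captions without downloading the video;
    # --convert-subs invokes ffmpeg to turn the .vtt into .srt:
    youtube-dl --write-auto-sub --sub-lang es --convert-subs srt --skip-download <video-url>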
Using YouTube to generate the timestamps is a really good idea!
I'm manually recording timestamps while I read/listen to the Bible, verse by verse. Every time I click pause in Pingtype's Media Viewer, it logs the time. It's painstaking, but I'm trying to study each verse while I read anyway, so it's good to let me pause regularly.
There's a lot of LRC data for songs that are used in KTV/Karaoke. You just need to find a good data source for Spanish. In my opinion, listening to music and singing along in church helped my Chinese much more than textbooks. I still lack confidence speaking, but my listening improved a lot when my regular playlist became majority-Chinese (I listen to iTunes all day).
[1] http://www.catb.org/esr/writings/homesteading/cathedral-baza...