At Jumpshare, we use FFmpeg for screen recording. We noticed that the previous version of FFmpeg was not DPI aware, so we went ahead and fixed it. Now FFmpeg shows the correct mouse location on HiDPI screens. Unfortunately, it seems FFmpeg 3.0 does not ship with this fix. Nevertheless, we're happy to contribute to this open source project.
This is the first time I've heard of your service. It seems like a copy of Dropbox (the interface, at least) but with way more features.
Just a suggestion: the Jumpshare Plus link should either be at the top or be renamed "Pricing", because you don't see it right away, and the usual Ctrl+F for "pricing" turns up nothing.
Plus your pricing is nothing to be ashamed of :)
Hi, thank you for the feedback and suggestions. We will make sure to include the pricing in the new homepage we're working on. :)
By the way, we're more about quick sharing than syncing. We will be overhauling our homepage to make that clearer. Here's the app if you're using a Mac (Windows app is coming soon): https://itunes.apple.com/us/app/jumpshare/id889922906
At its core, Dropbox is about "syncing". At Jumpshare, the core is "quick sharing". That makes all the difference: we're able to build our product around the quick sharing aspect. For example, our sharing happens in real time, so you don't have to wait for uploads to finish first. And we offer a slew of built-in tools (capturing screenshots, annotations, recording screencasts, etc.) and features to supercharge your sharing.
Okay, but what happens when Dropbox makes a minor upgrade to their sharing, making it also "quick" or real-time sharing? Already, when uploading a file to Dropbox, it can be shared and partially downloaded before it's finished uploading. Do that with an image or video that was just recorded, or that is being recorded, and that seems like real-time sharing to me.
And doing that with Dropbox means that, after it's been shared, it's also synced, mirrored, and backed up for you.
Nope. I can't take credit for the illustrations, but I might have to get in touch with Nathan, since there is some nattaylor out there who likes to use my Gmail address for all of their online services.
Thanks to all the FFmpeg contributors! Fantastic piece of software.
On a project I was on recently, we started hitting the per-region concurrent transcode limits on Amazon's Elastic Transcoder. [1]
Instead of sharding over pipelines or accounts we set up a pipeline with FFMPEG + Lambda functions and it performed fantastically (within the free tier even).
It was incredibly simple to write the functions, and it has given that project a lot more freedom, with the caveat that any single task you undertake must complete within the timeout window (currently 5 minutes). Having said that, it's also straightforward to split the process into steps and have multiple Lambda jobs, making the flow more of a pipeline.
Did you try simply asking AWS to raise the limit? It even suggests so on your linked page.
In my experience, every limit is immediately relaxed when requested; number of VPCs (I see people do horrible things to work around this all the time! Just ask!), EC2s / region, SES limits (need to send 10 million emails / day? No problem!), API Gateways / account, total ASGs... I believe all of these are there to keep you from shooting yourself in the foot through automation gone wrong or inexperience.
I've seen some crazy complicated architectures where just sending an email or picking up the phone solves the problem within an hour.
I've been surprised/impressed by their quick and painless limit increases. It makes sense to have low default limits so people don't accidentally spin up a thousand instances or send a million emails. It seems the limits are mostly there to protect you from a bug or test in early development costing you a bunch of money.
>I believe all of these are there to keep you from shooting yourself in the foot through automation gone wrong or inexperience.
Also to prevent you from racking up huge bills in case an API key is compromised and the attacker is able to spin up tons of instances for a botnet or something on your dime.
Yes, that's one that you may not be able to change. From memory, they put that one in place to prevent the equivalent of bucket name squatting (since every bucket has a corresponding public domain name).
An interesting thing to look into might be using the (apparently Kepler) NVENC capabilities present on EC2 G2 instances.
For $2.60 an hour, you get 4x Kepler GPUs that can handle ~4x realtime 1080p encodes each (120fps per GPU), or 16x realtime 1080p encodes total (480fps). To convert this into rather odd units, that works out to ~1.37Tpix/$ (1920x1080 x 480fps x 3600s / $2.6). Put this on reserved instances and that number is pushed up to ~2.2Tpix/$.
According to [1], ffmpeg + x264 performance on the most cost effective instances (c3.xlarge) was 20s for a 30s, 960x540 video, or roughly 0.5Mpix at 45fps. That's 83Gpix for $0.21, or 0.399Tpix/$ at spot prices or 0.672Tpix/$ at reserved prices.
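For anyone double-checking the units, the pixels-per-dollar arithmetic above can be reproduced directly. This is just a sanity check on the figures quoted in these comments (not current AWS pricing):

```python
def tpix_per_dollar(width, height, fps, dollars_per_hour):
    """Pixels encoded per dollar spent, in terapixels."""
    pixels_per_hour = width * height * fps * 3600
    return pixels_per_hour / dollars_per_hour / 1e12

# g2 NVENC: 480 fps aggregate of 1080p at $2.60/hour
nvenc = tpix_per_dollar(1920, 1080, 480, 2.60)

# c3.xlarge x264: ~45 fps of 960x540 at ~$0.21/hour
x264 = tpix_per_dollar(960, 540, 45, 0.21)

print(round(nvenc, 2))  # ~1.38
print(round(x264, 2))   # ~0.40
```

So the GPU route comes out roughly 3.4x more cost effective per pixel, before accounting for the quality gap mentioned below.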
Depending on how much you care about your compression quality (NVENC isn't quite as good as x264 veryslow but it's definitely usable, particularly with its "two pass" preset), it might be worth a good look at the GPU encoders.
I work on a project where ET isn't flexible enough (MPEG-DASH), and wondered whether Lambda would make for a good alternative to EC2 + SQS + Scaling Groups.
Could you share your experience with FFMPEG + Lambda? I ran into trouble with this when dealing with large files, especially when some of the files were being pulled from non-S3 sources. Also, what EC2 cores were you using?
I don't know much about video transcoding, but if FFMPEG can utilize streams it's easy to work around the lambda size constraints.
You can process several GBs in the 5 minute window by piping your S3 download stream through your transformation steps, then directly into an S3 upload stream. Nothing ever persists to disk, so your only worry, if anything, is managing your stream buffers so you don't run out of memory.
As long as any single step of your pipeline doesn't exceed the time limit, you can make really nifty pipelines for large file processing by using the S3 upload as "temp space" then an S3 event to automatically trigger the next step of your pipeline.
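The shape of that pattern looks roughly like this. Everything here is a stand-in: `cat` takes the place of the real transcoder command, and the chunk iterable takes the place of the S3 download stream; a real Lambda would wire boto3 streaming bodies into both ends.

```python
import subprocess
import threading

def stream_through(cmd, chunks, block_size=64 * 1024):
    """Pipe an iterable of byte chunks through a subprocess and yield
    its output incrementally -- nothing is ever persisted to disk."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def feed():
        # Feed stdin from a separate thread so a full pipe buffer
        # can't deadlock us against the reader below.
        for chunk in chunks:
            proc.stdin.write(chunk)
        proc.stdin.close()

    threading.Thread(target=feed, daemon=True).start()
    while True:
        block = proc.stdout.read(block_size)
        if not block:
            break
        yield block
    proc.wait()
```

With ffmpeg in place of `cat`, the same shape works, as long as the input and output formats can actually be streamed; not every container can.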
ffmpeg can utilize streams, in both input and output. The trouble comes from different codecs and containers, especially on output. Some formats aren't append-only—the prime example being MP4 + h.264—and so ffmpeg needs to be able to write to a seekable output device, ruling out streaming output in those cases.
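One common workaround, if your players can handle it, is to ask ffmpeg for fragmented MP4, which is append-only and so can be written to a pipe. A sketch (filenames are placeholders):

```shell
# Fragmented MP4 needs no seekable output, so stdout/pipes work:
# frag_keyframe starts a new fragment at each keyframe, and
# empty_moov writes a stream-friendly moov atom up front.
ffmpeg -i input.mkv -c:v libx264 -c:a aac \
       -movflags frag_keyframe+empty_moov -f mp4 pipe:1 > output.mp4
```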
Wow... I would truly appreciate if you could share a bit more about your specific setup. I found myself working on a new project yesterday that I was really really excited about, until I saw the costs to transcode video.
How does doing all this in-house compare price wise (say, per minute), compared to using elastic transcoder?
Edit: The lowest cost I can find is $0.0125-0.015 per minute.
The key point was missed: we were dealing with very short, small videos.
If you are dealing with longer or large videos, it's simply not feasible on Lambda.
As for costs, unfortunately I can't retrieve them, as this project was mid last year and I've since moved on to other clients. They could be calculated with a few short tests, though, I'm sure.
I know this has been a constant question (along the lines of "Should I go Python 2.x or 3.x?")...but I feel the need to ask it again on the occasion of a major point release for ffmpeg...how are things, pragmatically speaking, in terms of libav vs ffmpeg? I had thought that libav was the new way a few years ago and have more or less been using it on OS X...but now I see that Debian recently switched back to ffmpeg [1]...What are the use cases for sticking with libav these days? I'm almost sure I started using libav because it was promoted as a concerted effort to create a better API. But by some accounts, ffmpeg has been incorporating libav's changes...and I honestly don't use libav or ffmpeg enough, directly, to really benefit from a better API. And installing both, I believe, has led to a few subtle errors when using libraries that wrap around either.
So, any reason for the casual graphics developer to install libav?
> libav [...] promoted as a concerted effort to create a better API
True, but that was biased and unfair. Some developers leveraged their Debian influence to get Debian to switch from ffmpeg to libav, but the technical merits were debatable. In the end, they came back to ffmpeg.
This is mostly a political issue. Software-wise AFAIK ffmpeg has been integrating many changes from libav but the opposite is not true, making IMHO ffmpeg the right choice.
The GitHub link is on the mpv wiki. mpv is a descendant of MPlayer and mplayer2 (the latter being mostly dead). IMHO mpv is the best media player for any OS (lightweight, snappy, reads everything, better options and CLI than mplayer*, etc.).
Let us not forget the reasons for libav. The ffmpeg development process was having a lot of problems due to very controversial decisions that its lead dev was taking. The libav fork has resulted in a restructuring of the ffmpeg development workflow. In this regard, libav is about as important as egcs was to gcc.
> very controversial decisions that its lead dev was taking
I have seen this claim frequently, but have never seen an actual list of such (and that link doesn't supply one). I get they didn't like the guy, but what were the terrible things he was supposed to have done?
It's interesting to note the parallel, but there are a few differences between egcs vs gcc and libav vs ffmpeg.
Perhaps the most important is that the egcs fork announcement [1] was very diplomatically worded, intended to put an end to any bad feelings on either side, and recognized that the FSF was completely within its rights to be conservative when it came to developing gcc. Another difference is that egcs really took off and eventually became the official gcc; libav doesn't look like it's doing the same.
Software-wise, ffmpeg is the more feature complete solution, obviously. But if you want a morally and ethically okay solution, with a cleaner codebase (but also NIH syndrome), libav might be the better solution.
The same people who use free software for moral and ethical reasons would also choose libav.
The way the ffmpeg maintainer behaved – as malevolent dictator – in contrast to the more open development approach of libav, is a pretty big issue, don’t you think?
IMHO the hostile takeover of the ffmpeg project by the libav guys (Fabrice Bellard had to wield trademark to force them to rename the fork) and intense FUD campaign were much bigger issues.
I’m mad that a person who bought a trademark for a project then decided to act against the interests of the majority of the project's participants.
Let me get this straight: you're mad because Fabrice Bellard, the person who started ffmpeg, asserted his trademark against the libav folks because their fork initially used the name ffmpeg?
Seeing as you're the maintainer of QuasselDroid: how would you like it if a group of contributors wanted to take the project in a different direction than you, so they forked it, called their fork QuasselDroid, and then said your branch was immoral, as you have throughout this page? I doubt you would enjoy that, and if you owned the QuasselDroid trademark, I'm sure you would use it too.
With all due respect, you are not answering the question that the parent poster asked. If someone created a hostile fork of QuasselDroid and made decisions that you disagreed with, I doubt you would be OK with them using the same name for the project. The right to fork is fundamental in open source, but there is no right to present someone else's work as your own, or to confuse the general public about which version of a software package they are downloading. People should be able to decide for themselves which software to download, not be fooled by someone passing off something different as the same thing. That's why trademarks exist. Enforcing trademarks is not bad or wrong.
Trademarks can be held by an organization, not just by one person. This is how Apache software works, for example. In that case, there are bylaws in place to ensure that the interests of different people are represented, decisions can be made fairly, and toxic people can be prevented from killing the project.
In contrast, projects such as Python have a "benevolent dictator" model, where one person has the final say about the direction of development. There is nothing unethical about a BDFL model in open source; it's just a choice that a community can make.
You seem to be deliberately confusing yourself about the distinction between forking, which is always allowed, and representing your fork as the original project, which is never allowed. If you are still confused, think about it this way: would you want someone to attach a bunch of malware to your project and redistribute it under its original name, as if it were your version? You can't prevent this without trademark law.
If you're going to fork, you have to accept that you have exactly the same burden to keep up to date on security patches, at the very least. As a number of parties found, libav wasn't doing that, and regardless of any moral or ethical argument (for which I've mostly seen accusations and no actual evidence... I'm largely taking it at face value that there were issues), security trumps pretty much everything.
Sorry, what's wrong with using free software for moral and ethical reasons? I often do so because I don't feel like paying nor stealing commercial software. However, being so dependent on OSS has made me appreciate it and want to support it in what ways I can -- call it a moral imperative. Besides contributing bug reports and patches, I sometimes like using new libraries (or edge versions of existing software) if the creator, working freely, is trying to move the ball forward...having users who can provide feedback is a sort of moral support.
In the case of libav...as an admitted casual, I'm thankful that ffmpeg exists, even if its API confuses me...I'm grateful enough to think that the status quo is just fine, whether I can rationalize it or not. However, I do find it admirable that some people (ostensibly) wanted to make what they think were forward-thinking changes, including doing the kind of cleanup that is generally under-appreciated and under-prioritized in all software.
So if they're promising a transparent, interoperable interface...sure, I'll give it a try, and it will be for "moral" reasons in the sense of moral support. I've done the same with MariaDB (over MySQL) and haven't regretted it.
What's wrong with it is that FFmpeg and Libav are on equal footing in that regard; so using that argument in favor of one over the other is... nonsensical.
Nothing is wrong with that, but in a situation where one malevolent dictator acted against the will of every single other member of the development team, and forced them to fork, it’s hard to argue that his version is the moral one.
That's a nice one:
- Libav
  - Pretends FFmpeg doesn't exist, though sometimes merges individual patches.
- FFmpeg
  - Pretends Libav doesn't exist, but merges absolutely everything it does. Sometimes with consequences; for example, there are now 2 ProRes decoders and 3 ProRes encoders.
FFmpeg on Linux supports QSV either through the h264_qsv encoder or through some soon-to-be-merged va-api changes. On Mac I think you need to use the VideoToolbox API to access the GPU codec, and there is support for this in FFmpeg as well, but I haven't used it myself.
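For the Linux route, the QSV encoder is selected like any other codec. A minimal sketch (filenames and bitrate are placeholders, and it assumes an ffmpeg build configured with `--enable-libmfx`):

```shell
# Hardware H.264 encode via Intel Quick Sync
ffmpeg -i input.mp4 -c:v h264_qsv -b:v 4M output.mp4
```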
I use ffmpeg for housekeeping stuff like converting videos from one format to another and cutting clips, mostly from the command line. Can some advanced users share if there is anything to look forward to with this release? Better performance? Some convenience features? Thank you in advance.
From the list given by @imaginenore the major one for me is CineformHD support. We work on a lot of VR stuff and there are quite some GoPro users out there that generate material in this codec. Not having to transcode to an intermediate is nice. Also hardware acceleration is always good to have.
FYI, the phrase "quite some users" is not uncommon among (continental european?) non-native speakers of English, but it's not correct.
"In the British National Corpus, for example, most examples of quite some are "quite some time", others are "quite some distance". If you replace "quite some" with "a considerable", the meaning should be clear.
If the sentence does not make sense when you do that, it's likely that "quite some" is not being used properly."
I love FFmpeg. I first used it years ago to help with uploading 700 audio files to YouTube. Of course, YouTube is video only, so I used ffmpeg to re-encode the audio with an image slideshow as video and then uploaded the "videos" using some web scraping with Perl.
More recently, I have been downloading programming framework tutorials (Android development, Django, Angular, etc.) from YouTube to my Plex media server. I then go back with ffmpeg and re-encode the vids to play back 50% faster. Now I can blast through tutorials on my TV while I eat lunch (I work from home mostly).
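For anyone curious, the speed-up itself is just a pair of filters. A sketch (filenames are placeholders; note that atempo accepts factors of 0.5-2.0 per instance, so chain it for more extreme speeds):

```shell
# Play back 1.5x faster: shrink video timestamps, speed up audio.
ffmpeg -i tutorial.mp4 \
       -filter:v "setpts=PTS/1.5" \
       -filter:a "atempo=1.5" \
       sped_up.mp4
```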
Yes, those are players. But I don't want to sit in front of my computer or cell phone while on lunch break, I want to sit on the couch in front of a TV.
Natively, neither Plex nor Roku allows videos to be sped up, so they have to be re-encoded at a different speed.
If ffplay supported hardware decoding, it'd be the perfect player; you could not make a more minimal one. It does not, and the feature doesn't seem to be high on the priority list; last position, perhaps.
Official docs say that it's competitive at 128kbps, but these [0] listening tests from Kamendo2 (who's very experienced in ABX listening tests of lossy codecs) suggest fdk-aac still has the edge, as well as handling VBR and the HE-AAC / HE-AACv2 profiles properly.
I haven't done or seen any tests, but I suppose if you require VBR and/or HE-AAC support, go for libfdk, otherwise for bitrates ~128k or higher, use the internal AAC encoder.
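In command form, the two choices look like this (bitrates and filenames are illustrative; libfdk_aac requires a build configured with `--enable-libfdk-aac`, which makes the resulting binary non-redistributable for licensing reasons):

```shell
# Native encoder at a fixed 128k
ffmpeg -i input.wav -c:a aac -b:a 128k native.m4a

# fdk-aac in VBR mode (quality levels 1-5, higher = better)
ffmpeg -i input.wav -c:a libfdk_aac -vbr 4 fdk.m4a
```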
On a related note, how do those options compare to Vorbis and Opus, technically and legally? Is there a compelling reason to use AAC over those choices?
Here's the fix if anyone is interested: https://github.com/FFmpeg/FFmpeg/commit/00c73c475e3d2d7049ee...