Make timelapses easily using FFmpeg
282 points by indiantinker 5 months ago | hide | past | favorite | 76 comments
I make a lot of timelapses and have tried many ways to make them, using Python etc. Today I found the easiest one, using FFmpeg :)

ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' -c:v libx264 -r 30 -pix_fmt yuv420p timelapse.mp4




Where FFMPEG really shines is stabilising video.

Unfortunately, not all builds include "vidstab".

ffmpeg -i "$1" -vf vidstabdetect=shakiness=5:show=1 dummy.avi

ffmpeg -i "$1" -vf "yadif,format=yuv420p,vidstabtransform=zoom=2:optzoom=0:crop=black" -c:v libx264 -b:a 32k stabilized264.mp4

Yesterweek's shaky video shot from a kayak: https://youtu.be/4pM0VeH4NE0?si=H2qTJfcvis3QmFlj


If you really wish to install all the available options, you can run:

brew install homebrew-ffmpeg/ffmpeg/ffmpeg $(brew options homebrew-ffmpeg/ffmpeg/ffmpeg --compact)


I took some side photos of my pregnant wife, hoping to make a time-lapse, but I never got around to it. The photos aren’t perfectly aligned since her position changes a bit in each one. Can ffmpeg fix that?


Yes. That is how you make a lively picture show.

Sometimes it tries to align unrelated objects, but that is just funny.


Avi? What is this 2002?


Dummy.avi is just a map of the perceived jerkiness (that's what show=1 draws); it's a throwaway file otherwise.


How about

    $ ffmpeg-english "capture video from the camera every 1 second and write it to jpg files"
    $ ffmpeg-english "take all of the images ending with .jpg in this directory and make a 30fps timelapse of it"

Sound awesome? Here's the code for ffmpeg-english:

    #!/usr/bin/env python3

    import openai
    import sys
    import os
    import time

    # The OpenAI client reads the OPENAI_API_KEY environment variable automatically
    client = openai.OpenAI()

    def get_ffmpeg_command(task_description):
        # Call the OpenAI API with the task description
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role":"system", "content":"You are an expert in FFmpeg commands. Given the following English description of a task, you respond with the correct FFmpeg command. Do not include any other information in your response except for the command only."},
                {"role":"user", "content":task_description},
            ],
            max_tokens=150,
            temperature=0.5
        )
        command = response.choices[0].message.content.strip()
        return command

    def main():
        if len(sys.argv) < 2:
            print("Usage: ffmpeg-english <task_description>", file=sys.stderr)
            sys.exit(1)

        task_description = " ".join(sys.argv[1:])
        ffmpeg_command = get_ffmpeg_command(task_description)

        # some basic guardrails
        assert(ffmpeg_command.startswith("ffmpeg"))
        assert(";" not in ffmpeg_command)
        assert("|" not in ffmpeg_command)

        print(f"Executing command: {ffmpeg_command} in 2 seconds (^C to cancel)")
        time.sleep(2)
        os.system(ffmpeg_command)

    if __name__ == "__main__":
        main()


I love when my commands are not reproducible even in the same shell session.


Honestly, ffmpeg is awesome enough being able to do what it can with a one-liner.

And all without python dependency, or even internet access.


I took photos every few minutes for about a year. I wrote some custom code to pick out a set of frames from several consecutive days, but with the sun at the same angle in the sky. I then blended these frames together. This makes a ghostly look, without the bright flashes of cloud passing or weather changing. As I moved forward in time, I selected frames from the next angle in the sky, so it looks like only a day passes. You’ll see the shadows move as if a day passes, when really a year goes by.

https://youtu.be/RAsJE5ddt_U


Very cool! We tried to do the same thing with a series of photos taken along the path of the solar eclipse, but didn't have enough density or consistency to make it compelling.


This is awesome. What kind of hardware are you using? If you don't mind.


Very cool.


I use a Raspberry Pi Zero with a Pi Camera and ffmpeg to make timelapses of plants growing from seedlings each season when we plant tomatoes/potatoes/etc. It's a great way to show the kids how different plants grow from seed, and such a cheap/easy project.

Bash script running on a cron job every hour with:

    DATE=$(date +"%Y%m%d%H%M")
    raspistill -o /home/pi/timelapse/$DATE.jpg -awb off -awbg 1.8,1.6
Then another bash script running once a week:

    DATE=$(date +"%Y%m%d%H%M")
    ffmpeg -framerate 12 -pattern_type glob -i '*.jpg' -c:v libx264 -pix_fmt yuv420p -vf 'scale=1920:-2,crop=1920:1080:0:(in_h-1000)' "timelapse-$DATE.mp4"
    scp "timelapse-$DATE.mp4" username@my.server:/home/user/
(Edited/simplified, I do a few other things in the scripts like compile the timelapse with pre-made timelapses and share via nginx on the Pi itself)


Been making timelapses with ffmpeg since forever, such a great tool. I try to always have a cam pointed at the sky and upload a snapshot every minute. A telegram command triggers the creation of a timelapse with a similar cli command like the OP. https://www.youtube.com/watch?v=5GvaFBzOu2c


Nice! What equipment do you use to capture? And does it push to some server... running RTSP?


I use an old Android phone with a long-deprecated app called MobileWebCam (still available on certain sites). I removed the battery and connected the charger directly. It uploads a picture every minute, and a Node.js backend creates the timelapse with ffmpeg. Currently experimenting with a TP-Link C520 cam and a Raspberry Pi. You can point the camera to different positions using ONVIF, use ffmpeg to grab the stream and take a snapshot, then process this again on the server. Downside is the wide angle / fish eye lens and occasionally a corrupted stream snapshot.


Thank you, was looking for a cheap webcam security set up using the great near-wasted cameras on older android phones.


If you want an all sky solution, zwoastro.com ships an all sky lens with many of their small sensor planetary cameras. There's also a selection of software to handle the photos, making time-lapse, uploading etc. It came in really useful for seeing the Aurora outburst. Here's a short write up I did for my local club recently: https://m.facebook.com/story.php?id=101640086556073&story_fb...

I'm also working on a new one that uses an ASI533 Pro and a 4mm lens from a mirrorless camera to get better quality images.


Nice, thanks! I tried using ffmpeg for a minor video editing task I had a few months ago - just a cut, crop, rescale, and volume adjust. I've tried a few of the mainstream GUI video editing tools, and IMO, they all have incomprehensible UIs, are way too bloated, and usually far too expensive for what I'm trying to do. FFmpeg may not be dead simple, but I find it much easier to skim the command line flag list to figure out how to do what I want. And once I do, I can save down a handful of useful sets of flags and refer to them next time. Cheers to ffmpeg, one of the kings of FOSS! If you ever feel the need to do any kind of video conversion or editing, definitely try to do it in ffmpeg first.


I recently found LosslessCut (https://github.com/mifi/lossless-cut) that is basically a GUI for ffmpeg, you can make simple edits without re-encoding the stream.
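The same kind of lossless trim is a one-liner in plain ffmpeg too; a sketch with placeholder filenames (with -c copy the cut snaps to the nearest keyframe rather than the exact timestamp):

```shell
# Copy 30 seconds starting at 1:00 without re-encoding: fast and lossless,
# but the start point snaps to the nearest keyframe
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy cut.mp4
```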


Indeed. I find it baffling how hard it is to just make lightweight edits to videos on Windows. At a bare minimum I would like to clip a video, or crop it, or change audio tracks. My cheat sheet of ffmpeg commands remains the easiest way for me to do this.


There are a lot of free apps to make lightweight edits to videos on windows. One example is Avidemux. I have high-end video editors but Avidemux comes in handy more often than it should.

http://fixounet.free.fr/avidemux/

Another is ShutterEncoder:

https://www.shutterencoder.com/en/


Davinci Resolve has a free (as in beer) version that is quite capable and easy to use, even as someone who'd only used iMovie before. The only problem is that "how to do X in Davinci Resolve" has been taken over by slop.


Davinci Resolve is actually the first thing that came to mind on the subject of, okay it's free, that's nice, but I can't for the life of me figure out how to do anything in it. I suppose it's not necessarily their fault that the search results for how to do basic things are garbage, but I guess an advantage of CLI apps is how-to results for them don't seem to attract nearly as much SEOified clickbait.


I shoot RAW from an older Canon 5D which Resolve does not read natively. So there's a bit of a conversion step going from CR2. My typical workflow is to use Adobe RAW to process the images, then import the RAW directly to AE to render out with whatever repositioning or cropping. Let's not forget LR Timelapse[0] as part of the workflow too.

[0] https://lrtimelapse.com/


I was reading about how to create timelapses on a canon dslr and someone pointed out that each frame you take adds to your shutter count and consumes the lifetime of your shutter. Do you find that a problem? For example a frame every 2 seconds for 8 hours is 14,400 actuations of the shutter.


I worry about my shutter count like I worry about my HN points... I don't. If the shutter were to fail from "too many" clicks, it can be replaced. It's not the end of the world. There are people that will point out that you can die every time you drive your car, but does that prevent you from driving? I easily have over 150k actuations (I don't have the right cable/adapters available at the moment, USB-C to USB-mini, to confirm the actual number). Even if the shutter were to fail, I'd just have it replaced for less than the price of a new body.


To be fair, pulling out a professional video editor for small changes is like learning emacs to edit some config files. You don't need 99% of the features.

Also, as an FYI to everyone: FFmpeg does support NVIDIA GPU acceleration, but it might not be enabled in your build, so it's worth checking if you use it a lot.


Probably true, but ffmpeg seems to have a ton of features too. It seems to me that CLI apps are inherently better at not distracting you with things you don't need. A CLI flag that you don't use is invisible outside of the man pages, not so for a menu or toolbar of a zillion options with names and icons you don't understand.


Blender has a very well done video editor; it's probably my favorite FOSS video editor so far. It also uses ffmpeg under the hood.

Worth a try!


I found https://github.com/mifi/editly to be an intuitive frontend for this type of task - I used it to create a montage of several clips and was able to easily adjust parameters around timestamps and such to get the montage perfect


ffmpeg is such a great tool!

Be aware that -pattern_type glob is not supported on Windows, though, iirc. A workaround is to name your jpegs with consecutive numbers (not necessarily starting at 0) and use a pattern with a counter placeholder in it instead.


Also be aware of this infamous “bug”

> The sortedness of glob.glob's output is platform-dependent.

https://bugs.python.org/issue33275#msg315254

https://pubs.acs.org/doi/10.1021/acs.orglett.9b03216
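The pitfall is easy to reproduce without ffmpeg at all. A small Python sketch (hypothetical file names) shows why unpadded frame numbers come out in the wrong order even after an explicit sorted():

```python
import glob
import os
import tempfile

# Create frames whose numbers are not zero-padded
d = tempfile.mkdtemp()
for i in (1, 2, 10):
    open(os.path.join(d, f"frame-{i}.jpg"), "w").close()

# Even an explicit sort is lexicographic: frame-10 lands before frame-2
names = sorted(os.path.basename(p) for p in glob.glob(os.path.join(d, "*.jpg")))
print(names)  # ['frame-1.jpg', 'frame-10.jpg', 'frame-2.jpg']

# Zero-padding the counter (frame-0001.jpg, ...) makes lexicographic and
# numeric order agree, which is why ffmpeg's %04d pattern is safer than a glob
```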


At least in this case you'd find out pretty quickly!


Or you can use '-f concat' and specify a text file with the explicit order of image files to be used as input - no need to hope and pray that the wildcard will pick the files in the correct order.

see https://trac.ffmpeg.org/wiki/Concatenate or https://shotstack.io/learn/use-ffmpeg-to-concatenate-video/
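For an image sequence, the list file is plain text with one file directive per image and an optional duration (filenames here are hypothetical):

```
# frames.txt: images play in exactly this order, each held for ~1/30 s
file 'frame-c.jpg'
duration 0.0333
file 'frame-a.jpg'
duration 0.0333
file 'frame-b.jpg'
duration 0.0333
```

Then something like `ffmpeg -f concat -safe 0 -i frames.txt -c:v libx264 -pix_fmt yuv420p out.mp4` (the `-safe 0` is only needed if the list uses absolute paths).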


Obligatory humor: https://youtu.be/9kaIXkImCAM

Note of support: ffmpeg handled many of the transcoding needs of my former employer back in 2007, being a "friendly" tool to the team. Yes, it had (and has) issues, but being open source gave us a lifeline: we could fix our own stuff and build up our white-label live-streaming and video-watching service.


I use -pattern_type glob on Windows without issue. Not sure how long I've been using it, at least a year, possibly two. This is what I use:

    ffmpeg -framerate 30 -pattern_type glob -i '*.jpg' -c:v libx264 -pix_fmt yuv420p out.mp4


wsl --cd=%cd% ffmpeg -framerate 30 -pattern_type glob -i '*.JPG' -c:v libx264 -r 30 -pix_fmt yuv420p timelapse.mp4


Hmm, I wonder why `-pattern_type glob` doesn't work on Windows. Perhaps it is something that could easily be programmed into the source code?


If I were to guess, it might be using the GNU libc (or compatible) glob functionality under the hood.


Correct, most annoying bug there is.


Just dropping this "random" ffmpeg command here, as a reminder that you can turn video files into actual transparent webm using

    ffmpeg -t 5 -i example.mp4 -c:v libvpx-vp9 -b:v 2M -filter_complex "[0:v]chromakey=0x00ff00:0.3:0.2[out]" -map "[out]" -map 0:a stop.webm
where `example.mp4` is a video with the typical green background.

Maybe someone needs it :)


I recently wrote a blog post about doing this to create timelapses of Rimworld colonies. I didn’t realize -pattern_type glob didn’t work on windows though… I’ll have to update it.

Also, an assumption in your command is that all the images are the same aspect ratio. If they're not, you can use this to dynamically pad them out with black bars on either side:

    -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame"

https://mpeyton.com/posts/rimworld_timelapse_ffmpeg/


Seems like there are a lot of experts here with FFmpeg, so perhaps one of you can help me.

I have a bunch of old videos that use MTS as the file extension. I would like to convert these to an MP4 (or MKV) file but I would like to do it while keeping the metadata (date taken, date created, etc.). Is there a way to do this?

The scripts I've seen change the file extension by just changing the container but they never keep the metadata which is a huge issue for me.
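One approach worth trying (a sketch; filenames are placeholders) is to remux with stream copy and explicitly ask ffmpeg to carry the global metadata over. Be aware that AVCHD cameras sometimes store the recording date in the filesystem or the H.264 stream rather than in container tags, so check the result with ffprobe:

```shell
# Remux MTS to MP4 without re-encoding, copying the global container metadata;
# use_metadata_tags keeps nonstandard tags that MP4 would otherwise drop
ffmpeg -i input.MTS -map 0 -c copy -map_metadata 0 -movflags use_metadata_tags output.mp4
```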


ffmpeg works really well for this. I have also been making time-lapse movies for over a decade, starting with just QuickTime Pro 7 to combine the images into movies, later using various other tools.

Now I also use ffmpeg, and I wanted to keep track of exactly how each movie is put together, so I looked for a way to script each movie. I use ffmpeg-python (not a lot of recent updates, but it just works) to steer ffmpeg, together with my own time-lapse-specific Python package https://pypi.org/project/time-lapse/ to assemble the movies. For each movie I have a single script which describes which frames are part of it and how they are combined: https://github.com/153957/time-lapse-scripts

I am quite happy with the setup; it makes it easy to recreate movies from source in the future, at higher resolutions or with different codecs.


Somewhat unrelated, but a beautiful tool for extracting screenshots from video: MoviePrint

https://www.movieprint.org/ & https://github.com/fakob/MoviePrint_v004/


I use VLC for that.

I press shift-s to take a snapshot, not sure if that's a default hotkey or I set it to that a long time ago.

In the preferences you choose where it will save to and what format.


Does this add any interframe blur, or are you controlling that based on exposure time? Very important for quality timelapses.


I looked into using ffmpeg to “compress” video podcasts by lowering the framerate a lot, but it didn’t seem to do as much as I thought (about 50% size reduction). The theory was that a video podcast is mostly talking heads with an occasional chart on the screen, so you really only need a frame every second, or five seconds.


AV1 excels at this type of video. It's why so many anime encoders use it.

Try encoding the video to AV1 with OPUS audio. You'll get ridiculous gainz!

My command is:

    $ffmpegPath -i $_.FullName -r 23.976 -vf scale=1280:720 -c:v libsvtav1 -pix_fmt yuv420p10le -crf 30 -preset 10 -g 300 -c:a libopus -b:a 96k -ac 2 -c:s copy -map 0 $destPath


Thanks I will give it a try.


Reducing framerate doesn't help much when there isn't a lot changing between frames. Here are some better optimizations:

Noise reduction, so you compress less useless noise: -vf hqdn3d

Turn up the Constant Rate Factor. This will make better visual tradeoffs than decreasing frame rate. 23 is a good starting point for h264, but keep increasing it until you don't like how the content looks: -crf 23

Throw more CPU at the problem: -preset veryslow
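Putting those flags together might look like this (a sketch; the filename is a placeholder and the CRF value is something to tune by eye):

```shell
# Denoise first so we don't spend bits on noise, let CRF pick the bitrate,
# spend CPU on the encode, and pass the audio through untouched
ffmpeg -i podcast.mp4 -vf hqdn3d -c:v libx264 -crf 28 -preset veryslow -c:a copy podcast-small.mp4
```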


it can also read frames from stdin, for example:

    cat *.jpg | ffmpeg -r 90 -f image2pipe -i - -codec:v libx264 -preset fast -crf 23 -vf format=yuv420p video.mp4


FFMpeg really helped me out not too long ago. I tried KDenLive and ShotCut to edit some videos, which I rarely do, only to be overwhelmed and then discover that ffmpeg command can do everything from timelapses to trimming and brightness/contrast adjustments. And you can "preview" the result too, using ffplay.


Is there a variant that encodes ProRes lossless?

I usually open them up in a new project just to create a lossless input video to work with in After Effects, and use that (if I use image sequence directly, DaVinci Resolve acts in weird ways).

ffmpeg might ease that AE part.


FWIW, ProRes isn’t a lossless codec (tho it should be perceptually lossless in most cases).

Ffmpeg can encode into ProRes, but it’s technically an unofficial implementation.

What issues do you run into with image sequences?


Right, calling it technically lossless is wrong, but 422 HQ gives impressive results, so we can probably safely say that it is "practically" lossless.

In DaVinci Resolve, many tools go awry with image sequences: for example, temporal noise reduction simply doesn't work on a compound clip built from image sequences, and I also remember having problems with caching performance. I have a few very strange/buggy problems with Resolve, though I love color grading with it regardless, so I want to avoid the buggy sides (it's one of the buggiest pieces of software I've ever used) while using its upsides (grading and some OpenFX filters).

ffmpeg would help me with that part of the process.


ffmpeg has two ProRes encoders, prores and prores_ks.
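A minimal sketch using prores_ks at profile 3 (the 422 HQ profile), assuming a numbered image sequence with hypothetical filenames:

```shell
# Encode an image sequence to ProRes 422 HQ in 10-bit 4:2:2
ffmpeg -framerate 25 -i frame-%04d.jpg -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le out.mov
```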


You can also select the input images with a C printf-style pattern instead of a glob.

    ffmpeg -framerate 30 -i image-%05d.jpg -c:v libx265 -crf 35 -pix_fmt yuv420p timelapse.mp4
This will match images of the form image-00000.jpg, image-00001.jpg, ... image-99999.jpg.


I wanted to print out one of those flipbooks I had as a kid, where the frames are printed and as the pages are flipped it looks like a movie.

Is that something ffmpeg could do?

Is there any good resource for recipes like these?


try

    ffmpeg -ss 00:01:00 -i input.avi -t 30 -vf "fps=1,scale=320:-1:flags=lanczos" output_%04d.jpg
00:01:00 is where to start the flip book, -t 30 means thirty seconds' worth, and fps=1 is how many frames per second to extract. This will produce output_0001.jpg, output_0002.jpg, ... from the AVI, which you can then print.
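To get from those frames to printable pages, ffmpeg's tile filter can lay them out on sheets (a sketch, assuming frames were extracted as output_0001.jpg and so on, as above):

```shell
# Arrange the frames 4 wide by 6 tall, one PNG sheet per 24 frames
ffmpeg -i output_%04d.jpg -vf tile=4x6 sheet_%02d.png
```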


What does -framerate do here? The manual is not turning up anything, other than it's similar to -r for certain input formats.


It sets the framerate for the input video. The default for the image2 demuxer is 25.
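For example (hypothetical filenames), you can read images slowly while still writing a smooth standard-rate file:

```shell
# Each source image is shown for half a second; -r 30 duplicates frames
# so the output stays a conventional 30 fps
ffmpeg -framerate 2 -i img%04d.jpg -r 30 -c:v libx264 -pix_fmt yuv420p slideshow.mp4
```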


Oh I see, I had assumed that they'd want a much slower speed, but they actually show 30 different input images per second in the 30 fps video.


Maybe someone should collect all those commands and create a website or a gist that lists them, with search. They are gems!


I'd recommend DaVinci Resolve for making timelapses. It performs really well and lets you scrub through before rendering anything, so you can clip just the part that you need. Plus you get the benefit of high export quality, which can be fiddly with ffmpeg.


Resolve is insanely heavyweight for such a simple task. Those video editor UIs are incredibly hard to understand for people not using them every day.


I would argue that ffmpeg has the hardest to understand interface in the video editing space (unless you can find a command someone else came up with that does exactly what you want)


why is high export quality fiddly with ffmpeg?


there are a bunch of flags to get exactly right in order to get a high quality image out. There are wrappers to do this more easily for you; ffmpeg is a low-level tool.


back when computers were hard, tips like this were gold. But these days, for a well-trodden/documented thing like ffmpeg, asking ChatGPT to write the ffmpeg command you want works really well, e.g. "give me ffmpeg to make a video from a series of jpegs", and iterate from there.


Please don't let this line of thinking put you (the reader) off sharing tips. Here we now have a thread containing other information we may not have thought to ask anyone/thing about, discussion, history, etc.


besides, chatgpt can only do that because of tips like this...


teach to fish vs giving of fish, I suppose. I did not mean to discourage or stifle discussion, but point out that there's a tool that makes it much easier to get the right incantation of ffmpeg.


That's awesome! FFmpeg is super powerful. I've been using it for video edits too. It’s amazing how such a simple command can handle complex tasks. Thanks for sharing this tip!



