Nor is 120. Even with a 240Hz display, capturing and editing should use 3:2 pulldown for best end user playback. https://en.m.wikipedia.org/wiki/Three-two_pull_down



Check your math, son.


My point is none of these refresh rates are multiples of 23.976.


So set your monitor to 119.88 instead of 120.

There's a reason why monitors give you both x/1000 and x/1001 framerates.


I didn't know this was possible, but in such a case why not set a 60Hz display to 48Hz, 59.94Hz, etc.?


48Hz is supported on few monitors (though it can be hit by G-Sync/FreeSync/VRR); 24Hz is sometimes supported (often seen on TVs).

And you can set it to 59.94, nobody is stopping you.

This entire comment chain started with, essentially, what is the LCM of 60 and 24 (or 59.94 and 23.976), and it's 120 (or 119.88).
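To make the arithmetic explicit (a quick sketch using Python's exact fractions, with the integer case alongside for comparison):

    from fractions import Fraction
    from math import lcm

    ntsc_display = Fraction(60000, 1001)   # "59.94" Hz
    ntsc_film    = Fraction(24000, 1001)   # "23.976" fps

    # LCM of two rationals with the same denominator: lcm of numerators over it
    common = Fraction(lcm(ntsc_display.numerator, ntsc_film.numerator),
                      ntsc_display.denominator)
    print(common, float(common))   # 120000/1001, ~119.88 Hz
    print(lcm(60, 24))             # 120 for the integer rates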


I think your point is that 1 in 1000 frames would need to be 6:5 at 120Hz?

I'd be more concerned with the automatic black frame insertion most LCDs do to increase contrast.


Theoretically maybe, but 3:2 pulldown is used for playing 23.976 fps video at 29.97Hz. Since this is HN, maybe someone with more knowledge about how video editors and modern TVs typically handle this can jump in here. Regardless, I think this would actually have more impact on the end user viewing experience than on the job of video editing. The time between frames is tremendous from the standpoint of a video editor, and editing is usually (traditionally) done by feel: press a button when the cut should happen, mark it, then arrange the timeline accordingly. Lag aside, the frame rate and which frame is actually on the screen at that time matter much less than whether the software knows which frame should be on the screen at that time. Hopefully that makes sense. For this reason, resolution and color accuracy will still take priority when it comes to display hardware.


I worked on display drivers and TCONs, but mostly for mobile/laptop rather than TVs/monitors. I'd be fairly shocked to see the defects you're describing coming directly from within a device, but going through multiple translations (PCIe > eDP > TB/DP/HDMI...), especially if they're not well tested or are badly negotiated, is certainly a possibility. I wouldn't trust most external connections or monitors for video editing unless they're specifically tested.

Note that 1/1000 is a glitch every ~40 seconds, so it's quite visible to an "eagle eye". I'll ask.
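Back-of-the-envelope for that figure, assuming one repeated/dropped frame per 1000 content frames:

    # One hiccup per ~1000 content frames of 24000/1001 fps material:
    frames_between_glitches = 1000
    fps = 24000 / 1001                        # ~23.976
    print(frames_between_glitches / fps)      # ~41.7 seconds between glitches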


The answer from a Pro was genlock, so you match the 23.976. "It doesn't matter if you drop a frame every once in a while, you're going to see it a dozen times... as long as it's not the same dropped frame!"


The worst part of incorrect refresh rates for me is on panning footage, where you get those janky, blocky tears in the image.

> The time between frames is tremendous from the standpoint of a video editor,

This sounds like something I've heard from people with a head full of fun stuff talking about the space between the notes. There have been times where that absolutely makes sense, but I'm at a loss on your time between frames.


> The worst part of incorrect refresh rates for me is on panning footage, where you get those janky, blocky tears in the image.

That sounds a lot more like rolling shutter artifacts than 3:2 pulldown. What kind of camera are you using? Are you shooting with a CMOS sensor?

https://en.m.wikipedia.org/wiki/Rolling_shutter

> This sounds like something I've heard from people with a head full of fun stuff talking about the space between the notes. There have been times where that absolutely makes sense, but I'm at a loss on your time between frames.

Haha, fair enough. If you ever feel like diving in yourself, I passionately recommend In the Blink of an Eye by Walter Murch.

https://en.m.wikipedia.org/wiki/In_the_Blink_of_an_Eye_(Murc...


It has nothing to do with 3:2 pulldown. It is all about refresh rates of the monitor. I've shot for years on global shutter (specifically Sony F55), so it absolutely 100% was not a rolling shutter issue either. The same footage can be viewed on another monitor and the tearing issue is not present.

Edit to match your edit: "The book suggests editors prioritize emotion over the pure technicalities of editing."

This totally depends on the content and level of production. I've edited content from properly staffed productions with script notes, circle takes, and all that stuff. It's always fun to stack up the various takes to see how the director feels about the takes from the day of the shoot and to see them in edited context. It's also fun to see the actor's variations from take to take.

On shoots with barely enough crew so the camera op is also the boom op, it's basically all feel from the editor.


> The same footage can be viewed on another monitor and the tearing issue is not present.

This is what I was hoping someone would chime in about. I have never looked into whether it would be handled differently, but I would not trade away a higher-resolution display for it regardless. Maybe it could potentially influence where I cut in certain rare situations, but that sounds unlikely.


Basing edits on how footage looks on a monitor with an incompatible refresh rate just sounds like one of those problems that strikes me at my core, especially when someone acknowledges it but does it anyway. Does it matter in the end? Probably not, but it still goes against everything. It’s one of those things of seeing people “get away” with things in life blissfully unaware while someone who is well versed and well studied can’t catch a break.


I hope you get sleep at night. When I worked as a video editor years ago, I unfortunately had a boss who I needed to please and this kind of rabbit hole obsession would have added a significant barrier to doing so. More resolution, on the other hand, made me straightforwardly much more productive.


This doesn’t make any sense. Why would you want to use 3:2 pulldown unless your display is interlaced, which AFAIK will never be the case for any modern display?

And even if you did use it, it doesn’t do anything to help with the extra 1000/1001 factor, so what is the point?


3:2 pulldown works for converting 24fps to 60fps. It doesn’t matter whether the target 60 is fields or frames.


Yes, it does. 3:2 pulldown produces interlaced 60 fields/s. On a digital display, it must be deinterlaced, and the only "correct" way to do that is to remove the pulldown, producing 24 fps. If you just deinterlace it as if it were originally 60i, you'll just end up with something similar to 24p converted to 30p by repeating 1 of every 4 frames (with a loss in resolution to boot). So for digital displays, 3:2 pulldown is pointless at best, destructive at worst.
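To make the cadence being discussed concrete, a toy sketch (field labels are purely illustrative): four film frames become ten fields, grouped into five interlaced frames, two of which mix fields from different film frames.

    # 2:3 pulldown: 4 progressive film frames -> 10 fields -> 5 interlaced frames
    film    = ["A", "B", "C", "D"]
    cadence = [2, 3, 2, 3]                    # fields contributed per film frame

    fields = []
    for frame, count in zip(film, cadence):
        fields += [frame] * count             # A A  B B B  C C  D D D

    # Pair consecutive (top, bottom) fields into interlaced video frames
    video = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
    print(video)
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
    # The ('B', 'C') and ('C', 'D') frames mix fields from different film frames,
    # which is why naive deinterlacing can't cleanly recover the original 24p.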


The film industry should stop using 24fps; it's a waste of people's time and energy. At least they should move to 25fps, which is what most of the world uses as a frame rate, if not 30fps.

For the stupid North American non-integer frame rates, just change the playback speed by a fraction and get on with life. Or drop 1000/1001 frames for live, people won't notice.
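For a sense of scale on the "change the playback speed by a fraction" option, a back-of-the-envelope sketch assuming a 2-hour runtime (the 25/24 PAL-style speedup is included for comparison):

    # Retiming instead of pulldown: how much the runtime actually changes
    pal_speedup   = 25 / 24        # 24 fps film played at 25 fps (~4.2% faster)
    ntsc_slowdown = 1000 / 1001    # 24.000 -> 23.976 (~0.1% slower)
    runtime = 120 * 60             # a 2-hour film, in seconds
    print(runtime / pal_speedup - runtime)    # -288 s: ~4.8 minutes shorter
    print(runtime / ntsc_slowdown - runtime)  # +7.2 s: barely noticeable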


Hear, hear. We finally have a standard starting with UHD that does not include interlacing. Finally. Hallelujah, the chorus of angels is singing.


> Why would you want to use 3:2 pulldown unless your display is interlaced

At this point, the only great reason is that it's an industry standard, but that alone is more than enough reason to still do it, evidenced by the fact that so many people still do it.


Who in the world wants to use a 2:3 pulldown pattern on a progressive monitor? The majority of my career has been in properly removing 2:3 pulldown; the other portion was back in the bad-ol'-days of putting it in.


> Who in the world wants to use a 2:3 pulldown pattern on a progressive monitor?

At least everyone tasked with editing 3:2 pulldown footage for 3:2 pulldown distribution, which is most of the video editors in North America the last time I checked.


Who wants 3:2 content for distribution? No streaming platform wants 3:2, and they all want the footage delivered as progressive scan. Some will say things like "native frame rate", but I find that a bit misleading. There are plenty of television shows shot on film at 24fps, telecined to 30000/1001 with 2:3 introduced, and then overlaid with graphic content rendered at 30p. The term "do least harm" gets used: take this content to 24000/1001 so the majority of the content (the part shot on film) is clean, while leaving the graphics potentially jumpy (unless you do a proper frame conversion with an o-flow type of retime that nobody really wants to pay for).

Edit: also, any editor worth their salt will take the telecined content back to progressive for editing. If they then need to deliver like it's 2005 to an interlaced format, they would export the final edit to 30000/1001 with a continuous 2:3 cadence. Only editors unfamiliar with proper techniques would edit the way you suggest.
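For anyone curious what "take the telecined content back to progressive" looks like mechanically, a toy sketch assuming a known, continuous 2:3 cadence (real inverse telecine also has to detect the cadence and cope with edits that break it):

    # Inverse telecine for a known, continuous 2:3 cadence: out of every 5
    # interlaced frames, the 3rd and 4th are field-mixed; the 4 original
    # progressive frames are rebuilt by re-pairing fields.
    def remove_pulldown(video):
        progressive = []
        for i in range(0, len(video), 5):     # assumes length is a multiple of 5
            f1, f2, f3, f4, f5 = video[i:i + 5]
            progressive.append(f1)              # clean A
            progressive.append(f2)              # clean B
            progressive.append((f4[0], f3[1]))  # C rebuilt from its two fields
            progressive.append(f5)              # clean D
        return progressive

    # Using the cadence from the earlier sketch:
    telecined = [("A", "A"), ("B", "B"), ("B", "C"), ("C", "D"), ("D", "D")]
    print(remove_pulldown(telecined))
    # [('A', 'A'), ('B', 'B'), ('C', 'C'), ('D', 'D')]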


Admittedly, I haven't worked as a video editor since 2011 and never edited telecined footage, but my understanding from friends is that little has changed. Specifically, I have heard them complaining about it. That streaming platforms specifically want progressive scan makes plenty of sense to me, of course, but it conflicts with what I've heard, for whatever reason.


I can’t say I fault them, as I’ve spoken with teachers who don’t know how to handle telecine content. I also know plenty of editors who have no idea of the purpose of a waveform/vectorscope. Again, neither did some of those instructors.

For people who never have to work with this kind of content, it makes sense. I’d equate it to modern programmers not knowing Assembly but still writing apps that perform adequately. There’s plenty of content shot on modern equipment and delivered to non-broadcast platforms where nobody will ever need to know the hows/whys of what the old timers did.



