Theoretically maybe, but 3:2 pulldown is used for playing 23.976 fps video at 29.97 Hz. Since this is HN, maybe someone with more knowledge about how video editors and modern TVs typically handle this can jump in here. Regardless, I think this would actually have more impact on the end user's viewing experience than on the job of video editing. The time between frames is tremendous from the standpoint of a video editor, and editing is usually (traditionally) done by feel: press the button when the cut should happen, mark it, then arrange the timeline accordingly. Lag aside, frame rate and which frame is actually on the screen at that time matter much less than whether the software knows which frame should be on the screen at that time. Hopefully that makes sense. For this reason, resolution and color accuracy will still take priority when it comes to display hardware.
I worked on display drivers and TCONs, but mostly for mobile/laptop rather than TVs/monitors. I'd be fairly shocked to see the defects you're describing coming directly from within a device, but problems introduced by multiple translations (PCIe > eDP > TB/DP/HDMI...), especially if they're poorly tested or badly negotiated, are certainly a possibility. I wouldn't trust most external connections or monitors for video editing unless they're specifically tested.
Note that 1/1000 is a glitch every ~40 seconds, so it's quite visible to an "eagle eye". I'll ask.
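To make the arithmetic concrete, a quick back-of-the-envelope check (assuming ~24 fps playback; purely illustrative):

    # One dropped or repeated frame per 1000 frames at ~24 fps
    fps = 24000 / 1001                    # 23.976... fps
    seconds_per_glitch = 1000 / fps
    print(f"{seconds_per_glitch:.1f} s between glitches")   # ~41.7 s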
The answer from a pro was genlock, so you match the 23.976.
"It doesn't matter if you drop a frame every once in a while, you're going to see it a dozen times... as long as it's not the same dropped frame!"
The worst part of incorrect refresh rates for me is panning footage, where you get those janky, blocky tears in the image.
>The time between frames is tremendous from the standpoint of a video editor,
This sounds like something I've heard from people with a head full of fun stuff talking about the space between the notes. There have been times where that absolutely makes sense, but I'm at a loss on your time between frames.
> This sounds like something I've heard from people with a head full of fun stuff talking about the space between the notes. There have been times where that absolutely makes sense, but I'm at a loss on your time between frames.
Haha, fair enough. If you ever feel like diving in yourself, I passionately recommend In the Blink of an Eye by Walter Murch.
It has nothing to do with 3:2 pulldown. It is all about refresh rates of the monitor. I've shot for years on global shutter (specifically Sony F55), so it absolutely 100% was not a rolling shutter issue either. The same footage can be viewed on another monitor and the tearing issue is not present.
Edit to match your edit: "The book suggests editors prioritize emotion over the pure technicalities of editing."
This totally depends on the content and level of production. I've edited content from properly staffed productions with script notes, circle takes, and all that stuff. It's always fun to stack up the various takes to see how the director felt about them on the day of the shoot versus how they read in edited context. It's also fun to see the actor's variations from take to take.
On shoots with barely enough crew so the camera op is also the boom op, it's basically all feel from the editor.
> The same footage can be viewed on another monitor and the tearing issue is not present.
This is what I was hoping someone would chime in about. I have never looked into whether it would be handled differently, but I would not trade away a higher-resolution display for it regardless. Maybe it could potentially influence where I cut in certain rare situations, but that sounds unlikely.
Basing edits on how footage looks on a monitor with an incompatible refresh rate just sounds like one of those problems that strikes me at my core, especially when someone acknowledges it but does it anyway. Does it matter in the end? Probably not, but it still goes against everything. It's one of those things about seeing people "get away" with things in life blissfully unaware while someone who is well versed and well studied can't catch a break.
I hope you get sleep at night. When I worked as a video editor years ago, I unfortunately had a boss who I needed to please and this kind of rabbit hole obsession would have added a significant barrier to doing so. More resolution, on the other hand, made me straightforwardly much more productive.
This doesn’t make any sense. Why would you want to use 3:2 pulldown unless your display is interlaced, which AFAIK will never be the case for any modern display?
And even if you did use it, it doesn’t do anything to help with the extra 1000/1001 factor, so what is the point?
Yes, it does. 3:2 pulldown produces interlaced 60 fields/s. On a digital display, it must be deinterlaced, and the only "correct" way to do that is to remove the pulldown, producing 24 fps. If you just deinterlace it as if it were originally 60i, you'll just end up with something similar to 24p converted to 30p by repeating 1 of every 4 frames (with a loss in resolution to boot). So for digital displays, 3:2 pulldown is pointless at best, destructive at worst.
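For anyone who hasn't dealt with telecine, here's a toy sketch of the cadence being described (field handling simplified, and it ignores the 1000/1001 speed change):

    # 4 progressive film frames (A B C D) -> 10 fields -> 5 interlaced frames
    film = ["A", "B", "C", "D"]
    cadence = [2, 3, 2, 3]   # fields emitted per film frame (one common phase)

    fields = [f for frame, n in zip(film, cadence) for f in [frame] * n]
    interlaced = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
    print(interlaced)
    # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
    # Pulldown removal detects this repeating pattern and discards the duplicate
    # fields, recovering the original 4 frames; naive deinterlacing instead keeps
    # the mixed ('B', 'C') and ('C', 'D') frames, which is the destructive case.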
The film industry should stop using 24fps, it's a waste of people's time and energy. At least they should move to 25fps which is what most of the world uses as a frame rate, if not 30fps.
For the stupid North American non-integer frame rates, just change the playback speed by a fraction and get on with life. Or, for live content, drop one frame in every 1001; people won't notice.
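The numbers involved are tiny either way (illustrative arithmetic only):

    true_24 = 24.0
    ntsc_24 = 24000 / 1001                                                     # 23.976... fps
    print(f"slowdown to the NTSC rate: {100 * (1 - ntsc_24 / true_24):.3f}%")  # ~0.1%
    print(f"PAL-style speedup 24 -> 25 fps: {100 * (25 / 24 - 1):.1f}%")       # ~4.2%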
> Why would you want to use 3:2 pulldown unless your display is interlaced
At this point, the only great reason is that it's an industry standard, but that alone is more than enough reason to still do it, evidenced by the fact that so many people still do it.
Who in the world wants to use a 2:3 pulldown pattern on a progressive monitor? The majority of my career has been in properly removing 2:3 pulldown; the other portion was back in the bad ol' days of putting it in.
> Who in the world wants to use a 2:3 pulldown pattern on a progressive monitor?
At least everyone tasked with editing 3:2 pulldown footage for 3:2 pulldown distribution, which is most of the video editors in North America the last time I checked.
Who wants 3:2 content for distribution? No streaming platform wants 3:2, and they all want the footage delivered as progressive scan. Some will say things like "native frame rate", but I find that a bit misleading. There are plenty of television shows shot on film at 24 fps, telecined to 30000/1001 with 2:3 introduced, and then overlaid with graphic content rendered at 30p. The term "do least harm" gets used: take this content to 24000/1001 so the majority of it (the part shot on film) is clean, while leaving the graphics potentially jumpy (unless you do a proper frame-rate conversion with an o-flow type of retiming, which nobody really wants to pay for).
Edit: also, any editor worth their salt will take the telecined content back to progressive for editing. If they then need to deliver like it's 2005 to an interlaced format, they would export the final edit to 30000/1001 with a continuous 2:3 cadence. Only editors unfamiliar with proper techniques would edit the way you suggest.
Admittedly, I haven't worked as a video editor since 2011 and never edited telecined footage, but my understanding from friends is that little has changed; specifically, I have heard them complaining about it. That streaming platforms specifically want progressive scan makes plenty of sense to me, of course, but it conflicts with what I've heard, for whatever reason.
I can't say I fault them, as I've spoken with teachers who don't know how to handle telecined content. I also know plenty of editors who have no idea what a waveform/vectorscope is for. Again, neither did some of those instructors.
For people who never have to work with this kind of content, it makes sense. I'd equate it to modern programmers not knowing assembly but still writing apps that perform adequately. There's plenty of content shot on modern equipment and delivered to non-broadcast platforms, where nobody will ever need to know how or why the old-timers did what they did.
Obviously this improves interoperability and the handling of nulls and strings. My naïve understanding is that Polars columns are immutable because that makes multiprocessing faster/easier. I'm assuming pandas will not change their API to make columns immutable, so they won't be targeting multiprocessing like Polars?
I think if anything pandas may get additional vectorized operations, but from what I understand Polars is almost completely Rust code under the hood, which makes multiprocessing much easier compared to dealing with all the extensions and marshaling of data back and forth between Python and C/C++ that pandas does.
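A minimal sketch of the difference, assuming recent pandas and Polars (column names are made up; older Polars versions spell group_by as groupby):

    import pandas as pd
    import polars as pl

    data = {"group": ["a", "b", "a", "b"], "value": [1.0, 2.0, 3.0, 4.0]}

    # pandas: eager execution, mostly single-threaded, with Python<->C marshaling per step.
    pdf = pd.DataFrame(data)
    print(pdf.groupby("group")["value"].mean())

    # Polars: the lazy query below is planned and executed in Rust, so the engine
    # can parallelize across cores without the Python GIL getting in the way.
    ldf = pl.DataFrame(data).lazy()
    print(ldf.group_by("group").agg(pl.col("value").mean()).collect())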
I wonder what would happen legally if someone wearing one of these items gets run over by a car in self driving mode. I presume the pedestrian is still in the clear?
Haha, the next layer will be a sweater for pedestrians that tricks computer vision systems into thinking you're a stop sign. Probably wouldn't be that hard even.
I'm also not sure what would happen if a car with an automatic speed limiter set to 40 is in dense traffic, suddenly spots a 20 sign on an adjacent road, drops a gear to slow down, and gets rear-ended.
Technically the car behind needs to keep a safe distance, but the speed-limited car has also functionally brake-checked it, without any brake lights coming on.
Bit of a non sequitur, but the easiest way to implement such a speed limiter is to make the car refuse to accelerate past the limited speed without manual intervention (and either warn the driver or reduce power gradually so it slows down gently), rather than jamming on the brakes.
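A rough sketch of that logic, with hypothetical names (a real limiter lives in the powertrain controller, not Python):

    def limited_throttle(driver_throttle: float, speed: float, limit: float) -> float:
        """Return the throttle actually applied (0.0-1.0)."""
        margin = 2.0                               # km/h of soft taper above the limit
        if speed <= limit:
            return driver_throttle                 # below the limit: obey the driver
        # Above the limit: never add power beyond a taper that fades to zero, so the
        # car drifts back under the limit instead of braking or downshifting hard.
        taper = max(0.0, 1.0 - (speed - limit) / margin)
        return driver_throttle * taper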
That's how my old car does it, but the new one will downshift if it wants to slow down a lot (either if you reduce the cruise speed a lot, or it imagines it saw a lower speed limit sign). Seems dangerous.
Almost nothing in “spatial audio” is actually mastered for spatial audio. The fake surround filter absolutely destroys stereo mixes, it’s really despicable how this is being pushed.
Well, it's mastered for 5.1. True, there aren't usually Atmos objects (sometimes there are), but there don't usually need to be. Are you taking issue with "5.1" being called spatial?
I couldn't disagree more about the "fake surround filter" though -- I love surround sound on my AirPods Pro, even when "spatialized" from stereo. It simply makes music and movies/TV so much clearer to listen to. Everything becomes more distinct and intelligible.
I understand how people rail against the "purity" of the original stereo mix's "intentions", but the reality is that when you listen to music on speakers, the amount of reflection and absorption in any room is already destroying that "purity". Spatial audio filters aren't "destroying" the audio any more than speakers in a room already do -- the difference is that they're increasing clarity rather than muddying it all up.
I can't ever imagine going back to listening to flat stereo again, where the sound on headphones feels stuck inside of my skull instead of coming from outside.
Headphone fatigue is a real thing, coming from the fact that our brains aren't meant to process audio without all of the associated spatial cues. (Sound isn't supposed to feel like it's emanating from inside our skulls.) Modern surround filters do an awfully good job at restoring those cues. No more headphone fatigue.
Not the parent, but I find myself agreeing with both of you, in a way. Whether it's 5.1 or "proper Atmos", those mixes sound fine - or maybe even good, depending on your preferences - on headphones. I also have a fake surround upmix on an audio interface that I occasionally use.
But I almost always detest how these mixes sound on my actual 5.1.2 setup. The surround channels mostly consist of a bit of reverb that adds nothing to the experience, in my opinion. In a car where there are physically separate channels, I'm not optimistic about the result.
Properly mixed binaural sound sounds so good even on stereo channels. Look up "Virtual Barber Shop" on YouTube; it always gives me goosebumps.
Also, AFAIK tracks are never mixed for stereo/mono; they start as multichannel. If Ogg ever gains traction (I think it has multichannel support), it could bring some spatial audio to the masses, with smaller downloads and quality streaming.
Also, it's very good for cinematic audio in videos.
I must be the only person who thinks Apple's DSP stuff falls into a massive uncanny valley (it makes the music feel lifeless, flat, and boring). Never mind that I don't understand the point of hearing music from some other point in a digitally manufactured "room"; it's certainly not what the artist intended.
I remember finding this article several years ago when the watermark was really quite obvious and offensive. It was especially obvious in piano music, since the tone of a piano doesn't naturally waver. Since then, they've either dialed it back or removed it, because I don't experience this anymore.
Yes: I was unwilling to subscribe to Google Play Music (as it then was) because something like half of their classical music collection (including all of Deutsche Grammophon) was unlistenable due to watermarking. It was so bad that I actually reported it internally as a bug against the GPM player before I learned that it was a watermark, thanks to Matt's article. Much griping about it persuaded someone on the GPM team to get fresh, supposedly-fixed recordings from UMG, but to no avail: the new audio seemed to be as bad as the old.
That was a few years ago, and I lost track of what happened after that but evidently UMG actually fixed the problem at some point because YT Music seems fine now, and I no longer notice the problem on other streaming services that were also formerly affected by it.
Hey! I actually handled the data coordination for the BraTS data sets. We used a combination of the best algorithms from prior years of the BraTS competition to pre-segment the data sets, and then we had experts (fully-trained neuroradiologists) make manual corrections, which were then further reviewed by my team before finalization.
The three tissue types of interest are fairly easy to identify in most cases. Edema is bright on the FLAIR sequence, enhancing tumor is bright on T1 post-contrast and dark on pre-contrast, and necrosis is relatively dark on T1 pre- and post-contrast while also being surrounded by enhancing tumor. These rules hold true in most cases, so it’s really just a matter of having the algorithm find these simple patterns. The challenge in doing this manually is the amount of time it takes to create a really high quality 3D segmentation. It’s painful and very tough to do with just a mouse and 3 orthogonal planes to work with.
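As a toy illustration of those rules (not the actual BraTS pipeline; the thresholds and names are made up, and real cases need far more care than this):

    import numpy as np
    from scipy.ndimage import binary_dilation

    def rough_labels(flair, t1_pre, t1_post, bright=2.0, dark=-0.5):
        """flair/t1_pre/t1_post: z-score-normalized 3D volumes of the same shape."""
        enhancing = (t1_post > bright) & (t1_pre < bright)   # bright post-contrast only
        edema = (flair > bright) & ~enhancing                # bright on FLAIR
        dark_core = (t1_pre < dark) & (t1_post < dark)       # dark pre- and post-contrast
        # Necrosis: dark voxels near enhancing tumor (crude adjacency via dilation).
        necrosis = dark_core & binary_dilation(enhancing, iterations=3)
        return edema, enhancing, necrosis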
Oh wow, the joys of the HN community. Do you know the neuroradiologists' take on this type of modelling? Are the models in a challenge like this already usable for enhanced decision-making by the experts?
With the segmentations these models create, you can create reports that quantitatively describe the changes in different tumor tissues. That info can be useful for guiding chemotherapy and radiotherapy decisions.
Currently, the accepted practice is to report these changes qualitatively without using segmentations (the way it’s been done for years). While the segmentations created by the models are probably good enough to use in practice today, the logistical challenges of integrating the model with the clinical workflow impede its actual use.
Sure, you could manually export your brain MR to run the model, but that’s a pain to do when you’re reading ~25 brain MR cases/day.
Thanks Satyam! That's glass-half-full if I read it correctly: working models that need to be integrated into a workflow. What kind of firms are we talking about that could do that?
(I know nothing of this tbh, except I once had a demo of a radiologist back when the gamma knife was introduced, have a colleague who became a radiotherapist and a friend who works in ML for Philips medical.)
It’s definitely possible to do, and many companies are able to do it (eg RapidAI). I’m also not an expert in this specific problem, but there are HIPAA/privacy/security concerns that need to be addressed with the radiology department and IT team. Once those have been handled, there is some kind of API available to integrate the model.
> All the imaging datasets have been segmented manually, by one to four raters, following the same annotation protocol, and their annotations were approved by experienced board-certified neuro-radiologists. Annotations comprise the GD-enhancing tumor (ET — label 4), the peritumoral edematous/invaded tissue (ED — label 2), and the necrotic tumor core (NCR — label 1), as described both in the BraTS 2012-2013 TMI paper [1] and in the latest BraTS summarizing paper [2]. The ground truth data were created after their pre-processing, i.e., co-registered to the same anatomical template, interpolated to the same resolution (1 mm^3) and skull-stripped.
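For anyone curious what those quantitative reports could compute, a hedged sketch using the label values quoted above (NCR=1, ED=2, ET=4); nibabel and the filename are my assumptions, not necessarily the challenge tooling:

    import nibabel as nib
    import numpy as np

    seg = nib.load("BraTS_case_seg.nii.gz")           # hypothetical filename
    labels = np.asarray(seg.dataobj)

    voxel_mm3 = np.prod(seg.header.get_zooms()[:3])   # ~1.0 after 1 mm isotropic resampling
    for name, value in [("necrotic core (NCR)", 1), ("edema (ED)", 2), ("enhancing tumor (ET)", 4)]:
        volume_ml = (labels == value).sum() * voxel_mm3 / 1000.0
        print(f"{name}: {volume_ml:.1f} mL")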