
Yes, but you have to precisely stagger the start of exposures of each camera, and that's hard to do on consumer hardware.



Um, why would you have to precisely stagger?

I suspect that you have enough information to actually align the videos after the fact.

10 videos at 250 FPS would probably have their frame offsets distributed well enough on their own.
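
A rough way to sanity-check that (my own sketch, assuming the start offsets of unsynchronized cameras are roughly uniform over the 4 ms frame period, and using numpy):

    import numpy as np

    # At 250 FPS the frame period is 4 ms; with 10 unsynchronized cameras the
    # question is how evenly their (assumed uniform random) start offsets cover it.
    rng = np.random.default_rng(0)
    frame_period_ms = 1000.0 / 250.0
    n_cameras = 10
    trials = 10_000

    worst_gaps = []
    for _ in range(trials):
        offsets = np.sort(rng.uniform(0.0, frame_period_ms, n_cameras))
        # Gaps between consecutive sample times, wrapping around the frame period.
        gaps = np.diff(np.concatenate([offsets, [offsets[0] + frame_period_ms]]))
        worst_gaps.append(gaps.max())

    print(f"ideal stagger gap  : {frame_period_ms / n_cameras:.3f} ms")
    print(f"mean worst-case gap: {np.mean(worst_gaps):.3f} ms")

The mean worst-case gap comes out to roughly three times the ideal evenly-staggered spacing, so the coverage is uneven but far from degenerate.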


Wouldn't it be hard to interleave the frames of these videos, given different start times and angles (ignoring camera movement)? It would be easy if the videos had synchronized timestamps, but that might not always be the case.


Any in-frame motion probably lets you align the videos to the nearest frame after the fact. This is existing technology, and it gives you timestamp-to-frame alignment.
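
Something like this is what I have in mind (a minimal sketch of my own, assuming numpy and grayscale frame arrays; the function names are made up): reduce each video to a 1-D motion-energy signal, then cross-correlate to find the frame lag.

    import numpy as np

    def motion_signal(frames: np.ndarray) -> np.ndarray:
        """frames: (T, H, W) grayscale array -> (T-1,) mean absolute frame difference."""
        return np.abs(np.diff(frames.astype(np.float64), axis=0)).mean(axis=(1, 2))

    def best_frame_lag(sig_a: np.ndarray, sig_b: np.ndarray, max_lag: int) -> int:
        """Frame lag of sig_b relative to sig_a that maximizes normalized correlation."""
        a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
        b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
        best_lag, best_score = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            # If b starts `lag` frames after a, then b[t] lines up with a[t + lag].
            x, y = (a[lag:], b) if lag >= 0 else (a, b[-lag:])
            n = min(len(x), len(y))
            if n == 0:
                continue
            score = float(np.dot(x[:n], y[:n])) / n
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag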

If you are reconstructing sound, you can now fuzz the time alignments to find the offset that maximizes the correlated signal over the longest span (misaligned signals decorrelate to random noise quickly). That lets you reconstruct the time alignments pairwise.
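
As a sketch of what I mean by fuzzing (again my own illustration, with a made-up function name, assuming two per-camera signals at a known 250 FPS with an unknown sub-frame offset): scan candidate offsets, resample one signal onto the other's timeline, and keep the offset with the highest correlation, since the misaligned ones decorrelate toward noise.

    import numpy as np

    def best_subframe_offset(sig_a, sig_b, fps=250.0, steps=401):
        """Scan offsets within +/- one frame period; return the best offset in seconds."""
        period = 1.0 / fps
        t_a = np.arange(len(sig_a)) * period
        t_b = np.arange(len(sig_b)) * period
        a = (np.asarray(sig_a) - np.mean(sig_a)) / (np.std(sig_a) + 1e-12)
        b = (np.asarray(sig_b) - np.mean(sig_b)) / (np.std(sig_b) + 1e-12)
        best_offset, best_score = 0.0, -np.inf
        for offset in np.linspace(-period, period, steps):
            # Resample b, shifted by `offset`, onto a's timestamps.
            b_shifted = np.interp(t_a, t_b + offset, b, left=0.0, right=0.0)
            score = float(np.dot(a, b_shifted)) / len(a)
            if score > best_score:
                best_offset, best_score = offset, score
        return best_offset

Doing that for each pair gives you the pairwise alignments mentioned above.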

At that point, you put them all together and run your detailed analysis.

Now, I didn't say this was EASY. :) Or cheap. Or real-time.

Just that it is possible.



