Hacker News

>which seems like very basic functionality but ends up as a low priority for a lot of these

For security purposes, why would you want to save all of that data that isn't changing? You'll just end up fast-forwarding to the interesting bits anyway if you have to go to the footage.




Neither motion nor object detection is really that reliable in any system I've worked with. The norm in commercial systems has long been to record continuously and use motion detection, object detection, and other classifiers to annotate the recording. That gives you the opportunity to search for events, like thefts, that may not have been detected by classification. You also have access to footage well before and after the detected event, which is often absolutely critical to answering useful questions (e.g. how did someone get past the fence?). Common patterns like 10 seconds before/30 seconds after just aren't always sufficient.
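The annotate-and-search pattern can be sketched in a few lines. This is a minimal illustration, not any particular VMS's API; the class and method names here are made up. The key point is that because the continuous footage exists on disk, the pre/post padding around an event can be widened after the fact instead of being fixed at record time.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """A classifier hit annotating a continuously recorded stream."""
    label: str    # e.g. "person", "motion", "vehicle"
    start: float  # seconds into the continuous recording
    end: float


class AnnotatedRecording:
    """Continuous recording plus time-stamped detection annotations."""

    def __init__(self):
        self.events = []

    def annotate(self, label, start, end):
        self.events.append(Event(label, start, end))

    def clip_for(self, label, pre=30.0, post=60.0):
        """Time ranges to export: each matching event widened by pre/post
        padding. The padding is chosen at search time, not record time."""
        return [(max(0.0, e.start - pre), e.end + post)
                for e in self.events if e.label == label]
```

With detection-only recording, an unlucky `pre` chosen at record time is unrecoverable; here you can re-run `clip_for` with ten minutes of pre-roll if the first cut doesn't answer the question.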

Unfortunately, consumer devices are almost always cloud-based, where storage and especially upstream bandwidth are much more costly considerations, so recording only on detection has become the norm in the consumer world.

External triggers are also an important feature in commercial systems that a lot of open source projects miss, but Frigate isn't guilty of this one: it can receive triggers via MQTT, which is the same thing I do right now with Blue Iris. That's the big thing that has me optimistic about Frigate going forward. Because motion and object detection are so inconsistent, triggering VMS events from access control systems and intrusion sensors is often a much more reliable (and even easier to maintain) approach.


One of the niftiest ways I've seen this done was some software I used circa 2000 (I don't remember the name). It would create a variable-rate timelapse by saving a frame every time the image changed by more than x percent, calculated as the sum of differences of pixels from the previous frame, or thereabouts.

If someone was walking across the yard it would save every frame. The movement of the sun would move shadows enough to trigger a new image every few minutes. A bug flying past was small enough that it wouldn't trigger anything. The result was you could get a short video of everything interesting that happened through the day: shadows of trees sliding over the ground, every frame of the car pulling out of the driveway, shadows sliding over the ground some more, cat walks across the yard then lays down, shadows pan around more while the cat sits still, cat gets up and walks away, shadows pan around until the delivery guy comes...

It was an incredibly low-CPU way to see everything that happened without missing anything, and without having to fine-tune the motion detection very much. You just masked out any areas with constant motion, then adjusted a slider for how much change triggered the next frame, which let you control how fast the timelapse ran during the boring parts.
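The frame-differencing approach described above can be sketched roughly like this, assuming grayscale frames as NumPy arrays. The function names and the 2% default threshold are illustrative guesses, not the original software's values; masking out constantly moving areas is approximated here by zeroing those pixels before comparison.

```python
import numpy as np


def changed_fraction(prev, cur):
    """Fraction of the maximum possible pixel difference between two
    8-bit grayscale frames (0.0 = identical, 1.0 = fully inverted)."""
    diff = np.abs(cur.astype(np.int32) - prev.astype(np.int32))
    return diff.sum() / (255.0 * prev.size)


def variable_rate_timelapse(frames, threshold=0.02, mask=None):
    """Return the indices of frames to keep: a frame is saved whenever it
    differs from the last *kept* frame by more than `threshold`."""
    kept = []
    last = None
    for i, frame in enumerate(frames):
        f = frame if mask is None else frame * mask  # mask: 0 = ignore pixel
        if last is None or changed_fraction(last, f) > threshold:
            kept.append(i)
            last = f
    return kept
```

Comparing against the last kept frame (rather than the immediately previous one) is what makes slow drift, like moving shadows, eventually trigger a save: small per-frame changes accumulate until they cross the threshold.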

I've always wondered why the technique never became widespread.


That sounds incredibly useful, and it doesn't sound like it should be particularly hard to implement in modern systems. Maybe it just needs a term so people can search and advertise it.


Being able to buffer a video signal so that data can be saved from before a triggering event is not a new idea. One of Sony's cameras (the FS700, I think) could only record a few seconds at 240 fps and then stop. But it had an end-trigger mode where it would just keep a rolling buffer, so you could press the button after the event (think: after you see the lightning strike) and it would dump the contents of the buffer up to the point you hit stop. Same thing for sports: hit the button at the catch. Much easier than anticipating the start in time.

Essentially the same concept; you just need enough of a buffer to allow for the pre-roll, which wouldn't be much at the lower bitrates of the IP streams coming from the cameras.
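The pre-roll idea reduces to a fixed-size ring buffer that frames flow through continuously; on a trigger, you flush whatever is in it. A minimal sketch, assuming frames arrive one at a time (the class and method names are hypothetical):

```python
from collections import deque


class PreRollBuffer:
    """Rolling buffer holding the last `seconds` worth of frames, so that
    footage from *before* a trigger can still be saved."""

    def __init__(self, seconds, fps):
        # deque with maxlen silently discards the oldest frame on overflow
        self.frames = deque(maxlen=seconds * fps)

    def push(self, frame):
        self.frames.append(frame)

    def dump(self):
        """On a trigger, return everything buffered so far (the pre-roll)
        and start filling again from empty."""
        clip = list(self.frames)
        self.frames.clear()
        return clip
```

Sizing the buffer is cheap at IP-camera bitrates: a few megabits per second of compressed video means tens of seconds of pre-roll fit comfortably in RAM.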


Because sometimes these systems don't detect those changes. Continuous recording with object detection and tagging solves this problem.



