Measuring activity and usage is one thing, but as these systems get more advanced it would be exciting to explore detecting machine-part failures as well.
When I was a child, my father would relate how his father's time as a mechanic in WWI left him with the skill to listen to a passing car and say which parts of the engine were faulty or wearing out.
Crack that with ML and we'll have the good part of HAL from 2001 (although in the film the part turned out not to be faulty, if I recall correctly!).
I don't have specific examples, but I have read of standard manufacturer tools that do audio analysis of diesel engines to diagnose which part(s) are failing. AFAIK this has been a standard type of diagnostic for decades, although perhaps not as widespread or cheap as it is presently.
This technology is key to how we will detect fake or doctored videos. Publishers will no longer be able to produce a simple live video stream; they will need to publish data from general-purpose sensors to create an audit trail for validating the contents. If you want to fake a video, you will need to forge all of these other signals in a consistent way too.
This is cool! Building a context from many data points is impressive.
I could see this being built into phones for sure. If the phone can detect context using only a small amount of battery, it opens the phone up to a whole lot of applications.
Absolutely amazing work! If it can be made reliable enough, this may be a key stepping stone towards actually useful IoT.
That said, I can't wait for an open source/open hardware on-premises alternative, because there's no way in hell I'm running something like this through a vendor's cloud. They deserve credit for not sending raw sensor data to the cloud, but vendor lock-in is still vendor lock-in. Nth-order sensors should be products, not services.
Really cool, don't get me wrong. But imo, the biggest achievement here is explaining in non-technical terms how real-world inputs can be mapped to features in ML, and how the resulting classifications can be fed into state machines. Obviously, none of this is groundbreaking, but it's explained so clearly that a non-technical person can see the potential in the technology.
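To make the pipeline concrete, here's a minimal sketch of that features → classifier → state machine flow. All the feature names, event labels, and thresholds below are invented for illustration; the paper's actual features and models are far more sophisticated:

```python
# Sketch: raw audio window -> features -> classifier -> state machine.
# Everything here is a toy stand-in, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Frame:
    rms: float            # loudness of the audio window (hypothetical feature)
    spectral_peak: float  # dominant frequency in Hz (hypothetical feature)

def classify(frame: Frame) -> str:
    # Stand-in for a trained ML classifier: maps a feature vector
    # to a discrete event label using hand-written thresholds.
    if frame.rms < 0.1:
        return "silence"
    if frame.spectral_peak > 2000:
        return "kettle_whistle"
    return "water_running"

def step(state: str, event: str) -> str:
    # Tiny state machine driven by the classifier's event labels.
    transitions = {
        ("idle", "water_running"): "faucet_on",
        ("faucet_on", "silence"): "idle",
        ("idle", "kettle_whistle"): "kettle_boiling",
        ("kettle_boiling", "silence"): "idle",
    }
    return transitions.get((state, event), state)

state = "idle"
for frame in [Frame(0.5, 300.0), Frame(0.05, 0.0)]:
    state = step(state, classify(frame))
print(state)  # water runs, then stops -> back to "idle"
```

The point is only the shape of the system: noisy continuous input becomes a small set of discrete events, and the state machine gives those events durable meaning over time.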
This is amazing - and from 2017 no less.
I haven't had time to read the paper, but my only question would be how it handles multiple sounds in a room triggering multiple event states. For example, if I run the shower and leave the faucet on, then turn off the shower, how does the ML algorithm handle that?
It indicates that you think the work is a meaningful contribution towards (but not a full solution for) the problem named by the paper title. I'm not sure if there's a distinct origin (see https://english.stackexchange.com/questions/188988/what-is-t...), but it's fairly conventional these days.