Very cool, but that video works by using ordinary supervised learning (to detect hand gestures), while mine uses pose estimation to detect the direction your face is pointing.
How hard would it be to use these tools to detect someone touching their face and play a chime to remind them not to? It could run in the background all day for laptop/webcam users. #covid etc.
Edit: I didn't train the pose estimation model myself; it's taken from a tensorflow.js example: https://github.com/tensorflow/tfjs-models/tree/master/faceme...
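
A rough, untested sketch of how the face-touch idea might look, combining the facemesh model above with the handpose model from the same tfjs-models repo (the distance threshold, the chime file, and the "check every frame" loop are all placeholder assumptions, not anything from my project):

    // Sketch: chime when any detected hand landmark gets close to the face mesh.
    // Assumes a <video> element already streaming the webcam.
    import '@tensorflow/tfjs';
    import * as facemesh from '@tensorflow-models/facemesh';
    import * as handpose from '@tensorflow-models/handpose';

    const TOUCH_THRESHOLD_PX = 40;         // how close counts as "touching" (needs tuning)
    const chime = new Audio('chime.mp3');  // any short audio clip

    async function main() {
      const video = document.querySelector('video');
      const faceModel = await facemesh.load();
      const handModel = await handpose.load();

      async function checkFrame() {
        const faces = await faceModel.estimateFaces(video);
        const hands = await handModel.estimateHands(video);

        if (faces.length && hands.length) {
          const facePoints = faces[0].scaledMesh;   // 468 [x, y, z] face landmarks
          const handPoints = hands[0].landmarks;    // 21 [x, y, z] hand landmarks

          // If any hand point is within the threshold of any face point, play the chime.
          const touching = handPoints.some(([hx, hy]) =>
            facePoints.some(([fx, fy]) =>
              Math.hypot(hx - fx, hy - fy) < TOUCH_THRESHOLD_PX));

          if (touching) chime.play();
        }
        requestAnimationFrame(checkFrame);
      }
      checkFrame();
    }

    main();

In practice you'd probably want to debounce the chime and throttle inference rather than run both models every frame, but the 2D landmark-distance check is the core of it.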