This article inspired me to have a play around with Darknet and Darkflow - turns out they're pretty easy to get going on an OS X laptop with Python 3 (installed via Homebrew).
Off-topic, but I'm curious what you're using for your OS X laptop. Your wording suggests it isn't Apple hardware, and I'm interested in what's working well for a non-Apple OS X machine and how happy you are with it.
Well, that was a remarkably fast installation & test for something like this. Very fun trying it out directly with my webcam, though it's rather slow on my laptop CPU; I'll have to get it installed on a better machine.
For OpenCV classification tutorials, this is another great resource for playing around with DIY projects. FYI: avoid his email list unless you enjoy 3-4 sales emails every week.
This is brilliant: given that even DeepLens is around $250, this poor man's setup is a very good DIY kit for anyone who wants to get started in this new age of image processing.
Do you have hummingbirds in your area? You could set up a high-FPS camera pointed at a hummingbird feeder. Hummingbirds aren't afraid to approach a feeder placed just outside the window of a house.
This is an awesome family Xmas project. I don't have an old PC around to run YOLO / Tiny YOLO constantly, though; can anyone recommend a cheap, suitable server provider for this? AWS EC2?
Don't you have a computer with a decent GPU? I have trained YOLOv2 on a GTX 1050. A night of training (and starting from pre-trained lower layers) yields good results depending on your application.
No, it will identify anything it has been trained for. I haven't read the article, but these things are usually trained on common datasets with 10, 100, or 1000 classes of common objects. The 1000-class dataset covers a giant portion of the distribution of objects you'd see, so it's sort of close to "anything."
(EDIT: I originally wrongly stated that the models run on the Raspberry Pi directly.)
The Google Vision Kit runs models on a custom neural processing chip connected to the Raspberry Pi Zero. With the DIY setup from the blog post, the neural network runs on a "large PC" (potentially with a GPU). Depending on the hardware you have at your disposal, you can run more complex (and therefore more powerful) neural networks. At the same time, you'll need the wifi setup and streaming to work. Completely embedded devices are easier to just put out in the wild.
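The split described above (a small camera device streaming to a bigger machine that does the inference) doesn't need anything fancy on the wire. Here's a hypothetical sketch of length-prefixed frame streaming over TCP; all names are made up, and a real setup would more likely use an existing protocol like MJPEG over HTTP:

```python
import socket
import struct

def send_frame(sock, frame_bytes):
    """Send one frame: a 4-byte big-endian length header, then the payload."""
    sock.sendall(struct.pack(">I", len(frame_bytes)) + frame_bytes)

def recv_frame(sock):
    """Receive one length-prefixed frame; returns None if the peer closed."""
    header = _recv_exact(sock, 4)
    if header is None:
        return None
    (length,) = struct.unpack(">I", header)
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    """Read exactly n bytes (TCP recv may return partial chunks)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

if __name__ == "__main__":
    # socketpair stands in for a real Pi-to-PC network link here.
    a, b = socket.socketpair()
    send_frame(a, b"\xff\xd8 fake jpeg bytes \xff\xd9")
    frame = recv_frame(b)   # the inference side would decode and detect here
    print(len(frame))
    a.close()
    b.close()
```

The length prefix matters because TCP is a byte stream with no message boundaries; without framing, one `recv` call can return half a JPEG or two frames glued together.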
In theory, you should be able to use the models from the Vision Kit if you follow their instructions and just put them on a Raspberry Pi directly, and get an additional Movidius compute stick: https://developer.movidius.com/
Inference doesn't run on the RPi Zero. It runs on the VisionBonnet board which has a Movidius VPU tensor co-processor on it. RPi is just for handling the LEDs, buzzers and buttons. For training a model with custom datasets, you are correct - something bigger's needed.
Here's how I got Darkflow working: https://gist.github.com/simonw/0f93bec220be9cf8250533b603bf6...
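For anyone scripting against it: Darkflow's `return_predict` hands back a plain list of dicts (label, confidence, box corners), so post-processing is just Python. A minimal, hypothetical filtering sketch; the sample values are made up, and you should check your installed version for the exact output shape:

```python
# Assumes the shape Darkflow's return_predict() emits: a list of dicts like
# {"label": ..., "confidence": ..., "topleft": {"x": ..., "y": ...},
#  "bottomright": {"x": ..., "y": ...}} -- verify against your install.

def filter_detections(predictions, min_confidence=0.5, wanted_labels=None):
    """Keep detections above a confidence threshold, optionally by label."""
    kept = []
    for p in predictions:
        if p["confidence"] < min_confidence:
            continue
        if wanted_labels is not None and p["label"] not in wanted_labels:
            continue
        kept.append(p)
    return kept

# Example input with made-up values:
sample = [
    {"label": "dog", "confidence": 0.86,
     "topleft": {"x": 50, "y": 40}, "bottomright": {"x": 300, "y": 280}},
    {"label": "bicycle", "confidence": 0.31,
     "topleft": {"x": 80, "y": 100}, "bottomright": {"x": 400, "y": 350}},
]
print(filter_detections(sample, min_confidence=0.5))  # only the dog survives
```

Handy if you only care about one class (say, birds at a feeder) and want to ignore everything else the model spots.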
For Darknet, I just ran "make" as documented here: https://pjreddie.com/darknet/install/ and then followed the instructions on https://pjreddie.com/darknet/yolo/ and https://pjreddie.com/darknet/nightmare/ to try it out.