
Since the article doesn't have any information whatsoever, here's my prediction: by "bringing AI to Raspberry Pi" they mean being able to call their cloud APIs from there.

TensorFlow is not suitable for anything practical on the Pi. You can certainly get it to run there, but CPU vector math on resource-constrained devices is not a forte of a framework designed primarily for quickly iterating on models on a GPU workstation or a multi-GPU server. TF very much wants a beefy GPU.




You can already call whichever APIs you like from a Raspberry Pi. This announcement must be about doing something new on the Raspberry Pi (for instance, compiling TensorFlow to ARM if that isn't already supported). Perhaps the use-case is a fleet of Raspberry Pis?


Like a lot of things on the Pi, this might just be a PR stunt, about as exciting as a sanctioned way to call APIs.

Remember when Wolfram came to the Pi? It runs too slowly to be of use to anyone, but it ships with every copy of Raspbian.

>Perhaps the use-case is a fleet of Raspberry Pis?

This would be a waste of money. I know they're cheap, but in a space driven by GPU power, anything CPU-based isn't cost-effective at all.


This may be a dumb question, but…

The Pi does have a GPU. Nothing amazing, but better than the CPU. Given this is public knowledge, why is the GPU being ignored in comments like yours?


A Raspberry Pi is perfectly capable of running inference at around 10 FPS or more, easily.

You obviously won't do any training on the Pi, but low-power devices have been used for inference for years now. For example, here it is made to run on a phone: https://github.com/tensorflow/tensorflow/tree/master/tensorf...


"Inference" of what, exactly? And what do you mean by "easily"? None of this stuff is "easy" at the moment.


You put data into the neural net and adjust its weights -> that's training.

You give it inputs and it generates answers -> that's inference.
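To make the distinction concrete, here's a toy sketch (a hypothetical single-weight linear "neuron", nothing to do with TensorFlow itself): training adjusts the weight from data, inference is just a forward pass.

```python
# Toy model y = w * x; illustrative only, not a real network.

def train(pairs, w=0.0, lr=0.1, epochs=50):
    """Training: show the model data and adjust its weight."""
    for _ in range(epochs):
        for x, target in pairs:
            pred = w * x
            w -= lr * (pred - target) * x  # gradient step on squared error
    return w

def infer(w, x):
    """Inference: a forward pass only -- no weight updates."""
    return w * x

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # learn y = 2x
print(round(infer(w, 5.0), 2))  # close to 10.0
```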


Inference as in the opposite of training.

An example of inference would be feeding a neural net an image and having it classify the image.

The example I linked to above runs Inception, which classifies whatever you point the camera at into one of 1,000 categories.

It is very easy to set up (I have done it, and it only took a few minutes).


:-) I know what inference is. It's just that the speed of inference very much depends on the model you're doing the forward pass on, and the phrase "inference can be run at 10fps" is nonsensical without also specifying the model.
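A back-of-envelope calculation shows why the model matters. The FLOP counts and the Pi throughput below are rough assumptions I'm plugging in for illustration, not measurements:

```python
# Rough estimate: fps ~= device throughput / per-image work.
# All numbers below are assumptions, not benchmarks.

def frames_per_second(model_gflops_per_image, device_gflops):
    """Seconds per image = work / throughput; fps is its inverse."""
    return device_gflops / model_gflops_per_image

PI3_EFFECTIVE_GFLOPS = 2.0  # assumed sustained CPU throughput on a Pi 3

models = {
    "tiny mobile net": 0.05,  # assumed ~50 MFLOPs per image
    "Inception-v3": 5.7,      # commonly cited ~5.7 GFLOPs per image
}

for name, gflops in models.items():
    fps = frames_per_second(gflops, PI3_EFFECTIVE_GFLOPS)
    print(f"{name}: ~{fps:.1f} fps")
```

Under these assumptions a small model clears 10 fps with room to spare, while an Inception-class model is down in the seconds-per-frame range.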


That is false. A raspberry pi is capable of running inference on 90% of the models out there.


I regret that I have but one face to palm.


I understand by your name that you may be a neural net enthusiast, but I would question how much practical experience you have.

It's a perfectly valid statement to say that a pi will run inference on 90% of the models out there, and I have experience with the same. It would be similar to claiming that (if it could), a pi could run 90% of the games out there.

Once again, I am speaking from practical experience from having implemented tensorflow neural nets on low power devices. And I sincerely get the feeling that although you're enthusiastic, you have no clue what you're talking about.

Rather than making offhand comments like "facepalm", I would challenge you to either offer up some evidence to the contrary (you could start by trying to find a TensorFlow model that a Pi won't run), or spend your time doing something more practical than acting like a clueless rabid fanboy.


I believe what general_ai means is that running models is not the issue here; it's the FPS. Nvidia has special GPUs on the TX1 and TK1 for this. The ability to run a model is about having enough memory for it. The ability to apply a model to a real-time task is about having the compute, which for most tasks the Pi doesn't have. IIRC Pete Warden ported some low-level ops to the Pi GPU a few years ago, a difficult task.

This is why it is likely that what Google has in store is a form of inference-bound co-processor resembling their TPU. Many people on this thread know what they are talking about; you just need to pay attention, I believe. There's high demand for embedded deep learning at the moment, and I've already shipped several systems for a variety of tasks. At the moment none could run at the required speed on the Pi.


Unlikely that it's the TPU. Unless there's a multi billion dollar market for something, Google's official policy is to ignore it.


feelix - what kind of models do you typically run? I've spent a fair amount of time getting Neural Nets to run on Raspberry Pis and other platforms. In my experience it's possible to do inference with most models but often it's intolerably slow. For example the stock inception model that comes as a demo in the tensorflow code base takes about 10 seconds per image to do inference on my Pi 3. What domains are you typically working in? Do you have some tricks to make things run faster?
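For reference, a per-image figure like that is easy to collect with a small timing harness (`classify()` here is a hypothetical stand-in, not the actual Inception demo):

```python
import time

def classify(image):
    # placeholder for an expensive model forward pass (hypothetical)
    return sum(image) % 1000

def seconds_per_image(images, runs=3):
    """Average wall-clock time per classify() call over several runs."""
    start = time.perf_counter()
    for _ in range(runs):
        for img in images:
            classify(img)
    return (time.perf_counter() - start) / (runs * len(images))

images = [list(range(100)) for _ in range(10)]
print(f"{seconds_per_image(images):.6f} s/image")
```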


It is indeed slow (of course, pretty much everything is slow on something like a Pi). But it's still fast enough for some uses. If you can even get one inference every 5 seconds, that still has a lot of applications. And that was my point when I said I don't agree with the assumption that Google's Pi support will just mean calling their cloud APIs. Running locally could have a lot of uses too.

Besides which, they have been implementing things like 8-bit quantized graphs for processing on low-power devices. That should result in a large performance increase on these devices. I tried it on mobile and got decent FPS (I can't remember the exact figure) by using it. https://www.tensorflow.org/versions/r1.0/how_tos/quantizatio...
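The core idea behind 8-bit quantization can be sketched in a few lines (hand-rolled here in plain Python for illustration; TensorFlow's actual quantization ops are more sophisticated): store values as uint8 plus a scale and offset, and dequantize on the fly.

```python
# Linear (affine) quantization sketch: float -> uint8 -> float.
# Illustrative only; not TensorFlow's implementation.

def quantize(values):
    """Map floats onto 0..255 with a per-tensor scale and offset."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant input
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the 8-bit representation."""
    return [qi * scale + lo for qi in q]

weights = [-0.51, 0.03, 0.27, 1.12]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# restored matches the originals to within one quantization step
```

The win is that the math and the memory traffic happen on 8-bit integers instead of 32-bit floats, which is exactly what helps on a CPU-bound device like the Pi.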


I saw a demonstration (about two years ago) of a mobile, ARM-powered, offline voice-recognition platform that was faster than Google's and more accurate.

There was a side-by-side with Google's online voice recognition, and it outperformed it in speed and accuracy on a mobile GPU/CPU, complete with an actual learning system. That is truer to AI for the Raspberry Pi, not to mention addressing privacy concerns.

If this is a glorified API / Cloud adapter rather than a true AI, what is it really?

edit: found it https://www.youtube.com/watch?v=Fwzs8SvOI3Y


I like the idea of this - doing all the processing/ML client-side. I know Apple has started doing this recently (object/scene recognition & search in the Photos app), and I appreciate that this processing occurs on my device rather than on, say, Google's servers...

I have a strong feeling that quite a bit of R&D has been going toward Apple's upcoming chip, which will likely have a custom GPU architecture optimized for deep learning (from which Siri will also greatly benefit) and augmented reality - like the custom HPU in HoloLens. Apple's "Lens" wearable will probably pair via W1 with an iPhone, which will handle most of the processing. Perhaps they'll even have a custom 3D/depth sensor based on the PrimeSense tech they purchased....

We're on the cusp of consumer AR going mainstream, and it's exciting.


As far as I know, since about two years ago, translation and speech recognition on Google phones have been done with deep-learning systems built into the phone -- a network connection isn't needed. However, I couldn't immediately find a source to verify this. Can anyone confirm?


That's not the case. Today's translation and speech recognition systems are considerably larger than even the beefiest phones can sustain. There are some simple OCR models and word (not phrasal) translation systems that run on the device, but not speech recognition.


What are you basing this on? I have a nexus 5 and I just tested it. I turned on airplane mode, and used google translate with speech recognition. Can't quite tell if the translation is phrase based (looks good) and the speech recognition works well.


Bet you're right. I'm sure there's a big market available, though, for stuff like speech prompting for home automation and image analysis for robotics. It could also be a blended system with local object tracking and remote object identification.


Could be. But I bet this is just a pet project that a couple of engineers do in their spare time, so I wouldn't expect too much from it. I just don't see this as something Larry would care about one iota. And things that Larry doesn't care about tend to eventually die at Google. Case in point: Social.



