Google planning to bring AI and ML tools to Raspberry Pi (bbc.co.uk)
249 points by Lio on Jan 26, 2017 | 97 comments



I built a robot with TensorFlow running on the Raspberry Pi to do autonomous driving and computer vision: http://www.jtoy.net/portfolio/

I'm working on the next version to make it more useful, but the technology isn't all there yet. I want the robot to be able to understand speech and talk back to users, and I also want it to be able to play games with people. I think the platform has a lot of potential. I want Google to release a low-power tensor processing unit made for the Pi to make this more useful; that would open up a lot of doors for robot and AI enthusiasts. I'm looking to turn this into a platform, so contact me if this is of interest to you.


I'm extremely interested in using autonomous driving and computer vision for a lawnmower. I hate mowing my lawn. I hate mowing my lawn enough to program a robot to mow it for me.


Since you control the machine and your yard, you can use cameras mounted to your house for tracking. You don't want smart, you want dumb and predictable. But good luck making something with a rapidly spinning blade safe while unsupervised.

It's probably cheaper to hire a landscape maintenance company, or a neighborhood teenager.


> good luck making something with a rapidly spinning blade safe while unsupervised

Then don't! If it's autonomous it can be out there all the time rather than once a week/whatever, no spinning blade needed. Plan B? Buy a goat.


> Buy a goat

Nah. What we really want and need is a robot that, while it mows the lawn, turns the clippings into: its entire energy source; its entire pool of building blocks for daily (nightly?) self-repairs and for reproduction; and maybe fluffy warm wool and delicious milk/cheese/butter as side products. It fertilizes the land with the occasional .. discharge resulting from perpetual energy production-consumption and internal cleanups/repairs. Finally, as it still wears down with time, as all physical assemblages are wont to, it leaves behind highly durable inputs for sturdy clothing and stylish home decoration (horns, hides, leather, etc.)!

All from a friggin lawn.

Roboruminant 2.0 baby. Need to reinvent mammalian evolution before we can really reinvent the wheel!


Plan C, turn the yard into a garden.


If only goats would refrain from eating everything in a yard. Flower gardens? Tree saplings? Bushes? They'll eat it all.


Which is awesome, because the lawn is never long on a Friday night: it can be continually mowed.


> You don't want smart, you want dumb and predictable

https://www.youtube.com/watch?v=tMOASdSu9YU


I'd be worried about the rope breaking! Sitting there watching it mow is better than actually mowing myself, but I mostly want the time back. My yard is way too big for my self-propelled electric lawn mower. I don't want to spend 1.5 hours every Saturday anymore.


Get an electric one and use the cord instead of the rope.


Lawn mowers are already safe enough to operate in nothing but your underwear while drinking a beer. I know this because I've seen people do it.

I'd call that safe enough to operate unsupervised in an enclosed yard.


> Lawn mowers are already safe enough to operate in nothing but your underwear

And if in doubt, just wear under-armour instead..


If you hate it so much, I would recommend instead buying one of the autonomous lawn mowers that have been on the market for years now. Set it and forget it (maybe take it in for the winter).

I've heard good things about the Husqvarna 450 especially.

Spoiler alert: there is no computer vision involved, and it still works perfectly even for large lawns; our local university even uses them in campus parks.

http://www.husqvarna.com/us/products/robotic-lawn-mowers/



I would hate to live next door to you and your intelligent, self-aware lawnmower.


How come? Genuine question, I've been living in an apartment for a while.


You'd be super jealous! I'd share, though.


Now I want to watch the movie Lawnmower Man, lol


Oh yeahhhhh..... that chick


To clarify: it's that sound/guy saying "Oh yeah..." in slow-motion, sexy hahaha, greased up lawn mower guy and Pierce Brosnan, no sun laser for this guy.


I realize the point of the article and many of the comments is using homebrew ML/computer vision, but if you want something quick and easy, commercially available robot lawn mowers already exist. They tend to use boundary wires rather than any sort of vision system (some have GPS, but I'm not sure if that's supplemental or can be used alone), though I think some have obstacle sensors for safety.

http://www.husqvarna.com/us/products/robotic-lawn-mowers/

http://www.deere.com/en_INT/products/equipment/robotic_mower...?

http://www.honda.co.uk/lawn-and-garden/products/miimo-2015/o...


There seem to be a lot of these on the market already https://www.amazon.com/Roomba-Lawn-Mower/s?ie=UTF8&page=1&rh...


Here is a library that was just submitted: https://news.ycombinator.com/item?id=13498219

Donkey: a self driving library and control platform for small scale DIY vehicles


Do you have more details on your T-bot project available somewhere? Sounds pretty interesting.


I just completed the first version last month for an art show. I'm preparing docs to share for it now.


That would be great! I bought a kit of accessories for the Arduino/RPi for my son. He's really getting into building simple robots from Lego Technic. It would be great to add some more advanced projects to his list.


Just today I was fantasizing about a self-driving autonomous office coffee machine: basically a Roomba that brews coffee. Perfect for hardcore introverts.


Add a speaker and media player, and you could have DJ Barista!


I always said the hallmark of a great tool is something that can do 50 things in a shitty way. "ALEXA. ALEXA. SIRI! OKAY GOOGLE? Come here self-driving roomba. Play NPR podcasts and brew me a tepid cup of brown Keurig water while engaging me in a rousing game of Go"


"Ok. Now deleting your appointments for this week."


You forgot sex/porn. It has to satisfy all our needs.


Just what I don't need -- being forced to interact with a DJ to get my coffee.

"What do you want?"

"Coffee."

"I CAN'T HEAR YOU! I SAID WHAT DO YOU WANT?!"

"Give me coffee or I will end you."


For the true introvert, I've heard that it's possible to brew your own coffee at home!


I buy instant coffee at ALDI. We have a Keurig, but the pods cost a small fortune, and I drink far too much coffee.


Hey Jason, I'm working on a platform where people can control other people's robots. We're leveraging crowd intelligence as opposed to artificial intelligence, but we'd like to incorporate artificial intelligence soon. Perhaps you'd be interested in checking it out. We're at http://runmyrobot.com.


This site reminds me of https://www.twitch.tv/twitchplayspokemon

This is not what I expected when I read your comment. I was expecting a platform where you upload the parameters of a hardware system, the platform digitizes that system into a virtual sandbox, and people can then write arbitrary programs for that system and showcase them in the sandbox, where the uploader can buy, comment on, and rate those programs. Something along the lines of https://openai.com/blog/


I'm wondering why we don't see robotic chips and pretrained models that solve basic tasks such as vision, audio, NLP and behavior. We already have good models for all of them, but no simple way to mix and match. It would be great if this evolved into an open-source bazaar of neural net models and hardware parts.
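
For the model half, something close to mix-and-match already exists in library form: Keras can pull down a pretrained ImageNet classifier in a few lines. A minimal sketch (the image filename is a placeholder, and this assumes a Keras install that can download the ImageNet weights):

    import numpy as np
    from keras.applications.inception_v3 import (
        InceptionV3, preprocess_input, decode_predictions)
    from keras.preprocessing import image

    # Downloads pretrained ImageNet weights on first use
    model = InceptionV3(weights="imagenet")

    # "cat.jpg" is a placeholder; InceptionV3 expects 299x299 inputs
    img = image.load_img("cat.jpg", target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    preds = model.predict(x)
    print(decode_predictions(preds, top=3)[0])  # [(class_id, name, score), ...]

The hardware half of the bazaar is the part that's still missing.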


This is why I've invested in Nvidia and a bit in AMD. My assumption is they will get into this business. Their first product would most likely be some vision processor for self driving cars.


"Google has asked makers to complete a survey about what smart tools would be "most helpful".

And it suggests tools to aid face and emotion recognition, speech-to-text translation, natural language processing and sentiment analysis.

Google has previously developed a range of tools for machine learning, internet of things devices, wearables, robotics and home automation."

That's the meat of it. Google put out a survey - speculation ensues.


> Google put out a survey - speculation ensues.

Yes.

This is the actual announcement with the link to the survey at the bottom:

https://www.raspberrypi.org/blog/google-tools-raspberry-pi/

I play with RPis and I make NNs with TensorFlow, so I took the survey. Pretty standard "tell us what you think" type of thing. If it leads to Google maintaining an official binary release of TF for the RPi, that would be great.


The article conveys almost no information. Why are speech synthesis, NLP, and so on called AI? What's special about the Raspberry Pi; isn't it a regular general-purpose computer, just small? Will this run locally or on Google's servers? Why does the BBC publish articles of such poor quality?


The Pi has exposed GPIO pins, so you can use it to control just about anything that uses electricity.
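
For example, blinking an LED from Python takes only a few lines with the RPi.GPIO library (the pin number and wiring here are assumptions; adjust for your setup):

    import time
    import RPi.GPIO as GPIO

    LED_PIN = 18  # hypothetical: a BCM pin wired to an LED + resistor to ground

    GPIO.setmode(GPIO.BCM)         # address pins by Broadcom SoC numbering
    GPIO.setup(LED_PIN, GPIO.OUT)  # configure the pin as a digital output

    try:
        while True:
            GPIO.output(LED_PIN, GPIO.HIGH)  # drive the pin to 3.3V
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)   # pull it back to 0V
            time.sleep(0.5)
    finally:
        GPIO.cleanup()                       # release the pins on exit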

There are two basic groups of users. People looking for a cheap small computer to use for education, gaming, etc. And those who use the Pi as a controller for other projects.

I'm using one as the controller for an open-source coffee maker called Mugsy, as well as for another startup in the music education space.


Do you have a link to Mugsy? Sounds pretty cool.


I'm currently cleaning up the code, models, and parts list for public release. Mailing list signup is at heymugsy.com. My email is in my profile if you have specific questions.


While speech synthesis is okay, NLP is a largely unsolved problem. We need to improve reasoning, attention, and memory, and also ground concepts in perceptual data.


Wolfram would probably argue that they've already brought AI to the Raspberry Pi.

https://www.wolfram.com/raspberry-pi/

http://www.wolfram.com/language/11/neural-networks/


Kind of a misleading title; nothing has been released per se yet.

They've just announced that they might release something ML/AI-related in 2017 for the Pi.


Wondering if they just mean providing an API that RPis can use, or if they are somehow going to get meaningful "AI" running on an RPi?


Perhaps this is just my overall distrust speaking, but as an RPi hobbyist who plans to use solid projects such as OpenCV in the near future, I have an overwhelming fear that Google will release "better" "open-source" projects that push out the maintainers of the projects we know, all with the intention of adding their Analytics(TM) in future updates, with the stipulation that not updating your projects introduces you to Dependency Hell if they interact with other projects you need.


...and then drop them silently a few years later? You are such a cynic!


Glad I am not the only one who thought the article lacked real info.


Tired of seeing this article when the word AI doesn't appear once in it, and it's just an announcement of untold mysteries sometime in the future...


Off topic, but I find it strange that Google is not using its own survey tool for the survey link in the article (they are using Qualtrics).


Here's another thing in that same vein: https://www.oreilly.com/learning/how-to-build-a-robot-that-s...

I've been thinking about building a few of these as a class/group project at work.


Hi - I wrote that article. Let me know if you build some; I would be excited!


Just saw this - I'm working on getting some internal funding for this idea. We have some robotics and deep learning SIGs in our company so it might just work out.


Sounds cool! I wonder if this will be with TensorFlow. If so, does the RPi's VideoCore have any capability to accelerate such things? Just out of interest.


I know from a few years ago that there was some work on GPGPU-type computation on the Raspberry Pi's video hardware. This is a project that I remember reading about, but didn't look into deeply: https://www.raspberrypi.org/blog/accelerating-fourier-transf...

There's another one that goes into more detail on the "how" of running other algorithms: https://rpiplayground.wordpress.com/tag/raspberry-pi-gpu/

I'm not sure if there are limitations that would keep it from being interesting for TF or not; I don't know enough about it.


Cheers, the FFT example is especially interesting.


It's probably a promo of their new TensorFlow compiler.


Anything to speed up TensorFlow and/or OpenCV, either on the VideoCore or on an inexpensive chip, would be great.


I don't want Android on my RaspberryPi...


A lot of people do, though. I don't understand why; there are a ton of other Android-capable devices already out there to choose from at similar price and performance levels.

There are a few "Android on the Pi" projects around. They all need work before they're remotely usable. Frankly, I wish there were at least one high-quality project to point people to.



> there are a ton of other Android-capable devices

are you talking about the tons of cheap Android phones?


No, I'm talking about other Pi-like single-board computers and Chromecast-like "Mini-PCs" and "HDMI-sticks".


Are we supposed to care?


It would be a bit ridiculous to install a restrictive and privacy-invasive OS on the Raspberry Pi when you can install a full-featured OS.


Thank you, commentators, for saving me from wasting my time reading the article. This is why I love Hacker News.


Since the article doesn't have any information whatsoever, here's my prediction: by "bringing AI to Raspberry Pi" they mean being able to call their cloud APIs from there.

TensorFlow is not suitable for anything practical on the Pi. You can certainly get it to run there, but CPU vector math on resource-constrained devices is not going to be a forte of a framework designed primarily for quickly iterating over models on a GPU workstation or a multi-GPU server. TF very much likes to have a beefy GPU.


You can already call whichever APIs you like from a Raspberry Pi. This announcement must be about doing something new on the Raspberry Pi (for instance, compiling TensorFlow to ARM if that isn't already supported). Perhaps the use-case is a fleet of Raspberry Pis?


Like a lot of things on the Pi, this might just be a PR stunt and be about as exciting as a sanctioned way to call APIs.

Remember when Wolfram came to the Pi? It runs too slowly to be of use to anyone, but it ships with every copy of Raspbian.

>Perhaps the use-case is a fleet of Raspberry Pis?

This would be a waste of money. I know they're cheap, but in a space that runs on GPU power, anything CPU-based isn't cost-effective at all.


This may be a dumb question, but…

The Pi does have a GPU. Nothing amazing, but better than the CPU. Given this is public knowledge, why is the GPU being ignored in comments like yours?


A Raspberry Pi is perfectly capable of easily running inference at around 10 FPS or more.

You obviously won't do any training on the Pi, but low-power devices have been used for inference for years now. For example, here it is made to run on a phone: https://github.com/tensorflow/tensorflow/tree/master/tensorf...
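
If you want to try it, inference on a frozen graph is only a screenful of TF 1.x-style Python. A rough sketch (the .pb path and tensor names below are assumptions; they vary by model):

    import time
    import numpy as np
    import tensorflow as tf

    # Load a frozen GraphDef, e.g. an exported Inception model
    graph_def = tf.GraphDef()
    with open("inception_graph.pb", "rb") as f:  # placeholder path
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

    with tf.Session() as sess:
        inp = sess.graph.get_tensor_by_name("input:0")    # assumed names,
        out = sess.graph.get_tensor_by_name("output:0")   # check your graph
        frame = np.zeros((1, 224, 224, 3), dtype=np.float32)  # stand-in camera frame

        start = time.time()
        for _ in range(10):
            sess.run(out, feed_dict={inp: frame})
        print("avg sec per inference:", (time.time() - start) / 10)

Timing it like this on your own model is the quickest way to see whether your FPS budget is realistic.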


"Inference" of what, exactly? And what do you mean by "easily"? None of this stuff is "easy" at the moment.


You fit the neural net to data -> that's called training.

You give it problems and it generates answers -> that's inference.
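
In Keras terms (a toy sketch, not anything from this thread), the two phases are literally two different calls:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([Dense(1, input_dim=4, activation="sigmoid")])
    model.compile(optimizer="sgd", loss="binary_crossentropy")

    X = np.random.rand(100, 4)
    y = np.random.randint(0, 2, size=100)

    model.fit(X, y, epochs=5)   # training: fit the weights to data (expensive)
    model.predict(X[:1])        # inference: one cheap forward pass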


Inference as in the opposite of training.

An example of inference would be feeding an image to a neural net and having it classify the image.

The example I linked to above runs Inception, which classifies whatever you point the camera at into 1,000 different categories.

It is very easy to set up (I have done it, and it only took a few minutes).


:-) I know what inference is. It's just that the speed of inference very much depends on the model you're doing the forward pass on, and the phrase "inference can be run at 10 FPS" is nonsensical without also specifying the model.


That is false. A raspberry pi is capable of running inference on 90% of the models out there.


I regret that I have but one face to palm.


I understand by your name that you may be a neural net enthusiast, but I would question how much practical experience you have.

It's a perfectly valid statement to say that a Pi will run inference on 90% of the models out there, and I have experience with the same. It would be similar to claiming that a Pi (if it could) could run 90% of the games out there.

Once again, I am speaking from practical experience, having implemented TensorFlow neural nets on low-power devices. And I sincerely get the feeling that although you're enthusiastic, you have no clue what you're talking about.

Rather than making offhand comments like "facepalm", I would challenge you to either offer up some evidence to the contrary (you could start by trying to find a TensorFlow model that a Pi won't run), or spend your time doing something more practical than acting like a clueless rabid fanboy.


I believe what general_ai means is that running models is not the issue here; it's about the FPS. Nvidia has special GPUs on the TX1 and TK1 for this. The ability to run a model is about having enough memory for it; the ability to apply a model to a real-time task is about having the compute, which for most tasks the Pi doesn't have. IIRC Pete Warden ported some low-level ops to the Pi GPU a few years ago, a difficult task. This is why it is likely that what Google has in store is a form of inference-bound co-processor resembling their TPU. Many people on this thread know what they are talking about; you just need to pay attention. There's high demand for embedded deep learning at the moment, and I've already shipped several systems for a variety of tasks. At the moment none could run at the required speed on the Pi.


Unlikely that it's the TPU. Unless there's a multi billion dollar market for something, Google's official policy is to ignore it.


feelix - what kind of models do you typically run? I've spent a fair amount of time getting neural nets to run on Raspberry Pis and other platforms. In my experience it's possible to do inference with most models, but often it's intolerably slow. For example, the stock Inception model that comes as a demo in the TensorFlow code base takes about 10 seconds per image to do inference on my Pi 3. What domains are you typically working in? Do you have some tricks to make things run faster?


It is indeed slow (of course, pretty much everything is slow on something like a Pi), but it's still fast enough for some uses. Even if you only get one inference every 5 seconds, that still has a lot of applications. And that was what I was saying when I said I don't agree with the assumption that Google's Pi support will just be cloud hooks for TensorFlow. Running locally could have a lot of uses too.

Besides which, they have been implementing things like 8-bit graphs for processing on low-power devices. That should result in a large performance increase on these devices. I tried it on mobile and got decent FPS (I can't remember the exact figure) by using it. https://www.tensorflow.org/versions/r1.0/how_tos/quantizatio...
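
A later TensorFlow API, the TFLite converter, makes the 8-bit path a few lines (note this postdates the r1.0 how-to linked above, which used the quantize_graph tool instead; the model path is a placeholder):

    import tensorflow as tf

    # Convert a trained SavedModel to a quantized TFLite flatbuffer
    converter = tf.lite.TFLiteConverter.from_saved_model("my_model/")  # placeholder
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # 8-bit weight quantization
    tflite_model = converter.convert()

    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)

That roughly quarters the weight size and speeds up CPU inference on devices like the Pi.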


I saw a demonstration (about two years ago) of a mobile, ARM-powered, offline voice recognition platform that was faster than Google's and more accurate.

There was a side-by-side with Google's online voice recognition, and it outperformed it in speed and accuracy on a mobile GPU/CPU, complete with an actual learning system. That is truer to AI for the Raspberry Pi, not to mention it addresses privacy concerns.

If this is a glorified API / Cloud adapter rather than a true AI, what is it really?

edit: found it https://www.youtube.com/watch?v=Fwzs8SvOI3Y


I like the idea of this: doing all the processing/ML client-side. I know Apple has started doing this recently (object/scene recognition and search in the Photos app), and I appreciate that this process occurs on my device rather than on, say, Google's servers...

I have a strong feeling that quite a bit of R&D has been going toward Apple's upcoming chip, which will likely have a custom GPU architecture optimized for deep learning (from which Siri will also greatly benefit) and augmented reality, like the custom HPU in HoloLens. Apple's "Lens" wearable will probably pair via W1 with an iPhone, which will handle most of the processing. Perhaps they'll even have a custom 3D/depth sensor based on the PrimeSense tech they purchased...

We're on the cusp of consumer AR going mainstream, and it's exciting.


As far as I know, for about two years now, translation and speech recognition on Google phones have been done with deep learning systems built into the phone; a network connection isn't needed. However, I couldn't immediately find a source to verify this. Can anyone confirm?


That's not the case. Today's translation and speech recognition systems are considerably larger than even the beefiest phones can sustain. There are some simple OCR models and word (not phrasal) translation systems that run on the device, but not speech recognition.


What are you basing this on? I have a Nexus 5 and I just tested it: I turned on airplane mode and used Google Translate with speech recognition. I can't quite tell if the translation is phrase-based (it looks good), but the speech recognition works well.


Bet you're right. I'm sure there's a big market, though, for stuff like speech prompting for home automation and image analysis for robotics. It could also be a blended system with local object tracking and remote object identification.


Could be. But I bet this is just a pet project that a couple of engineers do in their spare time, so I wouldn't expect too much from it. I just don't see this as something Larry would care about one iota. And things that Larry doesn't care about tend to eventually die at Google. Case in point: Social.


Misleading title: Google hasn't brought AI tools to the RPi; they are running a survey on what they should bring to the community. For those yet to take the survey, be informed that it's a long one. A coffee would be helpful.


Ya, this is a very aggressive use of the present tense.


More fake news from BBC.


It'll probably just be a cloud API "look, we can collect more data".

Hopefully it will be an efficient implementation of 1-bit-weight NNs like XNOR.ai (which has been pushing the research on 8-bit and 3-bit nets).

ARM SIMD (NEON) is not as great as x86's, but this could turn out to work and be very cache-efficient!

EDIT: for CPUs, at least.
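
To make the 1-bit idea concrete, here's a toy illustration (my own sketch, not XNOR.ai's code): a dot product of two +/-1 vectors reduces to XOR plus popcount on packed bits, which is why it's so SIMD- and cache-friendly:

    def pack_bits(v):
        # pack a +/-1 vector into an int, one bit per element (1 means +1)
        bits = 0
        for x in v:
            bits = (bits << 1) | (1 if x > 0 else 0)
        return bits

    def binary_dot(a_bits, b_bits, n):
        # dot of +/-1 vectors = agreements - disagreements = n - 2*popcount(a XOR b)
        mismatches = bin(a_bits ^ b_bits).count("1")
        return n - 2 * mismatches

    a, b = [1, -1, 1, 1], [1, 1, -1, 1]
    assert binary_dot(pack_bits(a), pack_bits(b), len(a)) == sum(x * y for x, y in zip(a, b))

On NEON you'd do the same thing 128 bits at a time with vector XOR and byte-wise popcount (VEOR/VCNT).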



