Show HN: TensorFire (tenso.rs)
564 points by antimatter15 on July 31, 2017 | 83 comments



Hey HN!

We're really excited to finally share this with you all! This is the first of a series of demos that we're working to release this week, and we're hoping you'll hold us to that promise :)

Sorry if it doesn't work on your computer! There are still a few glitches and browser compatibility problems that we need to iron out, and we're collecting some telemetry data with LogRocket (https://logrocket.com/) to help us do so (just so you all know what kind of data is being collected).

We'll open source the library under an MIT license once we finish writing up the API docs and fixing these bugs.


Just wanted to note, I ran the kitten demo in Chrome on my Nexus 6P (Android O Beta) and it worked perfectly.

Extremely impressed. Keep it up!


It's quite unreal. I remember when the paper and initial implementations came out less than 2 years ago, you had to go through this really long setup process that only worked on certain operating systems and was a huge fuss. A few services came out online that would do it for you, but they were slow and limited, with huge queues.

Now, as you mention, you can run it in a few seconds on your phone, or in my case, on my Chromebook, right in the browser, with zero installation. Truly amazing.


It really had trouble with the portraits in my experience.


I'm not sure if it's working with my browser or not. It says "Compiling network", then shows a lot of flashing rectangles, then stops and displays a single grey rectangle. Is that what it's supposed to do?


Dude, what it did was paint, as in recognizing that it's a picture of 2 cubs and then painting it the way humans can. It's freaking amazing.


This looks awesome!

It looks like it (like keras-js) is only for inference (running already-trained models) and not for training. Is this correct?

Are the operations or memory required for training very different?


Yes, you are correct! Training benefits much more from having memory available for batching, and since in many cases you only need to train once, it usually makes sense to train on beefy GPUs.

TensorFire is useful in situations where you want to perform inference but don't want to ship user-supplied data to your servers, whether because you would run out of bandwidth, you would run out of compute power, or your users want to keep their data private.


This is great! Keep up the good work! A link to your GitHub repo would be great. I don't know if it was intentional, but I did find your library on npm: https://www.npmjs.com/package/tensorfire


Very nice work, folks. Impressive, and very well-put-together demo. That's the easiest neural style transfer demo I've ever used - and the most fun. (Other than a minute of worrying that my poor 2013 MBP was about to melt down, but that's not your fault. :-)

The download link failed, as others have noted.

Thanks so much for sharing this!


If I upload a 6MB image from my Canon, the site/browser (Chrome) crashes. Example images work fine.


Do you have benchmark numbers, like FLOPS, compared to GPU / CPU?


Works fine in Chrome on my Google Pixel, Android 7.1.2.


TensorFire was a finalist of AI Grant. Applications for the next batch are open now! Get $2,500 to work on your AI project: https://aigrant.org.

It should only take five minutes or so to apply.


Does it make sense for an active PhD student to apply?


Yes!


Really cool demo. How does this compare to https://github.com/transcranial/keras-js ? Do the authors have a licence in mind?


TensorFire is up to an order of magnitude faster than keras-js because it doesn't have to shuffle data back and forth between the GPU and CPU. Also, TensorFire can run on browsers and devices that don't support OES_TEXTURE_FLOAT.
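
For devices without float texture support, one common trick (a simplified sketch of the general technique, not necessarily our exact implementation) is to pack each float into the four bytes of a regular RGBA8 texture with a fixed-point encoding, roughly:

    // TypeScript sketch: pack a float in [0, 1) into four base-256 digits (R, G, B, A)
    // and decode it again. The same idea is usually mirrored inside a GLSL shader.
    function encodeFloat01(v: number): [number, number, number, number] {
      let x = v;
      const bytes: number[] = [];
      for (let i = 0; i < 4; i++) {
        x *= 256;
        const digit = Math.min(Math.floor(x), 255);
        bytes.push(digit);
        x -= digit;
      }
      return bytes as [number, number, number, number];
    }

    function decodeFloat01(b: [number, number, number, number]): number {
      return b[0] / 256 + b[1] / 256 ** 2 + b[2] / 256 ** 3 + b[3] / 256 ** 4;
    }

    // Round-trips to within ~2^-32 of the original value:
    console.log(decodeFloat01(encodeFloat01(0.7071)));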

We will probably release it under an MIT license.


I'm really interested in using smartphones / mobile devices for inference. Can this work with react-native so that I can build it without a bridge? I would assume I would create a webview that would load a local website.


How does it compare to WebDNN[0]? It seems like a closer comparison, especially with WebGPU.

It would be good if you had a comparative benchmark on the website.

[0]: https://mil-tokyo.github.io/webdnn/


At the moment WebDNN only runs models on the GPU in Safari Technology Preview, falling back to CPU on all other platforms / browsers: https://mil-tokyo.github.io/webdnn/#compatibility


This is amazing. I can't use GPU TensorFlow (natively) on my MacBook Pro because it doesn't have an NVIDIA graphics card. But I can... in the browser! Honestly didn't see that one coming.


To be clear, you can use it - just without GPU acceleration. The CPU-only build is supported and should work for you. If it's not, please let us know. Be sure to compile with AVX2 if you're on Haswell or later; it helps quite a bit with some models.


Well done! Also important to note this project is one of the 10 recipients of the Spring 2017 AI Grants[1].

[1] https://aigrant.org/#finalists


Could someone explain what is going on here? What are the steps? Why do those colorful artifacts appear before the final result?


It's showing a visualization of all the intermediate activations of the style transfer network. The intermediate results are 4D tensors, so they're visualized as a sequence of tiles.

The network being run is defined here https://github.com/lengstrom/fast-style-transfer/blob/master...

This post provides a pretty good explanation of what's happening: https://shafeentejani.github.io/2017-01-03/fast-style-transf...

There's a sequence of 9x9 and 3x3 convolutions that transforms that one big input image into a bunch of smaller images. They're processed by a sequence of residual convolutions. Finally, these tiny tiles are merged together back into a stylized image of the same size as the original input with a few deconvolution operations.
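
As a rough sketch of how the spatial sizes flow through the network (the exact channel counts and residual block count are in the repo linked above, so treat the numbers below as approximate):

    // TypeScript sketch: track spatial resolution through a Johnson-style
    // fast style transfer network. Only sizes are modeled, not the actual math.
    const convOut = (size: number, stride: number) => Math.ceil(size / stride); // 'same' padding
    const deconvOut = (size: number, stride: number) => size * stride;          // transposed conv

    let s = 512;          // e.g. a 512x512 input photo
    s = convOut(s, 1);    // 9x9 conv, stride 1     -> 512x512
    s = convOut(s, 2);    // 3x3 conv, stride 2     -> 256x256
    s = convOut(s, 2);    // 3x3 conv, stride 2     -> 128x128 (the small tiles)
    // ...several residual blocks of 3x3 convolutions; size stays 128x128...
    s = deconvOut(s, 2);  // stride-2 deconvolution -> 256x256
    s = deconvOut(s, 2);  // stride-2 deconvolution -> 512x512
    // final 9x9 conv, stride 1: a 512x512 stylized RGB image
    console.log(s);       // 512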


It's just a demo of an upcoming open source API that allows running deep neural network models in the browser.

Steps (disclaimer: I'm not affiliated with the creators, so this is just what I understand it does):

1. You upload your image

2. Select an image to be the origin of the style

3. Downloading Model: downloads a deep neural net trained on style transfer

4. Colorful artifacts: the model is applied to your image. The artifacts are probably a visualization of the network weights being transformed into WebGL shaders, or just a simple visualization of the internal hidden steps of the transformation

5. You get your image with the style applied


Really cool - just want to point out that the flashing rectangles might trigger seizures in people with epilepsy. I'm not sure if they're intended, but on Chrome on Linux I get a bunch of single-frame, brightly colored rectangles flashing before the result. Might want to disable that or put up a warning to avoid an accident.

That said, well done, very impressive project!


"running networks in the browser with TensorFire can be faster than running it natively with TensorFlow."

Could you elaborate on this statement? What kinds of architectures does this hold true for?


From the GitHub issue referenced in the FAQ, I think they mean that because TensorFlow only natively supports CUDA, TensorFire may outperform TensorFlow on computers that have non-Nvidia GPUs, such as the new MacBook Pro.


Kudos for providing a minimum experience on mobile! I was afraid I would have to wait until I got home :-)


I've played around with doing some computation in WebGL, but it was rather tedious and difficult with my limited knowledge of the topic. It's possible, but you can't even rely on floating point textures being available on all systems, especially mobile. And for anything more complicated, you probably need to be able to render to floating point textures, which is even rarer than support for plain floating point textures.
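
Even just detecting that takes a fair bit of boilerplate; a minimal check (plain WebGL 1 API, nothing TensorFire-specific) looks roughly like this:

    // TypeScript sketch: can this WebGL 1 context (a) sample from float textures
    // and (b) render into them? (b) is the part that's often missing on mobile.
    function checkFloatTextureSupport(canvas: HTMLCanvasElement) {
      const gl = canvas.getContext("webgl");
      if (!gl || !gl.getExtension("OES_texture_float")) {
        return { float: false, renderToFloat: false };
      }
      // Attach a small float texture to a framebuffer and see if it's complete.
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 4, 4, 0, gl.RGBA, gl.FLOAT, null);
      const fb = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
      const ok = gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      return { float: true, renderToFloat: ok };
    }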

This only makes it more impressive when people do cool computational stuff in WebGL, but I wish there were some easier ways for non-experts in shader programming to do calculations in WebGL.


WebGL 2 provides a much nicer base feature set and has been shipping & enabled in browsers since January or so.


Hmm, this seems to lock up & crash my whole browser (Chrome 59, windows, nvidia graphics) when I try to run any of the examples. It gets past Downloading Network, then gets about 5% through Compiling before getting stuck.


Crashed my Firefox as well


Yup, crashed mine as well, and my whole machine also hung up on me.


It is with great pleasure that I may present to you, Denali:

https://imgur.com/gallery/ASRQg


Didn't work here, just a bunch of colored squares on Safari, Chrome or Firefox. The latter actually managed to hang my machine. I could ssh to it but kill -9 wouldn't terminate Firefox. Had to force reboot the machine, haven't done that in years.

Amazing and scary, this WebGL thing is.

iMac 2011, latest OS

Edit: worked on a MacBook Air


Sounds like some dodgy graphics driver on your old iMac


> old iMac

I object to that qualifier, if you don't mind :)


It's not old for a computer, but it _is_ old for an Apple product (which is very unfortunate).


How so? In my experience Apple products outlast their competitors by a long margin. All my Macs have lasted about 6/7 years, meaning being used every day with the latest software for paid work. Most still boot and work today, but are impractical.

Same with iPhones; I usually stop using them because lithium batteries have a 4/5 year life span.


For a 2011 product I guess we're arriving in the 6/7 year zone :)


Yes we are. I have another already configured in a tab just waiting for me to have the courage to press buy :)


Worked fine with Safari 10.1.1 for me.


This is really cool! Great work!

I wanted to download the resulting image but got a "Failed - Network" error :(


This is awesome!

Quick question: is the code compiled from JS to WebGL in the browser as well, or do I need to compile beforehand?

I see this as a great way to learn and teach AI without having to bring a large toolchain.

Edit: it seems it is just a runtime for TensorFlow models for now!


Failed when I uploaded an image

>> framebuffer configuration not supported, status = undefined


I get an SSL error SEC_ERROR_UNKNOWN_ISSUER when I try to load this page. I tried removing https from the URL but then it's blocked by OpenDNS with message "This domain is blocked due to a security threat"


I am receiving the same error when going to it on my company's DNS. Apparently we use OpenDNS for some anti-malware, and it was flagged by that.


Whenever I click on an image in the lower left corner, it compiles the kittens. This shouldn't be like this, right? The NN is supposed to take the example I'm choosing(?)

And, as everyone else mentioned already: f*ing wow!


The one at the bottom shows different "filters" (that's surely not the right term), from what I understood. The upper left corner lets you pick different images.


Inference speed looks brilliant. Eager to read the source!

(Also, somehow I had a feeling before even reading that this project was by the people who made Project Naptha etc. Have you written/talked about this anywhere earlier?)


> as fast as CPU TensorFlow on a desktop

> You can learn more about TensorFire and what makes it fast (spoiler: WebGL)

Does this mean that using a GPU in a browser through WebGL yields the same speed as a desktop CPU?


Actually it seems like WebGL is doing it even faster. Which makes sense - machine learning involves a lot of matrix math, which GPUs are made for and CPUs aren't.
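
As a rough back-of-the-envelope illustration (my own numbers, just to give a sense of scale):

    // TypeScript sketch: one 3x3 convolution over a 256x256 feature map with
    // 64 input and 64 output channels is already billions of multiply-adds.
    const h = 256, w = 256, cIn = 64, cOut = 64, k = 3;
    const macs = h * w * cIn * cOut * k * k;  // multiply-accumulate operations
    console.log(macs.toExponential(1));       // ~2.4e+9 MACs for a single layer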


This is actually what I'd expect, but the website feels quite misleading. Advertising that a GPU-based approach can beat a CPU for neural nets is not a very strong commercial claim :)


Seriously cool. Great work. I did get a glitch every now and then in the rendered output (say 1 out of 5 times) using Safari 10.1.2, MBP touchbar 2016 15", Radeon Pro 460 4096 MB.


Is the end goal to allow people to donate computing power for training? (a la Folding@home or SETI@home except just by visiting a webpage)

If so that's amazingly clever!


I think the goal is to allow people to develop webapps with models built using neural network libraries like Keras and TensorFlow. This would greatly improve the distribution of applications that are powered by deep learning, because you won't have to install a bunch of dependencies in order to use the app.


I guess WebGL is now the "good enough" cross-platform, vendor-neutral replacement for CUDA.

TensorFlow should add a WebGL backend that runs in Node.js.


Not quite. Training is not really supported in WebGL. For running a trained model this is cutting edge, and still has varying browser quirks.


Nice demo! I made a shop where you can buy images like these (www.deepartistry.com). Would be happy to see more designs coming in.


>"Could not initialize WebGL, try another browser".

Happening in both Firefox and Chrome on Ubuntu. What exactly am I missing here?


For instance, you might be running on LLVMPipe or using some very old driver that's blacklisted in both browsers.

Firefox: about:support

Chrome: chrome://gpu


Amazing work! That was incredibly fast (2013 MBA 13" 1.7 GHz i7, Intel HD Graphics 5000 1536 MB, Chrome 59).


So I could build a model using the Google Detection API then do the actual inference within the browser?


This would be an interesting way to generate a self-updating blog or an automated news site.


Very nice to see WebGL GPGPU apps; they have been slow in coming. Any plans for WebGL 2?


Lots of potential here. Looking forward to seeing the source once it's released.


Awesome demo. Happy to report it works without a hitch on Firefox/Ubuntu.


Nice, Leonid Afremov is a great choice of input art.


Respect. This pretty much killed the PC I'm on now. Wasn't even able to get to the task manager :D

Windows 7, Firefox 54 (64-bit)


This is amazing. Very cool.


This is really cool!!


Where is the repo?


We're still finishing up a few things (documentation etc) and planning on releasing more stuff tomorrow.

You can also sign up for the mailing list if you'd like us to email you when the repo goes live!


Would be great to port YOLO to your library; it's always an impressive visual demonstration.


Great. Look forward to diving in tomorrow. Thank you for the quick reply!


Is there a way to download and play with it?


No GitHub?


I love it


Hey, stop it.

I'm running 55.0b13 (64-bit) Firefox on Windows 10, and clicking on that demo froze the browser, froze my box - hard reboot.

Whatever you're doing, some of it's wrong. Bad wrong.


I could see plausibly assigning blame to any combination of the browser (a beta version at that), the OS, or the video drivers, but instead you're seriously going to blame a web page for locking up your machine? Is that really how low we want to set that bar?



