We're really excited to finally share this with you all! This is the first of a series of demos we're working to release this week, and we're hoping you'll hold us to that promise :)
Sorry if it doesn't work on your computer! There are still a few glitches and browser compatibility problems that we need to iron out, and we're collecting some telemetry data with LogRocket (https://logrocket.com/) to help us do so (so you all know what kind of data is being collected).
We'll open-source the library under an MIT license once we finish writing the API docs and fixing these bugs.
It's quite unreal. I remember when the paper and initial implementations came out less than 2 years ago: you had to go through a really long setup process that only worked on certain operating systems and was a huge fuss. A few online services appeared that would do it for you, but they were slow and limited, with huge queues.
Now, as you mention, you can run it in a few seconds on your phone, or in my case, on my Chromebook, right in the browser, with zero installation. Truly amazing.
I'm not sure if it's working with my browser or not. It says "Compiling network", then shows a lot of flashing rectangles, then stops and displays a single grey rectangle. Is that what it's supposed to do?
Yes, you are correct! Training benefits much more from available memory through batching and, since in many cases you only need to train once, it usually makes sense to train on beefy GPUs.
TensorFire is useful in situations where you want to perform inference but don't want to ship user-supplied data to your servers, either because you'd run out of bandwidth, you'd run out of compute, or your users want to keep their data private.
This is great! Keep up the good work!
A link to your github repo would be great.
I don't know if it was intentional but I did find your library on npm: https://www.npmjs.com/package/tensorfire
Very nice work, folks. Impressive, and very well-put-together demo. That's the easiest neural style transfer demo I've ever used - and the most fun. (Other than a minute of worrying that my poor 2013 MBP was about to melt down, but that's not your fault. :-)
TensorFire is up to an order of magnitude faster than keras-js because it doesn't have to shuffle data back and forth between the GPU and CPU. TensorFire can also run on browsers and devices that don't support OES_TEXTURE_FLOAT.
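For devices without float textures, a common trick (I'm assuming TensorFire does something along these lines; the exact encoding is not published) is to pack each 32-bit float across the four 8-bit channels of an ordinary RGBA texel. A minimal sketch of that idea in plain JavaScript:

```javascript
// Sketch: pack a 32-bit float into the four 8-bit channels of an
// RGBA texel, so fixed-point-only GPUs can still store activations.
// Illustrative only -- the library's real encoding may differ.
function packFloatToRGBA(value) {
  const buf = new ArrayBuffer(4);
  new DataView(buf).setFloat32(0, value);
  return new Uint8Array(buf); // [r, g, b, a] bytes
}

function unpackRGBAToFloat(rgba) {
  const buf = new ArrayBuffer(4);
  new Uint8Array(buf).set(rgba);
  return new DataView(buf).getFloat32(0);
}
```

The round trip is exact up to float32 precision, e.g. `unpackRGBAToFloat(packFloatToRGBA(3.14))` equals `Math.fround(3.14)`. In a shader you'd do the equivalent bit-twiddling in GLSL, which is where most of the tedium lives.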
I'm really interested in using smartphones / mobile devices for inference. Can this work with react-native so that I can build with it without a bridge? I assume I would create a WebView that loads a local website.
This is amazing. I can't use GPU Tensorflow (natively) on my Macbook Pro because it doesn't have an NVIDIA graphics card. But I can... in the browser! Honestly didn't see that one coming.
To be clear, you can use it, just without GPU acceleration. The CPU-only build is supported and should work for you; if it doesn't, please let us know. Be sure to compile with AVX2 if you're on Haswell or later; it helps quite a bit with some models.
It's showing a visualization of all the intermediate activations of the style transfer network. The intermediate pictures are 4D, so they're visualized as a sequence of tiles.
There's a sequence of 9x9 and 3x3 convolutions that transforms that one big input image into a bunch of smaller images. They're processed by a sequence of residual convolutions. Finally, these tiny tiles are merged together back into a stylized image of the same size as the original input with a few deconvolution operations.
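That pipeline matches the standard fast style-transfer layout (a la Johnson et al.), so here's a rough sketch of the spatial bookkeeping; the exact layer list is my assumption, since the demo's architecture isn't published:

```javascript
// Spatial-size bookkeeping for a typical fast style-transfer net.
// "Same"-padded convolution with stride s: output = ceil(n / s).
const convOut = (n, stride) => Math.ceil(n / stride);
// Stride-s deconvolution (transposed conv) upsamples: output = n * s.
const deconvOut = (n, stride) => n * stride;

// Assumed layer list, following the Johnson-style architecture.
function traceShapes(inputSize) {
  let n = inputSize;
  n = convOut(n, 1);   // 9x9 conv, stride 1
  n = convOut(n, 2);   // 3x3 conv, stride 2 (downsample)
  n = convOut(n, 2);   // 3x3 conv, stride 2 (downsample)
  // ... residual 3x3 conv blocks keep the spatial size unchanged ...
  n = deconvOut(n, 2); // deconv, stride 2 (upsample)
  n = deconvOut(n, 2); // deconv, stride 2 (upsample)
  n = convOut(n, 1);   // final 9x9 conv back to RGB
  return n;
}
```

For a 256-pixel-wide input this goes 256 → 128 → 64 → 128 → 256, which is why the tiles in the visualization shrink and then grow back to the original size.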
It's just a demo of an upcoming open-source API that allows running deep neural network models in the browser.
Steps (disclaimer: I'm not related to the creators, so this is just my understanding of what it does):
1. You upload your image.
2. You select an image to be the source of the style.
3. "Downloading Model": it downloads a deep neural net trained on style transfer.
4. "Colorful artifacts": the model is applied to your image. The artifacts are probably a visualization of the network weights being converted into WebGL shaders, or simply a visualization of the internal hidden steps of the transformation.
Really cool. Just want to point out that the flashing rectangles might trigger seizures in people with photosensitive epilepsy. I'm not sure if they're intended, but on Chrome on Linux I get a bunch of single-frame, brightly colored rectangles flashing before the result. You might want to disable that or add a warning to avoid an accident.
From the GitHub issue referenced in the FAQ, I think they mean that because TensorFlow only natively supports CUDA, TensorFire may outperform TensorFlow on computers with non-Nvidia GPUs, such as the new MacBook Pro.
I've played around with doing some computation in WebGL, but it was rather tedious and difficult with my limited knowledge of the topic. It's possible, but you can't even rely on floating-point textures being available on all systems, especially mobile. And for anything more complicated, you probably need to be able to render to floating-point textures, which is even rarer than support for plain floating-point textures.
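For anyone wanting to probe this themselves, a rough capability check might look like the following. The extension names are real WebGL 1 extensions; the strategy labels and function names are invented for illustration:

```javascript
// Probe a WebGL 1 context for float-texture support.
function probeFloatSupport(gl) {
  return {
    // Can we create and *sample* float textures at all?
    sampleFloat: !!gl.getExtension('OES_texture_float'),
    // Can a shader *render into* a float texture? WEBGL_color_buffer_float
    // advertises this on some implementations; a conservative probe would
    // also attach a float texture to a framebuffer and check completeness,
    // since many mobile GPUs can sample floats but not render to them.
    renderToFloat: !!gl.getExtension('WEBGL_color_buffer_float'),
  };
}

// Pick a tensor-storage strategy from the probed capabilities.
function pickTextureStrategy(caps) {
  if (caps.renderToFloat && caps.sampleFloat) return 'float'; // fastest path
  if (caps.sampleFloat) return 'float-read-only'; // floats in, bytes out
  return 'byte-packed'; // encode floats across RGBA8 channels
}
```

In a page you'd call `probeFloatSupport(canvas.getContext('webgl'))` once at startup and compile different shader variants depending on the answer, which is a big part of why doing this by hand is so tedious.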
This only makes it more impressive when people do cool computational stuff in WebGL, but I'd wish there were some easier ways for non-experts in shader programming to do some calculations in WebGL.
Hmm, this seems to lock up and crash my whole browser (Chrome 59, Windows, Nvidia graphics) when I try to run any of the examples. It gets past "Downloading Network", then gets about 5% through "Compiling" before getting stuck.
Didn't work here, just a bunch of colored squares on Safari, Chrome or Firefox. The latter actually managed to hang my machine. I could ssh to it but kill -9 wouldn't terminate Firefox.
Had to force reboot the machine, haven't done that in years.
How so? In my experience Apple products outlast their competitors by a wide margin.
All my Macs have lasted about six to seven years, meaning used every day with the latest software for paid work. Most still boot and work today, but are impractical.
Same with iPhones; I usually stop using them because the lithium batteries have a four-to-five-year lifespan.
I get an SSL error (SEC_ERROR_UNKNOWN_ISSUER) when I try to load this page. I tried removing https from the URL, but then it's blocked by OpenDNS with the message "This domain is blocked due to a security threat".
Whenever I click on an image in the lower left corner, it compiles the kittens image. It shouldn't work like this, right? The NN is supposed to take the example I'm choosing. (?)
And, as everyone else mentioned already: f*ing wow!
From what I've understood, the bottom one shows different "filters" (that's surely not the right term). The upper left corner lets you pick different images.
Inference speed looks brilliant. Eager to read the source!
(Also, somehow I had a feeling before even reading that this project was by the people who made Project Naptha etc. Have you written/talked about this anywhere earlier?)
Actually, it seems like WebGL is doing it even faster, which makes sense: machine learning involves a lot of matrix math, which GPUs are built for and CPUs aren't.
This is actually what I'd expect, but the website feels quite misleading. Advertising that a GPU-based approach can outperform a CPU for neural nets is not a very strong commercial claim :)
Seriously cool. Great work. I did get a glitch every now and then in the rendered output (say 1 out of 5 times) using Safari 10.1.2, MBP touchbar 2016 15", Radeon Pro 460 4096 MB.
I think the goal is to let people develop web apps with models built using neural network libraries like Keras and TensorFlow. This would greatly improve the distribution of applications powered by deep learning, because you won't have to install a bunch of dependencies in order to use the app.
I could see plausibly assigning blame to any combination of the browser (a beta version, at that), the OS, or the video drivers, but instead you're seriously going to blame a web page for locking up your machine? Is that really how low we want to set the bar?