Polling for interest here from any neural net enthusiasts:
I'm the author of an open source distributed deep learning framework called deeplearning4j.
I am doing things with it such as sentiment analysis, face recognition, voice recognition, named entity recognition, and more.
I was considering creating a JavaScript visualizer for the tool: train on the platform for the heavy lifting, then export the models to JavaScript for rendering and even in-browser prediction.
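To give a rough idea of the browser side, something like this is what I'm picturing (the exported JSON format and the loadModel helper here are hypothetical, just to illustrate the idea; nothing like this exists in deeplearning4j yet):

    // Hypothetical: load a model exported from the JVM side as JSON
    // (weights + biases per layer), then run a forward pass in the browser.
    function loadModel(json) {
      var model = JSON.parse(json);  // e.g. {layers: [{W: [[..]], b: [..]}, ...]}
      return function predict(input) {
        var activations = input;
        model.layers.forEach(function (layer) {
          var next = [];
          for (var j = 0; j < layer.b.length; j++) {
            var sum = layer.b[j];
            for (var i = 0; i < activations.length; i++) {
              sum += layer.W[i][j] * activations[i];
            }
            next.push(1 / (1 + Math.exp(-sum)));  // sigmoid units assumed
          }
          activations = next;
        });
        return activations;
      };
    }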
How interesting would this be if it were implemented?
I'd definitely be interested! As long as we're wishing, I'd also like a Node.js module to do prediction on trained models. Your project looks really interesting, thank you!
Yes, for sure. A Node module is in the works as well. My idea is to have this run as a server, akin to Storm. I'll deal with the JVM; I'll just let you guys have fun ;).
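On the Node side I'm imagining something along these lines (the package name and API below are purely hypothetical at this point):

    // Hypothetical Node API -- nothing like this is published yet.
    var dl4j = require('deeplearning4j-client');       // assumed package name
    dl4j.load('sentiment-model.json', function (err, model) {
      if (err) throw err;
      var scores = model.predict([0.12, 0.98, 0.33]);  // feature vector in, class scores out
      console.log(scores);
    });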
Also: I'd just like to say thank you for making the web development I've done over the past few years a delight.
You'll notice the training time listed at the bottom. It's immense.
The reason I'm pushing for horsepower is to reduce training time, via distributed means, to something practical for iteration.
I have visualization techniques built into the lib to help you arrive at an optimal model and confirm it works well; I still need to implement grid search and some other things.
My timeline for having all of this done is within the next month or so. The Stanford recursive neural tensor nets and the conv nets will be done shortly. The next part will be distributed GPUs ;).
I hope to make this as practical as possible for people. Once training time is practical and easy to manage, the next obvious step is wrappers for common tasks.
If you're interested in modern neural networks/deep learning in the browser, Karpathy's ConvNet.js [1] (MNIST demo [2]) is a better project from both a technical and pedagogical perspective.
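For a feel of the API, defining and training a tiny classifier in ConvNet.js looks roughly like this (using the background-colour -> text-colour task from this demo as the example; the layer and trainer options are abridged):

    var layer_defs = [];
    layer_defs.push({type: 'input', out_sx: 1, out_sy: 1, out_depth: 3});  // an RGB triple
    layer_defs.push({type: 'fc', num_neurons: 20, activation: 'relu'});
    layer_defs.push({type: 'softmax', num_classes: 2});                    // black text vs. white text

    var net = new convnetjs.Net();
    net.makeLayers(layer_defs);

    var trainer = new convnetjs.SGDTrainer(net, {learning_rate: 0.01, l2_decay: 0.001, batch_size: 10});
    var x = new convnetjs.Vol([0.3, 0.6, 0.1]);  // normalised r, g, b
    trainer.train(x, 0);                         // 0 = "black text" class, say
    var probs = net.forward(x).w;                // predicted class probabilities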
I find it interesting that "according to my preferences" is pretty much exactly what you've trained. It _should_ be better according to your preferences, since you're not training it with actual contrast differences, but with whatever it is that you prefer.
Depends on how you define "contrast". If you define it as "difference in amount of energy emitted", then neither method is using "contrast". The YIQ method is using a simple perceptual model that assigns different coefficients to red, green and blue because of how bright we perceive each of those colors to be. This has to do with the distribution of photoreceptors in the typical human eye.
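For reference, the YIQ rule being discussed boils down to roughly this (the 128 cut-off is the conventional midpoint; the demo's exact threshold may differ):

    // Perceived brightness from the YIQ colour space (Y component),
    // with 8-bit r, g, b in [0, 255].
    function yiqTextColour(r, g, b) {
      var y = (r * 299 + g * 587 + b * 114) / 1000;  // weights reflect perceived brightness of R, G, B
      return y >= 128 ? 'black' : 'white';           // bright background -> black text, dark -> white
    }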
The neural network approach goes a step further and accounts for how the brain processes color. There's no reason to consider that notion of "contrast" less valid than the brain-agnostic, eye-based model. In fact, in the context of readability, the psychovisual notion is far more useful.
One extra factor is that the YIQ method is presumably based on sRGB colours, whereas the neural network is using colours as the screen is actually displaying them (or, as they're being perceived), and most of us aren't using calibrated sRGB monitors.
W3C, in Web Content Accessibility Guidelines (WCAG) 2.0 [1] recommends a different and more complex algorithm [2] for calculating contrast between colours, which could be used instead for choosing white or black text. It would be interesting to see how the two approaches compare.
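For the curious, the contrast-ratio algorithm in [2] works out to roughly this (sketch assumes 8-bit sRGB inputs):

    // Relative luminance per WCAG 2.0, with r, g, b in [0, 255].
    function luminance(r, g, b) {
      var lin = [r, g, b].map(function (c) {
        c /= 255;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
      });
      return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2];
    }

    // Contrast ratio between two colours (ranges from 1:1 to 21:1).
    function contrastRatio(c1, c2) {
      var l1 = luminance.apply(null, c1);
      var l2 = luminance.apply(null, c2);
      return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
    }

    // Pick whichever of black/white has the higher ratio against the background.
    function wcagTextColour(r, g, b) {
      var bg = [r, g, b];
      return contrastRatio(bg, [0, 0, 0]) >= contrastRatio(bg, [255, 255, 255]) ? 'black' : 'white';
    }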
My personally trained chooser seems to prefer white text just slightly more than the baseline one. I wonder if programmers have a slightly stronger preference for light text in the more borderline situations.
Are you on a Mac?
OS X's font rendering anti-aliases light fonts on dark backgrounds differently, which makes white text appear bolder and thus easier to read.
It's nice and easy to use. At a hackathon I used it to translate outputs from an accelerometer and gyroscope into "lightsaber" moves (feeding it a sliding window of half a second of samples).
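Concretely, the sliding window was just the last half second of samples flattened into one input vector, roughly like this (the sampling rate, sensor layout and classifyMove hook here are made up for illustration):

    // Assume ~50 Hz sampling, so half a second is ~25 samples of
    // [ax, ay, az, gx, gy, gz] -- a 150-dimensional input per prediction.
    var WINDOW = 25;
    var buffer = [];

    function onSensorSample(sample) {               // sample = [ax, ay, az, gx, gy, gz]
      buffer.push(sample);
      if (buffer.length > WINDOW) buffer.shift();
      if (buffer.length === WINDOW) {
        var input = [].concat.apply([], buffer);    // flatten to one feature vector
        classifyMove(input);                        // hypothetical: feed the trained net
      }
    }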
I am colourblind. I feel like this demo has potential somehow. Let's say we have a "standardised" colour contrast formula on one side, and the one trained for my preferred colour contrast on the other. Could that somehow be used to create a customised Daltoniser?
The YIQ block is a reference algorithm using a simple model of color brightness. It decides if the color is "bright" or "dark" and then makes the text black or white according to that. It's there to demonstrate how the neural net can (if trained right) outperform the simpler model.