An Upgrade to SyntaxNet, New Models and a Parsing Competition (googleblog.com)
325 points by liviosoares on March 16, 2017 | 85 comments



I've been fighting TensorFlow for the last couple of days trying to get an application running on it; never before have I seen such a convoluted build process and maze of dependencies. The best manual on getting TensorFlow with CUDA support up and running is here:

http://www.nvidia.com/object/gpu-accelerated-applications-te...

But it is a little bit out of date when it comes to version numbers.

If you're going to try TensorBox (https://github.com/TensorBox/TensorBox) it will get a bit harder still because of conflicts and build issues with specific versions of TensorFlow.

There has to be an easier way to distribute a package.

That said, all this is super interesting and Google really moved the needle by open-sourcing TensorFlow and other ML packages.


I strongly recommend you use Keras if you are new to Tensorflow. The API abstractions will make testing your network ideas a breeze. It won't save you from the hell of building TF, but should save you loads of time with implementation and testing.
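To give a sense of how little code you end up writing, here's a minimal sketch (toy data and hypothetical shapes, nothing real) of defining and training a small feed-forward net in Keras:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    # toy data: 1000 samples, 20 features, binary labels
    X = np.random.rand(1000, 20)
    y = np.random.randint(2, size=(1000, 1))
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=20))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.fit(X, y, epochs=10, batch_size=32)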


Much appreciated, thank you!


Keras is amazing. I built TF on my Raspberry Pi and it was fairly easy. I'm guessing NVIDIA GPUs with cuDNN will get quite messy.

I would really like to see some competition in the GPU space. Nvidia monopolizing deep learning rigs may not be the best thing for the future.


Plug for Mathematica: once it's installed you can do deep learning in one or two lines, with GPU support on all three platforms and no setup. Very concise. Getting fairly competitive in features with other high-level declarative frameworks as of 11.1 (which was just released today). Very nice visualizations thanks to being in Mathematica. The language is of course closed-source, paid software. Many universities have site licenses, so there is a large built-in audience who can use it in courses etc. 'for free'; home licenses are comparable to Photoshop or whatever.

See http://reference.wolfram.com/language/guide/NeuralNetworks.h..., also look at Examples > Applications under http://reference.wolfram.com/language/ref/NetTrain.html for some worked examples. Fun example of live visualization during training (very easy to do, will get even easier in future versions): https://twitter.com/taliesinb/status/839013689613254656


Two times in my life, I've gotten deeply excited about Mathematica. The first time I wanted to use it for economics homework as an undergraduate. (Don't worry, I did it on paper first.) The second time, I wanted to use it for machine learning, especially NLP. Mainly the knowledge base Mathematica hooks into is what drew me.

The problem in the end is that the customizability of Mathematica ends right where things get interesting. If you want to show people cool little examples, Mathematica is clean and fast, but you can't build anything serious with it. And by "serious", I guess I mean something with few enough constraints to have an identity of its own, rather than being "a thing you can do with Mathematica."

Another limitation is the data input. Someone needs to rethink it.

I could be wrong. I actually want to be wrong, because of the simplicity and power of Mathematica in its scope.

Programming languages/platforms are network goods. IMHO, Mathematica has tried to swim against this fact and has failed.


> If you want to show people cool little examples, Mathematica is clean and fast, but you can't build anything serious with it.

Mathematica, which is a serious project, is largely written in Mathematica. Wolfram|Alpha, another large project, is built in Mathematica. Outside the company, https://emeraldcloudlab.com/ for example has built their platform on Mathematica.

> And by "serious", I guess I mean something with few enough constraints to have an identity of its own, rather than being "a thing you can do with Mathematica."

For research, a "thing you can do with Mathematica" is often what you want. But other than that you can put things you build in the cloud via APIFunctions (similar to AWS Lambda functions), or call out to them via wolframscript, or talk to kernels directly via MathLink or LibraryLink, or over sockets via ChannelListen.

> Another limitation is the data input. Someone needs to rethink it seriously.

That's vague, but I imagine you mean importers. Certain built-in importers aren't as good as they should be; the CSV and XLS importers are memory hogs and die on relatively small amounts of data (at least they used to, I haven't checked recently). The HDF5 importer is now pretty good, and for large datasets it's a good choice for scientific computing anyway.

> Programming languages/platforms are network goods. IMHO, Mathematica has tried to swim against this fact and has failed.

No arguments there. I think it would be great if we could open source at least parts of it, because no doubt new life would be breathed into cobwebby parts of the codebase and various pet peeves fixed. But Mathematica still dominates the computer algebra space despite being closed source, and probably will continue to do so for a while.


> For research, a "thing you can do with Mathematica" is often what you want. But other than that you can put things you build in the cloud via APIFunctions (similar to AWS Lambda functions), or call out to them via wolframscript, or talk to kernels directly via MathLink or LibraryLink, or over sockets via ChannelListen.

For research, closed source should be a dealbreaker.


By "research" I suppose you were being overly specific to mean publicly funded research?


Good points. I will admit that what I said: "you can't build anything serious with it" is too extreme. I'm not sure how to count Mathematica and Wolfram|Alpha, though. Still, Emerald Cloud Lab clearly gets credit.


Did y'all fix GTX 10x series compatibility with this release? Mathematica is some wonderful software, I admit -- but it was disappointing to try the NN examples and suddenly see nothing worked properly on my new GTX 1080 and I had to CPU train everything. :( I guess stuff like that's inevitable though so I'm not too upset.

Guess the only way is to upgrade to 11.1 and find out, but since you're apparently involved -- might as well ask...


Yup, it works. Sorry... the GTX 10x series required CUDA Toolkit 8.0, which was still an RC at the time we shipped 11.0.


I agree that Tensorflow has a mess of dependencies. The docker image worked for me: https://www.tensorflow.org/install/install_linux#InstallingD... Worked with the GPU even, on my Linux Mint gaming desktop.


That's a good idea. Will definitely try that.


sudo pip install tensorflow works fine as of the recent v1.0, and works out of the box for CPU training.
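A quick sanity check after the pip install (this is just the standard hello-world from the docs):

    import tensorflow as tf
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))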

The annoying thing for GPU training is handling the cuDNN dependency, which Google's guides are annoyingly lacking.


> The annoying thing for GPU training is handling the cuDNN dependency, which Google's guides are annoyingly lacking.

I've found a nice workaround for that one that does not require registration with NVIDIA, so you can automate it:

    # add NVIDIA's machine-learning repo (Ubuntu 14.04), which carries cuDNN
    ML_REPO_PKG=nvidia-machine-learning-repo-ubuntu1404_4.0-2_amd64.deb
    wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/${ML_REPO_PKG} -O /tmp/${ML_REPO_PKG}
    sudo dpkg -i /tmp/${ML_REPO_PKG}
    rm -f /tmp/${ML_REPO_PKG}
    # add the CUDA repo as well
    CUDA_REPO_PKG=cuda-repo-ubuntu1404_7.5-18_amd64.deb
    wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
    sudo dpkg -i /tmp/${CUDA_REPO_PKG}
    rm -f /tmp/${CUDA_REPO_PKG}
    # install cuDNN 5 from the repo (no NVIDIA registration needed)
    sudo apt-get update
    sudo apt-get install libcudnn5-dev
    sudo apt-get install libcudnn5


I'm assuming from the filename that that only works if you're on Ubuntu 14.04, right?


Erm, yes, sorry! I should have noted that, but presumably there are other files for other platforms?

(these were the ones that worked for me).

16.04 is there as well:

http://developer.download.nvidia.com/compute/machine-learnin...


It is not Google's fault. You need to agree to NVIDIA's terms and conditions in order to download cuDNN, so you have to do it manually.


It is not a good idea to compile TensorFlow on your own unless you really need it (for example, for TensorFlow Serving). The Python packages are the way to go.


I disagree - I think you are well-served to compile on your own unless you know you don't need it, e.g. if you are just trying it out to learn how it works.

The standard build uses a "least common denominator" Intel instruction set (SSE4), but the odds are extremely high that the machine on which you're running tensorflow supports 4.2. Building from source allows you to use the most up-to-date instruction set (the default configuration script at https://www.tensorflow.org will do it automatically).

I've seen dramatic (>50%) reductions in processing time on test scripts by building from source. Note that those tests were built primarily for my own education, not for benchmarking. But the speedup was so dramatic that I couldn't help but notice and probe a little deeper. YMMV depending on the particular application, whether you are using GPU computing (I am not), etc.
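Something as crude as this (purely illustrative, not the actual scripts I mentioned) is enough to see the gap between a stock wheel and a source build on most machines:

    import time
    import numpy as np
    import tensorflow as tf
    # time a batch of large matmuls on the CPU; run once per build and compare
    a = tf.constant(np.random.rand(2000, 2000), dtype=tf.float32)
    b = tf.constant(np.random.rand(2000, 2000), dtype=tf.float32)
    c = tf.matmul(a, b)
    with tf.Session() as sess:
        sess.run(c)  # warm-up
        start = time.time()
        for _ in range(50):
            sess.run(c)
        print('50 matmuls: %.2f s' % (time.time() - start))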


I don't quite understand how you're performance-obsessed enough to care about the compiler options they use, but not enough to run Tensorflow on a GPU. Even mobile GPUs offer at least a 5x speedup.


As I said above, if you are constrained to run on a machine without a supported GPU then every little bit helps.


>The standard build uses a "least common denominator" Intel instruction set (SSE4)

TensorFlow is not meant for the CPU. If you want to do something serious you have to move to the GPU; even an average GPU is at least 8 times faster than an optimized CPU build.


For those of us waiting on OpenCL support because we don't have an Nvidia GPU, CPU will have to do for now.


Buy an Nvidia GPU with 8 GB of RAM and you are good to go these days. Or you can use Amazon instances or Google Cloud. For toy things I wouldn't bother suffering the hell that is compiling it (I do it for my job and it is a pain to suffer its instability every time I need to compile it).


This is a bit unfortunate, in a real sense. I mean, I already build enough software, so I'm not sad about missing out. But here's the thing: TensorFlow actually installed great on Windows and it took less than 10 minutes to get running once I had Python 3 installed, even with GPU support. It even worked great in VS Code, out of the box, with autocomplete in Python mode. Even a baby like me got started easily.

But it's a bit disappointing to hear that the build system is something of a nightmare, if I ever wanted to contribute myself. There are always plenty of things to help with, and I don't care about the cutting edge of machine learning (I'm happy to submit docs, examples, etc.)... Then again, the TF people can't just nerd around on their build system for dorks like me to maybe write some patches every once in a while. Always great to make it easier, though.


For me the issue with such things is not so much the hassle as the long-term stability of an environment that I come to depend on. If I can't reproduce it from archived sources then there is a good chance that at some point in the future my stuff will suddenly and inexplicably stop working after some minor system update.

And Python has a huge problem with this anyway. (Or, to put it more accurately, I have a huge problem with Python in this way; historically my Python code has had a relatively short shelf life compared to my C code, or even my PHP code.)


Hi, I am working on Bazel, the build tool used by Tensorflow.

Could you please share in more detail the problems you encountered with Bazel?

If you have specific questions, I encourage you to post on StackOverflow with the bazel and tensorflow tags, both the Bazel and Tensorflow teams are monitoring and answering questions there.


> But it's a bit disappointing to hear that the build system is something of a nightmare

I know... I am building it for TensorFlow Serving and it is a pain in the ass. I would help, but I have no idea how to do it beyond finding workarounds to compile specific commits.


I have to because I'm trying to use TensorBox which does not play well with the regular version of tensorflow that you can get pre-compiled.

See this issue:

https://github.com/TensorBox/TensorBox/issues/100

and

https://github.com/TensorBox/TensorBox/issues/102

So then we're full-circle and installing from pip which doesn't work :(

sigh.

Anyway, I'll get it to work, somehow.


Those issues are related to TensorFlow <1.0, which wasn't a stable release. Try 1.0.1.


Agreed, it's more hassle than it should be, especially if building from source. However, the main installation hassle is CUDA and cuDNN, I think, not TensorFlow itself.


CUDA is always fun to install but NVIDIA does a reasonably good job as long as you remember to remove the system installed nvidia stuff beforehand (otherwise you'll be in a world of pain with a computer that will likely either hang somewhere during the boot process or that will have two conflicting sets of NVIDIA code on it).

For CuDNN I've found a good solution, see below.

TensorFlow itself worked OK once I figured out what all the dependencies were; even so, I have not been able to get it to use CUDA yet (it only works with the CPU), which is strange because other CUDA stuff works fine.
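In case it helps anyone debugging the same thing, this is a quick way to see whether TensorFlow can see the GPU at all (if no /gpu:0 device shows up here, the problem is most likely in the CUDA/cuDNN setup rather than in your model code):

    import tensorflow as tf
    from tensorflow.python.client import device_lib
    # list the devices TF detected; look for a '/gpu:0' entry
    print(device_lib.list_local_devices())
    # alternatively, log where ops actually get placed
    sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
    print(sess.run(tf.constant('device check')))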


Given how many areas NLP can be applied to, I can only imagine all of these future internal project proposals where someone has to explain to some C-suite exec how they are going to revolutionize the business with Parsey McParseface. Or better yet, when they have to budget a big upgrade to the "DRAGNN based ParseySaurus". Fun times ahead.


Sounds like a way to troll other organizations and competitors, imagine the conversations:

-(Eng) We need to switch to this new NLP framework

- (VP) Ok, why? Which one is it?

- (Eng) Huh, it's called Parsey McParseface, developed by ...

- (VP) WTF? Don't waste my time with jokes, go build your own

- (Eng) But ...

- (VP) Meeting's over.


there's probably a cooler internal name for it.


See also: spaCy, which is an open-source NLP framework that has some integration with Keras as well: https://news.ycombinator.com/item?id=13874787

...and apparently will release a major version update today. Ouch.


I don't think spaCy will be hurting any time soon. When SyntaxNet was first released last year, Matthew Honnibal had a good writeup [0] of how spaCy vastly outperforms with speed while keeping reasonable accuracy:

>On the time-honoured benchmark for this task, Parsey McParseface achieves over 94% accuracy, at around 600 words per second. On the same task, spaCy achieves 92.4%, at around 15,000 words per second. The extra accuracy might not sound like much, but for applications, it's likely to be pretty significant.

If spaCy is able to close the accuracy gap while maintaining that large speed advantage, it'll still be my go-to NLP framework!
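Part of the appeal is also how little code it takes; getting a dependency parse out of spaCy is roughly this (assuming the English model is downloaded):

    import spacy
    nlp = spacy.load('en')
    doc = nlp(u'Google released an upgrade to SyntaxNet yesterday.')
    for token in doc:
        print(token.text, token.pos_, token.dep_, token.head.text)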

[0] https://explosion.ai/blog/syntaxnet-in-context


Why ouch? :). It's not like there's a zero-sum game here. It's great to see more things being released, so the ecosystem can continue to improve.

I do wish SyntaxNet were a bit easier to use. A lot of people have asked for SyntaxNet as a backend for spaCy, and I'd love to be using it in a training ensemble. When I tried this last year, I had a lot of trouble getting it to work as a library. I spent days trying to pass it text in memory from Python, and it seemed like I would have to write a new C++ tensorflow op. Has anyone gotten this to work yet?


There is https://github.com/livingbio/syntaxnet_wrapper which does the job fairly well (I also spent days trying to pass different texts to SyntaxNet without having to reload the model). Warning: installation is a bit difficult.


This is super awesome! Thank you for mentioning them, because their announcement did not show up in my HN feed. Even being somewhat comfortable with Tensorflow, I always find some of Google's announcements kind of overwhelming and convoluted. There's something about packages like spaCy that seems comforting and less intimidating.


I think spaCy uses perceptrons (essentially a shallow neural network), so it should be faster. Accuracy is pretty similar to SyntaxNet, at least on the training data, but I'm guessing SyntaxNet works better on long-range dependencies.

I wonder if the spaCy update will go deep :)


The current update uses the linear model. I've also been working on neural network models, and more generally, better integration into deep learning workflows. That'll be the 2.0 release.

I've learned a lot while doing the neural network models, though. The 1.7 model takes advantage of this by having a more sophisticated optimizer. Specifically, I use an online L1 penalty and the Adam optimizer with averaged parameters. The L1 penalty allows control of size/accuracy trade-off.

This means we're finally shipping a small model: 50 MB in total, compared to the current 1 GB. The small model makes about 15-20% more errors.
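For the curious, here's a rough numpy sketch (not the actual spaCy code) of what an Adam step with an online L1 penalty and parameter averaging looks like; the proximal L1 step is what pushes weights to exactly zero and lets the model shrink:

    import numpy as np
    def adam_l1_step(w, grad, m, v, avg, t,
                     lr=0.001, b1=0.9, b2=0.999, eps=1e-8, l1=1e-6):
        # standard Adam moment estimates with bias correction
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        # proximal L1 step: shrink toward zero and clip, giving exact zeros
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
        # running average of the weights, used at prediction time
        avg = avg + (w - avg) / t
        return w, m, v, avg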


spaCy is great, and I find it much easier to use than Tensorflow for NLP. Looking forward to the new release today


"Python 3 support is not available yet." [1]. It's only supported in Python 2.7, Why?

[1] https://github.com/tensorflow/models/tree/master/syntaxnet


Probably because Google still mostly uses Python 2.7 internally.


Yikes! Not being able to run on Py3k is a deal breaker for me.


It's definitely an unfortunate situation. The community has been coalescing around Python 3 in the last couple years, but Google is obviously encumbered by all its legacy Python 2.7 code. Their SyntaxNet library still doesn't have Python 3 support a year after release. I'm wondering what their long-term plans are given 2.7 EOL in 2020.


That's only for the models; the core works with Python 3. I tweaked the models to use them in Python 3, and it's mostly things like xrange vs. range.
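For anyone doing the same tweak, it's mostly a shim like this at the top of the affected files (or just swapping the calls directly):

    try:
        xrange          # Python 2
    except NameError:
        xrange = range  # Python 3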


Check in a python 3 version of the code? The world will thank you!


This looks amazing. I'm especially curious how well it will work at identifying Gene/chemical nomenclature since it is fairly consistent like English spelling. For named entity recognition in biomedical text this could be really useful!


We changed the title from "Google open-sources Tensorflow-based framework for NLP", which appears misleading, given that it happened last May: https://news.ycombinator.com/item?id=11686029.

On HN the idea is to rewrite titles only to make them less misleading (or less baity). Please see https://news.ycombinator.com/newsguidelines.html.


dang, sorry if this seemed misleading. In my humble opinion, the blog title does not do full justice to the new release, primarily since it carries a new framework within SyntaxNet:

https://github.com/tensorflow/models/blob/master/syntaxnet/g...

This new DRAGNN framework is what I thought the folks here would want to know. Perhaps I should have linked to the github page, rather than the blog announcement.


Ah, I see. Probably a post pointing to that framework would have been a better idea. It never fails to surprise me, but discussion tends to be directed almost entirely by what's in a submission title.

For the same reason, it probably doesn't make sense to change the current thread to point to that Github page now, since that would orphan the existing discussion.


Very interesting release.

The bit about guessing the part of speech, stem, etc. for previously unseen words should (I think) make it much more useful in contexts that succumb to neologizing, verbing nouns, nouning verbs, and so on (such as business writing, technical writing, academic papers, science fiction & fantasy, slang, etc.).

I wonder how well it would do at parsing something that seems deliberately impenetrable, like TimeCube rants, or postmodern literary criticism.


It's much more useful in all contexts - every problem/task has 100 words that are very common and important there while being rare and unknown in general; the problem is that every niche has a different 100 terms.


Right, except in terms of neologizing I was referring to contexts where many individual texts are trying to establish a new term. So if you are trying to parse science fiction texts, yes, there are "terms of art" that don't appear outside of that field (e.g. "blaster"), but often there are terms that don't appear anywhere else, not even in other works by the same author.

Other pathological cases are business books trying to coin a term or twist existing words into new meanings (e.g. "cloud"), verbing nouns (incentivize), nouning verbs (likes, learnings), and so on.


For those of us who aren't developers but are maybe more aptly called "hackers" (because we hack stuff together even though we're operating out of our league, and sometimes we get stuff to work): is there an even higher-level guide to using TensorFlow? I am currently growing sweet peas in my office in enclosed containers that automanage environment, nutrition and water. I have the capability to log a lot of data from a lot of sensors, including images. I have _no idea_ how I would even get started using TensorFlow, but it would be cool if I could run experiments on environmental conditions and find optimal conditions for this sweet pea cultivar. Maybe I'm talking nonsense. Let me ask a more basic question: how might one log and create data for use with TensorFlow? How might TensorFlow be applied to robotic botanical situations?


The short answer is to skip TensorFlow entirely and use/learn Keras for a high-level overview; then you can learn top-down if you need to use/look at TF code directly.

Another HN thread has good tutorials for simple uses of Tensorflow: https://news.ycombinator.com/item?id=13464496

However, NNs are optimal for text/image data because they can learn the features. If your data features are already known, you don't necessarily need to use Tensorflow/Keras at all, and you'll have an easier time using conventional techniques like linear/logistic regression and xgboost.
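For example, with a table of logged sensor readings (hypothetical features here), a plain scikit-learn model already tells you "which conditions matter" without any deep learning:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    # hypothetical data: one row per trial, columns = environment settings
    X = np.array([[21.0, 0.65, 400.0],    # temp C, humidity, CO2 ppm
                  [23.5, 0.70, 550.0],
                  [19.0, 0.60, 300.0],
                  [22.0, 0.75, 500.0]])
    y = np.array([4.2, 5.1, 3.0, 4.9])    # e.g. growth in cm/week
    model = LinearRegression().fit(X, y)
    print(model.coef_)                     # rough effect of each condition
    print(model.predict([[22.5, 0.72, 520.0]]))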


sklearn has this flowchart for what machine learning method to use: http://scikit-learn.org/stable/_static/ml_map.png


The flowchart predates the rise of NNs/GBTs as Swiss-army knives, which is another reason why using either of them is sometimes considered cheating.


NNs are much older than this chart. They aren't terribly good at problems like this because they tend to overfit more than other methods. They need lots of data to generalize well. They only really excel when the data has regular structure that can be exploited by weight sharing (like CNNs for images or RNNs for time series).


I agree with the previous post that you should focus on Keras rather than Tensorflow. Understanding Tensorflow is a great skill to have because you get a deeper, more appreciative understanding of the models when you dig in. But for most applications, especially for a fun side project, Keras should be perfect.

I recommend http://course.fast.ai/ to learn more about the applications of neural networks and how to apply neural networks quickly through python.


Thanks.


This is definitely a game changer!

It's very interesting research carried out by Google's research team, and I believe this will be especially beneficial for future speech translation algorithms that could bring us a whole new, fresh experience in the way we converse with Alexa, Google Home, Siri, and many more.

If you need to install TensorFlow on your Windows 10 computer, here's a great guide which I have followed quite a few times. :)

http://saintlad.com/install-tensorflow-on-windows/


Hoping this will quickly make it into someone's home-grown, self-hosted version of Alexa.

Alexa, turn the lights on in the kitchen.

Alexa, turn on the kitchen light.

Alexa, light up the kitchen.

Should all accomplish the same task using this framework.


I've been slowly working on my own simple home "Alexa" using mostly CMUSphinx for the voice detection. Honestly my most successful methods involved the least amount of complex NLP.

Just simply treating the sentence as a bag of words and looking for "on" or "off" or "change" (and their synonyms) and the presence of known smart objects works extremely well. I could say "Hey Marvin, turn on the lights and TV", or "Hey Marvin, turn the lights and TV on", or even "Hey Marvin, on make lights and TV."

(It's named Marvin after the android from The Hitchhiker's Guide; my eventual goal is to have it reply with snarky/depressed remarks.)

Adding 30 seconds of "memory" of the last state requested also made it seem a million times smarter and turns requests into a conversation rather than a string of commands. If it finds a mentioned smart object with no state mentioned, it assumes the previous one.

"Hey Marvin, turn on the lights." lights turn on "The TV too." tv turns on

The downside to this approach is that I would be showing it off to friends and it could mis-trigger. "Marvin, turn off the lights." *lights turn off* "That's so cool, so it controls your TV, too?" *TV turns off* But it was mostly not an issue in real usage.

Ultimately I've got the project on hold for now because I can't find a decent, non-commercial way of converting voice to text. I'd really rather not send my audio out to Amazon/Google/MS/IBM. Not just because of privacy, but cost and "coolness" factor (I want as much as possible processed locally and open-source).

CMUSphinx's detection was mostly very bad. I couldn't even do complex NLP if I wanted to, because it picks up broken/garbled sentences. I currently build a "most likely" sentence by looping through Sphinx's 20 best interpretations of the sentence and grabbing all the words that are likely to be commands or smart objects. I tried setting up Kaldi, but didn't get it working after a weekend and haven't tried again since. I don't really know any other options aside from CMUSphinx, Kaldi, or a butt SaaS.

I've wanted to add a text messaging UI layer to it. Maybe I'll use that as an excuse to try playing with ParseySaurus.


> I've got the project on hold for now because I can't find a decent, non-commercial way of converting voice to text. I'd really rather not send my audio out to Amazon/Google/MS/IBM

Same concern here... so my voice->text method is via android's google voice - forced to offline mode. The offline mode is surprisingly good.

Re mis triggers... I also have opencv running on the same android. It only activates the voice recognition when I am actually looking directly at the android device (an old phone).


> text method is via android's google voice - forced to offline mode. The offline mode is surprisingly good.

I actually tried this at one point with a wall-mounted tablet before trying Sphinx. It is surprisingly good for offline, probably the best offline I've tried yet outside of dedicated software like Dragon. But it doesn't meet my open criteria, so I'm hoping to find something better.

I'll most likely give up on the requirements of it needing to be local and open, and use Sphinx for hotword detection to send the audio out to AWS for processing.

> Re mis triggers... I also have opencv running on the same android. It only activates the voice recognition when I am actually looking directly at the android device (an old phone).

That's an awesome idea :) I haven't gotten around to playing with anything vision based yet. But I've thought of 'simple' projects like that, which would add a lot to the perceived intelligence. Figuring out the number of people in a room would be another useful idea I think. The AI could enter a guest mode when there is more than 1 person in the room, or when it detects faces that aren't mine, or something similar.


> doesn't meet my open criteria

With the leaps and bounds being made in ml these days it can't be long before magnitudes better open source voice recognition becomes available. I gave Sphinx a try but it was horribly disappointing.

For me, the combination of google voice (offline) and Ivona voice (Amy) is pretty damn good for my android/python/arduino based home AI.


Sounds interesting, do you have a writeup or some other details somewhere? (How do you force android voice recognition to work offline? Just block the phone from the internet?)


Kaldi is not a point-and-click solution; it's a toolkit for developing your own speech recognition system. That said, it makes it incredibly easy if you know what you're doing, as it brings all the necessary tools and even provides some data to train your models (see the associated resources at http://openslr.org/). Its performance is state of the art.


This was recently mentioned on HN, but I haven't really looked into it (apparently requires training your own models, but provides prepared scripts to do that for some common datasets): https://github.com/mozilla/DeepSpeech


Must have slipped past me last time it was posted on HN. Thanks for sharing! I'm going to add this to my list of things to try next time I'm inspired to work on this project again.


It was only mentioned in a comment. I just checked, since it never had a submission on its own I submitted it now.


> Alexa, light up the kitchen

May also be interpreted as:

Alexa, set fire to the kitchen


> Alexa, light up the kitchen.

Alexa turns the gas stove directly to 'high', and waits.


I see your point, but the tasks that you have listed (and more difficult variants) can be easily handled using rule-based systems.


If I recall correctly, most of Alexa is simple rule-based systems.


I've been rummaging through the docs on DRAGNN but can't seem to find proper installation/run instructions. There is a Google Cloud installation, but I want to just run on my laptop for now against the pre-trained files. Can't seem to get started with DRAGNN.


Thank you thank you Ivan Bogatyy for fixing the docker run instructions. I think it was missing the docker image name before! :)


> and to allow neural-network architectures to be created dynamically during processing of a sentence or document.

Oh lord, is this the spark that lights the google skynet powder keg


Not yet, you'd need to apply the same approach to agentive (decisions on how to act) problems as opposed to classification tasks; then you'd have the spark that lights the google skynet powder keg.


I have tried to run SyntaxNet as a library but found it very difficult (lots of dependencies).



