Hacker News: terrabytes's comments

I remember seeing them on Yelp a couple of months ago and they had terrible ratings (1 star). It seems to have jumped up to 3 stars now, but a surprisingly large number of them were written in the last couple of weeks. Hmm...


Suppose that data in quantities orders of magnitude greater than that used in this project was scraped from the internet and used to train a model that powers a commercial product.

Is it ethical to sell something that is dependent upon data that people might consider sensitive? Even if they willingly lent their photos to fitness organizations, it's unlikely that they would have predicted that AI would make use of their personal data in the ways it does today.


I'll be honest, this sort of thing didn't cross my mind when I was gathering my small dataset. To me, if someone had willingly let an organization publish their photo on the internet, it was fair game.

That said, having considered your point that people's expectations when handing over their data didn't account for things like deep learning, I would be hesitant to adopt the same attitude if I were working on a product or paid service. Right now I can't lay down a bottom line on whether what you described is ethical, but I do think the general public should be better informed about how their data, even old data, could potentially be used.


I have a Surface Book and absolutely love it. The trackpad is something I've had problems with on my previous Windows laptops, but the Surface Book has a fantastic trackpad, on par with MacBooks.


This is refreshing. I've been learning machine learning through Kaggle recently, and I'm a bit tired of the "tuning hyperparameters" culture. It rewards people who have the pockets to spend on computing power and the time to try every parameter. I'm starting to find problems that don't have a simple accuracy metric more interesting. They force me to understand the problem and think in new ways, instead of going down a checklist of optimizations.


I'm also starting to follow people and communities that work with deep learning in new ways. Here are some of my favorites:

[1] http://colah.github.io/

[2] https://iamtrask.github.io/

[3] https://distill.pub

[4] https://experiments.withgoogle.com/ai


You can be a little less brute-force if you use something like hyperopt (http://hyperopt.github.io/hyperopt/) or hyperband (https://github.com/zygmuntz/hyperband) for tuning hyperparameters (Bayesian and multi-armed-bandit optimization, respectively). If you're more comfortable with R, caret supports some of these techniques, and mlr has a model-based optimization package, mlrMBO (https://github.com/mlr-org/mlrMBO).

These types of techniques should let you explore the hyperparameter space much more quickly (and cheaply!), but I agree - having money to burn on EC2 (or access to powerful GPUs) will still be a major factor in tuning models.
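To make the "multi-armed bandit" idea concrete, here is a toy sketch of successive halving, the core trick behind hyperband: start many configurations on a small budget, then repeatedly keep the best fraction and give the survivors more budget. This is not the hyperopt or hyperband API, just an illustration in pure Python with a made-up `evaluate` function standing in for model training.

```python
import random

random.seed(0)

# Toy "model": the score of a configuration improves with budget (think
# epochs), approaching a config-specific ceiling. Purely illustrative --
# in practice evaluate() would train a real model for `budget` epochs.
def evaluate(config, budget):
    ceiling = 1.0 - abs(config["lr"] - 0.1)  # best lr is 0.1 in this toy
    return ceiling * (1 - 0.5 ** budget)     # more budget -> closer to ceiling

def successive_halving(n_configs=16, min_budget=1, eta=2):
    """Keep the best 1/eta of configs each round, multiplying budget by eta."""
    configs = [{"lr": random.uniform(0.0, 0.5)} for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scored = [(evaluate(c, budget), c) for c in configs]
        scored.sort(key=lambda s: s[0], reverse=True)
        configs = [c for _, c in scored[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

best = successive_halving()
print(best)  # the surviving config should have lr near 0.1
```

The point is that most of the compute goes to promising configurations; bad ones are discarded after only a cheap, low-budget evaluation, which is why these methods explore the space so much more cheaply than grid search.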


Ha, it reminds me of what Andrej Karpathy said: "Kaggle competitions need some kind of complexity/compute penalty. I imagine I must be at least the millionth person who has said this." [1] It would be interesting to collaborate/compete on more creative tasks and have different metrics for success.

[1] https://twitter.com/karpathy/status/913619934575390720


So true. Another reason to put constraints on Kaggle competitions is the production environment. How many winning models have actually been used in production? I suspect the number is near zero. High accuracy at the cost of high latency makes an ML/DL artifact unusable in production, because from the user's point of view, speed is much more valuable than the difference between 97% and 98% accuracy.


Even the city of SF has been unbelievably smoky the last couple of days.


Spot on. I struggled with the mainstream deep learning/machine learning MOOCs; I felt like they were too math heavy. However, I'm still struggling with how to learn deep learning. I get polarized advice on it: some argue that you need a degree or certificates from established MOOCs, others keep recommending that I do Kaggle challenges.

Has anyone managed to land a decent deep learning job without formal CS/machine learning training? How did you approach it?


   I felt like they were too math heavy. However, I'm struggling on how to learn deep learning.

These statements are in contention. You will never really understand machine learning without learning a fair bit of the math.

I do think a lot can be done on the presentation of the material, and certainly don't think much of credentialism.

Honestly, in your shoes I would look for a position where you can learn from people internally, rather than try and qualify yourself first. Even if you do a bunch of online learning and toy problems, you are going to flail about if you don't have a strong mentor in your first position.

What related/supportive skills do you have to bring to a group that is doing ML?

edit: I should add that you don't really have to understand much these days to integrate (some) ML into a system, but you aren't going to get very far into modeling or understanding issues without some background. You can only get so far with black boxes.


Thanks for your reply. I do agree with you, in general, and have been trying to get myself involved in more ML projects at my current work.

I have around 8 years of professional software experience (C++/C#) and have fiddled around with some rudimentary machine learning for work, like linear regression, k-means clustering, etc. I have a decent idea of how/why they work, but have fallen flat on my face when learning the theory behind more complicated algorithms, e.g. Hessians from Andrew Ng's class. In my experience, many classes tend to focus on a ground up approach. With higher level frameworks like Keras, how necessary is this?


>With higher level frameworks like Keras, how necessary is this?

I would wager that you've heard this line before, but it all depends on the particulars of what you're trying to do. If you want to develop a first-principles understanding of what's going on, it's probably important. It will be less important if you just need to see the empirical performance of an established method on your new dataset.

>but have fallen flat on my face when learning the theory behind more complicated algorithms, e.g. Hessians from Andrew Ng's class

Reading between the lines, maybe this is a question about Newton's method? One of the general strategies shared between software development and "mathematical" (for lack of a better word) science and engineering is to reduce a complex problem to a known use case. If you've got a grasp on linear regression, take a look at Newton's method in that case. You may be pleasantly surprised to see that the Hessian is constant. This might make it easier to make the connection to relevant topics such as the convergence rate of the method and the connection to the uncertainty in the fit.
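To spell that out with a minimal sketch (toy data, one parameter, pure Python): for least-squares linear regression the loss is quadratic, so the Hessian is a constant, and a single Newton step from any starting point lands exactly on the closed-form solution.

```python
# Fit y ~ a * x by least squares with a single Newton step.
# Loss:     L(a)   = sum_i (y_i - a*x_i)^2
# Gradient: L'(a)  = -2 * sum_i x_i * (y_i - a*x_i)
# Hessian:  L''(a) =  2 * sum_i x_i^2   (constant -- no dependence on a)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]  # roughly y = 2x, made-up toy data

def grad(a):
    return -2 * sum(x * (y - a * x) for x, y in zip(xs, ys))

hessian = 2 * sum(x * x for x in xs)  # same value for every a

a = 10.0                      # deliberately bad starting point
a = a - grad(a) / hessian     # one Newton step

# Because the loss is quadratic, that single step lands exactly on the
# closed-form least-squares solution:
a_star = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(a, a_star)
```

This one-step convergence is the special case; for non-quadratic losses the Hessian varies with the parameters and Newton's method iterates, but the quadratic case is a good anchor for understanding why the method converges so fast near a minimum.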


This is something I've also struggled with. I find it hard to read deep learning papers because I need to translate each piece of math notation, which makes it hard to see the bigger picture. I'm normally fond of the bottom-up approach, e.g. I started by mastering C and wrote my own libraries, but for deep learning I lean towards the opposite: starting with high-level libraries. When I want to understand the theory, I search for simple Python code that I can implement from scratch. This way I can follow the logic without having to understand all the math behind it. I've mostly focused on Kaggle-type problems and used MOOCs when I get stuck. I've had little interest from larger companies, but I've managed to get a few offers from startups. Startups often have a couple of people with PhD-level knowledge but are also looking for programmers who can code the models.
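As an example of the kind of from-scratch snippet I mean, here is a single sigmoid neuron trained by gradient descent to learn OR, in pure Python with no libraries. It's a toy, but the update rule (`err = p - y` is the derivative of the log-loss with respect to the pre-activation) is the same logic that backpropagation applies layer by layer in a real network.

```python
import math

# One neuron: p = sigmoid(w1*x1 + w2*x2 + b), trained on the OR truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

lr = 1.0
for _ in range(1000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y              # d(log-loss)/d(pre-activation)
        w1 -= lr * err * x1      # chain rule: err * input
        w2 -= lr * err * x2
        b  -= lr * err

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # expect [0, 1, 1, 1]
```

Once something like this clicks, the corresponding one-liner in a high-level library stops feeling like a black box.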


"Machine Learning Engineer" is a title we're going to see more and more of (and we're already seeing a lot).

It's one thing to know the math and theory to design, train, and tune the algorithm your company needs. But implementing it in production, at scale? That's not the same person.

Ideally, you have Person/Team A, who designs but knows enough about implementation to keep that in mind during their process, and Person/Team B who implements it into the software but knows enough about the design to make it work.


Ideally, you'd have someone, or a team, who can actually do both properly. However, very few people can, and even if you have such a person, you may not be able to justify their time on all aspects.

So the compromise is usually as you describe, but you bear the cost of translation issues no matter how you do this. It's worth remembering that it is a compromise.

I think systems like TensorFlow are implicitly a recognition of this, allowing lower impedance between the groups.


There's a difference between people who can implement models and those who can create them -- startups could use people who do the former, and many don't actually need the latter.


Very neat tool! Bookmarked.


This is fantastic!


MongoDB is a pretty good example of building a business around open source software.


Not based on how much money they're losing. It's not a viable business yet, and it probably never will be.


How can you say it's not a viable business just because they're losing money, if you don't know their expenses? Uber and Lyft (and, for many years, Amazon) were/are losing money. So losses alone mean nothing unless you know why they're blowing through cash.


Top post of this thread has their net loss: https://news.ycombinator.com/item?id=15308202

They also had to report some more detailed financials when they filed to go public: https://www.cnbc.com/2017/09/21/mongodb-ipo-seeks-to-raise-1...


lol they literally just made their full financials public


The rapid pace of new GPU hardware releases is a deterrent to owning GPUs, for sure. The K80s currently on AWS/GCP/Azure aren't the greatest, but all three will soon have Pascal P100s, and maybe even V100s in the near future. Not to mention TPUs, which will only be available on Google Cloud.

