Ask HN: Where is AI/ML actually adding value at your company?
385 points by mkrecny on Dec 12, 2016 | 195 comments



I work in manufacturing. We have an acoustic microscope that scans parts with the goal of identifying internal defects (typically particulate trapped in epoxy bonds). It's pretty hard to define what size/shape/position/number of particles is worthy of failing the device. Our final product test can tell us what product is "good" and "bad" based on electrical measurements, but that test can't be applied at the stage of assembly where we care to identify the defect.

I recently demonstrated a really simple bagged-decision tree model that "predicts" if the scanned part will go on to fail at downstream testing with ~95% certainty. I honestly don't have a whole lot of background in the realm of ML so it's entirely possible that I'm one of those dreaded types that are applying principles without full understanding of them (and yes I do actually feel quite guilty about it).
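
(For illustration only: a minimal sketch of what such a model can look like with scikit-learn, assuming the scan has already been reduced to per-part numeric features and the labels come from the downstream electrical test. The feature files and names are invented, not the OP's actual setup.)

    # Hedged sketch of a bagged-decision-tree pass/fail predictor.
    # BaggingClassifier's default base estimator is a decision tree, so this
    # is "bagged decision trees" out of the box. Feature/label files invented.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score

    # X: one row per scanned part, e.g. particle count, max particle area,
    #    distance of largest particle from the bond centre, etc.
    # y: 1 if the part later failed the downstream electrical test, else 0
    X = np.load("scan_features.npy")      # hypothetical pre-extracted features
    y = np.load("downstream_fail.npy")

    model = BaggingClassifier(n_estimators=100, oob_score=True, random_state=0)

    # Cross-validated accuracy is a sounder estimate than a single split
    print(cross_val_score(model, X, y, cv=5).mean())

    model.fit(X, y)
    print(model.oob_score_)               # out-of-bag accuracy estimate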

The results speak for themselves though - $1M/year scrap cost avoided (if the model is approved for production use) in just being able to tell earlier in the line when something has gone wrong. That's on one product, in one factory, in one company that has over 100 factories world-wide.

The experience has prompted me to go back to school to learn this stuff more formally. There is immense value to be found (or rather, waste to be avoided) using ML in complex manufacturing/supply-chain environments.


Ok there are some warning signs here.

First, bagged decision trees are a little hard to interpret; what is the advantage of a bagged model vs the plain trees? Are you using a simple majority vote for combination? What are the variances between the different bootstraps?

Second - what do you mean by 95%? Do you mean that out of 99,999 good parts, 4,999 are thrown away, and one bad one is picked out as bad?

Third - what is this telling you about your process? Do you have a theory that has evolved from the stats that tells you why parts are failing? This is the real test for me.. If the ML is telling you where it is going wrong (even if it's unavoidable/too expensive to solve) then you've got something real.

Unfortunately my concern would be that, as it stands, you might find that in production your classifier doesn't perform as well as it did in test... My worry has been generated by the fact that this same thing has happened to me!

Several times...


It sounds like the OP is scanning for internal defects in bonds via impurities being trapped in there. These occur relatively randomly and there's some balancing point where it's just not worth trying to make the production line cleaner vs binning parts that fail some QA criteria. I do similar things with castings, where you simply just get certain voids and porosity in the steel when cast and either you can spend a tonne of money trying to eliminate them or you can spend less money testing the parts and binning those that aren't up to par.

I'd hazard to guess that the 95% is the reduction in how many parts made it through the first test only to be caught later at the more expensive stage. So instead of binning 100 parts a month at that second stage, they now bin 5 parts a month and catch way more early on.

It sounds like the OP is using ML to identify flaws that simply just occur due to imperfections in the manufacturing process. That's life, it happens. You can know that they will occur without necessarily being able to prevent them because maybe there's some dust or other particulates in the air that deposit into the resin occasionally, or maybe the resin begins to cure and leaves hard spots that form bond flaws. There's heaps of possible reasons. It sounds more like the ML is doing classification of 'this is too much of a flaw in a local zone' vs 'this has some flaws but it's still good enough to pass', which is how we operate with casting defects. For example, castings have these things called SCRATA comparator plates, where you literally look at an 'example' tactile plate, look at your cast item, then mentally decide, on a purely qualitative basis, which grade of plate it matches. Here [1] are some bad black and white photos of the plates.

[1] http://www.iron-foundry.com/ASTM-A802-steel-castings-surface...


This is pretty spot on. We know why the defects happen and why they cause downstream test failures, but we lack the ability to prevent (all of) them.

To clarify on that 95% value because it is admittedly really vague: That's actually a 95% correct prediction rate. So far we get ~2.5% false-positives and ~2.5% false-negatives. 2.5% of the parts evaluated will be incorrectly allowed to continue and will subsequently fail downstream testing (no big deal). More importantly, 2.5% of parts evaluated will be wrongly identified as scrap by the model and tossed, but this still works out to be a massive cost savings because a lot of expensive material/labor is committed to the device before the downstream test.


I hope you get a decent chunk of those cost savings as a reward for your effort, great job.


D'Angelo Barksdale: Nigga, please. The man who invented them things? Just some sad-ass down at the basement at McDonald's, thinkin' up some shit to make some money for the real players.

Malik 'Poot' Carr: Naw, man, that ain't right.

D'Angelo Barksdale: Fuck "right." It ain't about right, it's about money. Now you think Ronald McDonald gonna go down in that basement and say, "Hey, Mista Nugget, you the bomb. We sellin' chicken faster than you can tear the bone out. So I'm gonna write my clowny-ass name on this fat-ass check for you"?

Wallace: Shit.

D'Angelo Barksdale: Man, the nigga who invented them things still workin' in the basement for regular wage, thinkin' up some shit to make the fries taste better or some shit like that. Believe.

[pause]

Wallace: Still had the idea, though.


Can't believe I'm seeing the Wire referenced on HN


Oh, a Wire reference on HN... my life is one step closer to completion.


> 2.5% of parts evaluated will be wrongly identified as scrap by the model and tossed

2.5% of what, though? If only 1 in a million parts is actually bad, you're still tossing many more good parts than bad parts.


Correct - as mentioned the cost savings still work out. The defect rate is around 30% (Nowhere close to 1 in a million).
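
(Rough arithmetic on those numbers, assuming the 2.5% figures are fractions of all parts evaluated and a ~30% defect rate - purely illustrative, not the OP's actual data.)

    # Back-of-envelope reading of the figures above; invented round numbers.
    parts = 1000
    bad = 0.30 * parts            # ~300 truly defective parts
    false_neg = 0.025 * parts     # ~25 bad parts wrongly allowed to continue
    false_pos = 0.025 * parts     # ~25 good parts wrongly scrapped

    caught_early = bad - false_neg
    # ~275 defective parts caught before expensive downstream steps,
    # at the cost of ~25 good parts lost per 1000 evaluated
    print(caught_early, false_pos)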


This sounds like a pretty standard use of ML to me. No need to feel guilty, this stuff just isn't very difficult from a user's perspective, especially if you use the right libraries. It helps if you maintain a good bookkeeping of your experiments, so you have a good picture of what works and what doesn't.

By the way, control engineering for industry has traditionally been very difficult (but very well paid), requiring knowledge of systems theory, differential equations, and physics. But with the advent of ML, I suspect that might change; things may get a lot easier.


Care to elaborate?


I disagree with the above, but I think I can shed light on what they might mean. Usually, control theory (which is used in most manufacturing processes) requires quite a bit of background knowledge on the processes at hand along with fairly powerful (mathematical/physical) tools to both approximate and model such processes, along with creating systems that use these models to perform the desired task.

I believe that the parent post means that with current simulation-based tools and large amounts of data generated from manufacturing processes, one can work directly with abstract machine learning models instead of creating physical models or approximations thereof---thus being able to dispose of the mathematical baggage of optimization/control theory and work with a black-box, general approach.

I disagree since we have very few guarantees about machine learning algorithms relative to well-known control approximations with good bounds; additionally, I think it's quite dangerous to be toying with such models without extensive testing in industrial processes, which, to my knowledge, is rarely done in most settings by experts, much less by people only recently coming into the field. Conversely, you're forced to consistently think about these things in control theory, which, I believe, makes it harder to screw up since the models are also highly interpretable and can be understood by people.

This is definitely not the case in high-dimensional models: what does the 3rd edge connected to the 15th node in your 23rd layer of that 500-layer deep net mean? Is it screwing us over? Do we have optimality guarantees?


This is brilliant, would love to read a full write up on it. I hope you get a big raise.


If not, perhaps you should consider starting a company to develop this tech for others. Drop me a line :-)


Surely it would be guarded as a trade secret, as it usually happens in large companies.


Yup - To do a proper write-up that would actually be interesting to read would require divulging IP.


Nice! You might like these links too: "Machine Learning Meets Economics" uses manufacturing quality as an example.

http://blog.mldb.ai/blog/posts/2016/01/ml-meets-economics/ http://blog.mldb.ai/blog/posts/2016/04/ml-meets-economics2/


This is awesome - thank you! I went through a similar exercise to the one described in your link when evaluating the utility of the tool I described above. This is a nice write-up of the logic.

In my case the % occurrence of the defect was very high and the False-Positive cost is also very high so my tool could provide value without being too stellar of a model.


Disclaimer: I have no experience implementing any kind of ML.

How easy will it be to update your model if/when the downstream process changes?

At a previous job we had a process that relied heavily on visual inspection from employees. I often considered applying ML to certain inspection steps, but always figured it would be most useful for e.g. final inspection to avoid having to update the models frequently as the processes were updated.


That's an interesting concern that I hadn't considered if I'm understanding you correctly. I'm imagining you could have a situation where a downstream process change helps mitigate the effect of the upstream defect. In that situation your measure of what constitutes good and bad parts will need to change in the ML model.

I think I'm somewhat lucky in that with my product downstream processes are unlikely to change in a significant enough way to warrant "retraining" the model, but I guess that's probably the only way to handle that - retrain in the event of a significant process change. Our product stays fairly stable once it releases to production and the nature of the downstream processes is that they would have very little effect on the perceived severity of the defect at the final electrical test.


That's pretty awesome. What are you doing academically to learn this? Went somewhere for a masters?


I just started in UC Berkeley's MIDS program.

My only two misgivings about the program thus far: It is 1) pretty expensive and 2) geared towards working professionals rather than academics, but my employer is helping pay for a good chunk of the degree and I'm more interested in acquiring the skills and tools to go solve problems in industry as opposed to doing research.

Otherwise it has been great thus far. The program was attractive to me because it is somewhat marketed towards those that may not have a software background, but have problems in their industry that could benefit from a "proper" data science treatment. I've been referring to my application of the principles as "Six Sigma for the 21st Century" with managers/directors. I think the vast majority of HN would groan at that term, but it helps communicate the potential value to someone who has no technical background with software whatsoever (think old school manufacturing/operations types): Process improvement for environments with many variables that have practically unknowable inter-dependencies (as is the case with the project described in my original comment).


Interesting perspective. I work in manufacturing and have created similar models in the past, and I was in the MIDS program but dropped out. Like you, I found it too expensive, and I had other misgivings as well.


Care to elaborate at all on those additional misgivings? One thing I could see is that the material might not be very mind-blowing to someone who already has a software background.


Hi, I work at a manufacturing startup on pretty much the same exact problem (reducing scrap rates and downtime). I'd love to pick your brain if you have a moment to chat :) my email is mykola@oden.io


So this is a prototype and not really added value yet.


I don't understand why this comment is unpopular since the GP is phrased in such a fashion that you only notice that they're talking hypothetically if you read it carefully.

I don't think there's anything wrong with the GP's achievement or post (it's all interesting stuff), but if something has not yet been implemented, it's worth nothing since there is "many a slip 'tween cup and the lip"


Totally fair - I developed the tool for a product that won't be released until early next year so the cost savings are estimates based on expected production volumes. Its performance in a lower volume development environment has been consistent, however.


I didn't downvote the parent, but I also don't consider it to be civil (which is requested in the HN Guidelines). There are a number of ways to point out that this is not yet in production, and many of them don't require the dismissive tone his comment struck.


Also in manufacturing, would be interested in hearing more about this for detecting early on before NCR's are raised down the line.


The entire product I built over the last year can be reduced to basic statistics (e.g. ratios, probabilities) but because of the hype train we build "models" and "predict" certain outcomes over a data set.

One of the products the company I work for sells more or less attempts to find duplicate entries in a large, unclean data set with "machine learning."

The value added isn't in the use of ML techniques itself, it's in the hype train that fills the Valley these days: our customers see "Data Science product" and don't get that it's really basic predictive analytics under the hood. I'm not sure the product would actually sell as well as it does without that labeling.

To clarify: the company I work for actually uses ML. I actually work on the data science team at my company. My opinion is that we don't actually need to do these things, as our products are possible to create without the sophistication of even the basic techniques, but that battle was lost before I joined.


It's interesting to me that with all the ML hype, it's still not clear what constitutes ML. A basic k-means or naive Bayes approach will show up in ML textbooks, but those aren't clearly different from "use some statistics to make a prediction".

There's an interesting group of marginal approaches that have existed as-is for years, but have increasingly focused their branding on machine learning as its profile has risen.


> but those aren't clearly different from "use some statistics to make a prediction"

You can reduce 90% of ML to this. Even neural networks are based on statistics.

If I have to draw a line between statistics and ML, it's that ML learns, meaning it can predict things, whereas statistics only gives you information about the data you have. But for sure statistics and ML overlap a lot.


Even that doesn't seem like a clear distinction?

If you ask me for the most likely new value for a dataset, I won't know. But if I graph a few things and then write a function to spit back the current mean or median, is that machine learning?

I'm not trying to be snarky there, I agree that the bulk of ML tools are fundamentally just statistical tricks with some layer of abstraction. As a result, I have a lot of trouble knowing how much abstraction justifies the ML title. I see some people using "statistics to produce unintuitive solutions" as a standard, but that just begs the question: unintuitive to whom?


I feel like it is foremost a matter of the attitude of the practitioner. An applied statistician and a machine learning engineer may deliver exactly the same end product, just the reasoning and assumptions differ. Machine learning makes few to no assumptions, whereas statisticians do. I also feel that machine learning engineers have a bit less fear of building black boxes.

Caruana showed the cartoon of the difference between a statistician and a machine learning practitioner by showing a cliff. The statistician carefully inches to the edge, stomping her feet to see if the ground is still stable, then 10 meters before the edge she stops and draws her conclusions. The machine learning practitioner dives headfirst from the cliff, with a parachute that reads "cross-validation".

See also:

http://norvig.com/chomsky.html On Chomsky and the Two Cultures of Statistical Learning.

And http://projecteuclid.org/euclid.ss/1009213726 Statistical Modeling: The Two Cultures by Leo Breiman.

and this joke:

> Norvig teamed up with a Stanford statistician to prove that statisticians, data scientists and mathematicians think the same way. They hypothesized that, if they all received the same dataset, worked on it, and came back together, they’d find they all independently used the same techniques. So, they got a very large dataset and shared it between them.

> Norvig used the whole dataset and built a complex predictive model. The statistician took a 1% sample of the dataset, discarded the rest, and showed that the data met certain assumptions.

> The mathematician, believe it or not, didn’t even look at the dataset. Rather, he proved the characteristics of various formulas that could (in theory) be applied to the data.


> Even that doesn't seem like a clear distinction?

Obviously not; ML uses statistics as statistics uses maths. But not all ML uses statistics: some algorithms are biologically inspired (swarm optimization), others use information theory for classification.

The point of ML is you learn something from data, not necessarily with statistics, although it is used in a lot of algorithms. But also function optimization is used in a lot of algorithms. The boundaries are very fuzzy, but for sure not all ML uses statistics and not all statistics are ML.


> it means it can predict things

All the other areas of maths call that kind of prediction "interpolation". It's not a magical property that only ML has.

I'd draw the line by the name. An algorithm is ML if it includes the computer deriving a complex model based on data gathered on the field.


ML and statistics are both subsets of maths. As I said, statistics and ML overlap, and so does function interpolation. But some ML algorithms are based on biological systems (like swarm optimization) or on information theory.

If you have a problem where you want to classify some vectors, you have different ways to do it. You call all of them ML, but some use statistics, others use interpolation, others use information theory, etc. The model doesn't have to be complex or require a lot of data. Instead of naming all the different techniques, you sum them up by saying ML.


Indeed.

A lot of it went over my head because I don't know much classical statistics, but I read some articles by stats people that basically boiled down to the distinction not being in the techniques but in common assumptions, rigor, culture, etc.


Predicting things seems to be the primary purpose of statistics in many cases.


I don't think so. I think it is more similar to this description: https://www.isixsigma.com/tools-templates/sampling-data/stat...


I'd say describing uncertain processes and measures is. If you have a good description, you might be able to predict values as well.


The best distinction I've found between ML and statistics is the following.

Statistics is about modelling the underlying probability distribution that generates your data. A convergence/generalization/etc result will usually be dependent on this underlying distribution.

ML is when you don't care much about the underlying distribution (modulo regularity assumptions), and your model doesn't even come from the same family at all.

I.e. linear regression is usually statistics, because you often believe the underlying data looks like f(x) ~ f(x0) + f'(x0)(x - x0). Random forests are machine learning because you don't actually think the real world secretly has a random forest floating around.


> A basic k-means or naive Bayes approach will show up in ML textbooks, but those aren't clearly different from "use some statistics to make a prediction".

Ha, sounds like a classification problem! Let's use ML to find the boundary.


ML = anything where parameters are learnt from data.

Yes, this means ML is "just" statistics - the distinction being that it is automated so you can run it on larger amounts of data quickly.

I thought this was pretty much an accepted definition.


An accepted definition:

"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." - Tom Mitchell


> ML = anything where parameters are learnt from data.

In some ML algorithms you don't learn parameters. For example: some clustering algorithms are based on examples, not on parameters.

> Yes, this means ML is "just" statistics

So would you call a decision tree based on information theory statistics? Information theory and statistics are not clearly the same.

> I thought this was pretty much an accepted definition.

Machine Learning: a machine that learns (regardless of whether it uses statistics, information theory, function optimization, biological inspiration, or whatever)


Sure, I think minimization/maximization techniques (eg, as used in clustering and tree based learners) are generally regarded as ML too.


Actually there's a very clear definition of what types of problems ML ought to be used for, and that category of problem is what defines it. Those familiar with regression (and stats in general) ought to be familiar with it already - it's an issue of the relationship of datatype between independent and dependent variable.

In brief, you're going to run up against two types of data - categorical and continuous. (There are facets to this, eg ordinal, but these are really the elemental types of data). The relationship of datatype to independent/dependent variable is what determines what kind of analysis you may conduct.

Categorical Independent vs. Categorical Dependent, for example, is fairly restrictive, as makes logical sense. You may cross-tabulate, you may score likelihood based on previous observation, but obviously, because all of the data involved are non-numeric, there's no chance for regression, ANOVA, etc. Linear Regression is used when both independent and dependent variables are continuous, and cross-category differencing techniques like ANOVA may be used when the independent is categorical and the dependent is continuous.

The part you don't typically learn until grad school is when the independent is continuous and the dependent is categorical, ie, in ML, a classification problem. The standard statistical methods used as foundation for these problems are logistic regression, logit/probit. It's expansion of these methods that lead to ML in the first place.


If I'm reading this correctly, it's just wrong. Whatever the distinction between data analysis and ML might be, it is more than just whether your data and predicted quantities are discrete or continuous.

> Categorical Independent vs. Categorical Dependent, for example, is fairly restrictive, as makes logical sense. You may cross-tabulate, you may score likelihood based on previous observation, but obviously, because all of the data involved are non-numeric, there's no chance for regression, ANOVA, etc

If you are implying that categorical -> categorical predictions are not ML: as a counter example, natural language is a categorical (words) input that could be used to predict any number of categorical variables (parse trees, semantic categories, etc). I think it's safe to say that the field of NLP is doing machine learning.


Thanks for the sanity check. I read that reply, and got bogged down enough that I was worried my initial reaction of "what, that's not relevant!" was born of ignorance. Discrete/continuous is a distinction worth making, but as a hidden 'definition' for ML I really don't understand it.



This mirrors my own thoughts on the matter. Especially as regards the "branding" issue.


> The value added isn't in the use of ML techniques itself, it's in the hype train that fills the Valley these days: our customers see "Data Science product" and don't get that it's really basic predictive analytics under the hood. I'm not sure the product would actually sell as well as it does without that labeling.

So you are misleading your customers through omission? This is the kind of thing that makes people question anyone stating they are using ML. Those of us actually implementing ML techniques (aka training neural nets and automating processes with data) are met with unnecessary skepticism as a result.

edit: OP clarified his position since this post so take that into account when reading.


No, we actually use ML. We just don't need to, in my opinion, because the problems our products solve are more or less solvable without these techniques.

My point was that using ML, even though we don't need to, "adds value" by virtue of the hype train. We need ML to sell products, not to create them.

I do agree that this sort of arrangement lends itself to supporting skepticism around AI and ML. On the other hand I don't think that's a bad thing.


Got it. Thanks for the clarification. It is true that people are using ML where other, simpler options are available, but I wouldn't immediately discount the value of using nets for your problem. I don't know enough about your problem/implementation to speak to it really.


Yes, thanks for highlighting the deficiency in my original post. I can see how it is easily interpreted the way you did. I added a clarification (or what I hope is one).


I know a few companies riding the ml/nn train. Any chance you are based in NYC?


No, in the bay area. The hype is strong here.


Amazon Personalization.

We use ML/Deep Learning for customer-to-product recommendations and product-to-product recommendations. For years we used only algorithms based on basic statistics, but we've found places where the machine-learned models outperform the simpler models.

Here is our blog post and related GitHub repo: https://aws.amazon.com/blogs/big-data/generating-recommendat... https://github.com/amznlabs/amazon-dsstne

If you are interested in this space, we're always hiring. Shoot me an email ($my_hn_username@amazon.com) or visit https://www.amazon.jobs/en/teams/personalization-and-recomme...


So is this like the Amazon "feature" where I buy a coffee table on Amazon, then I get suggested to buy a coffee table EVERY DAY for 3 months. Literally row after row of coffee table? Because there must be a big pool of people who buy 1 coffee table buying more coffee tables immediately after?


It's a hard problem to determine the repeat purchase cadence of a product. At one end of the bell curve you have items re-purchased frequently, e.g. diapers or grocery, and on the other end you have items that are rarely repurchased.

I haven't looked at coffee tables specifically, but I know when I've looked at home products in the past I've been surprised at how frequently people will buy two large items, e.g. TVs or furniture, within a short period. That said, I agree there is room for improvement here. We're constantly running experiments to improve the customer experience, I have faith that in the limit things will improve. Again, we have no shortage of experimental power so if you'd like to join in the experimentation let me know :)


IMO it comes down to the fact that Amazon literally has my last 13 years of purchasing history, yet it seems that all they are doing is "you looked at x, let's show you y variations of that x."

My dream is that I go to Amazon.com and there are a ton of different unrelated products that people who purchase similar things as me buy. So if I only buy "buy it for life" kitchen equipment, it doesn't show me the most popular but crappy version of something, it shows me the one that I'd actually purchase.

Such an easy problem with suuuuuch a difficult solution though. Not to mention the obvious privacy concerns there.

Oh well, I know that they have good people working on the solution, and no chance I could do it better :p


This answer doesn't quite satisfy :-)

This topic must be extremely interesting (good suggestions could increase sales by a LOT) and smart people must have been working on it for quite a while.

- What is the fundamental reason why this is a hard problem?

- What's up with the coffee tables specifically, could you, for the hell of it, look into that category and tell us what the actual related products are? Let us (fail to) guess how these products are related, but don't leave us hanging :-)


"It's a hard problem to determine the repeat purchase cadence of a product."

I don't think it is.


Do you work at Amazon, or do you have experience in this area? Care to elaborate?


Do you have any obfuscated training sets available to public?

edit:typo


must be the same genius technology that leads Amazon to load up my Prime frontpage with fashion accessories when I've never had any history of searching or buying such, and recommending the same shows "Mozart in the Jungle", "Transparent", "Catastrophe" on Fire TV stick for months even though I've never shown any interest in any of such programming, even after manually "improving recommendations" by clicking "Not Interested".

It's amazing that the vaunted Amazon technology is unable to figure out an algorithm that would satisfy a user's deep desire "please stop plastering Jeffrey Tambor's lipstick and mascara covered face on my startup screen, I've gotten tired of looking at it for the past year"


perhaps you should change your Amazon password...


Your purchase was merely the inaugural move to establish your newfound hobby of coffee table collecting.


I read this same tweet last week too :)


Advertising is trained against ROI, not against what will "seem right" to the user.

Maybe in-market* furniture shoppers tend to spend a lot of money. Maybe furniture is a very profitable category. Even if the system is smart enough to assume there's only a 20% chance that you're in the process of significant furniture purchases, furniture ads may still be a better use of the ad slot than a lower value item where you have an 80% chance of being in-market.

Then why show the same damn coffee table over and over? Maybe that's more likely to return your attention to your furniture purchasing? I have no idea. Most likely, they don't know exactly either. Most likely, that's just what the highest-scoring current algorithm decided.

*The duration of "in-market" varies by category. Some product categories have a long consideration phase. For example car shoppers tend to spend 2-3 months considering alternative brands and models before they spend a few weeks narrowing down on a specific car configuration and exact pricing.


Haha yes, I remember seeing washing machines on my landing page for months after I bought one from Amazon. I mean, how many of them could a person need?

Seriously though, I don't understand why it's so hard to take this effect into account, as there should be a very strong negative correlation between a purchase in a given category and the probability of buying an article from that category in the near future, so even a simple ML algorithm should be able to pick this up easily. Anyone here who can explain why this is difficult?


The simple algorithm is to build a correlation matrix of purchases between all items in the store. Then, when given an item to generate recommendations for, you provide the other items with the highest scores, with a "top sellers" correction for the items that are correlated with everything.

I used to work for a company that implemented similar recommendation services. We approached this problem by modelling whether or not a category was likely to have recurring purchases.
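
(A toy sketch of that co-occurrence idea, with made-up purchase data and without the "top sellers" correction or the repeat-purchase modelling mentioned above.)

    # Count how often pairs of items are bought by the same customer, then
    # recommend the items that co-occur most with a given item. Toy data only.
    from collections import Counter
    from itertools import combinations

    purchases = {                       # customer -> set of items bought
        "alice": {"coffee_table", "end_table", "rug"},
        "bob":   {"coffee_table", "lamp"},
        "carol": {"end_table", "lamp", "rug"},
    }

    cooccur = Counter()
    for items in purchases.values():
        for a, b in combinations(sorted(items), 2):
            cooccur[(a, b)] += 1
            cooccur[(b, a)] += 1

    def recommend(item, k=3):
        scores = Counter({b: n for (a, b), n in cooccur.items() if a == item})
        return [b for b, _ in scores.most_common(k)]

    print(recommend("coffee_table"))    # e.g. ['end_table', 'lamp', 'rug']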


A pleasing explanation is that it is a book store.

(I'm not saying it is a good or likely explanation)


Obviously the goal of ML in this would be that feeding it enough data about users who buy coffee tables would eventually teach it that you probably don't want another coffee table (because who buys two coffee tables in a row?), but might want to buy say... end tables or other living room furniture in a matching style to the coffee table you just bought.

Not saying it works, but that'd be the goal.


Disclaimer: I know nothing about ML.

Would the standard models used allow for the fact that humans could, after buying a coffee table, choose to click on the coffee table in anticipation of then getting suggestions for similar furniture? Presumably the machine sees that the end goal of those continually clicking the same item is actually to arrive at similar items .. but wouldn't it be an obvious optimisation for Amazon to set the ML up to already look deeper than the first page reached?

I have a similar thing with Amazon, I don't know how you're supposed to access the bestseller list for a product type. I just know that if you search a product and follow related products that you eventually get a "#5 in ObscureProduct" tag and that tag takes you to the list of the top-10 models of ObscureProduct available. That sort of learnt navigation must play havoc with a suggestion algo (but IMO would be very easy to fix with just a link for any specific enough item to the 'top 10 in this category').


Predictive analytics tells them you need more coffee tables. Many many more.


Theory is that the recommendation engine is built for books. So if you buy a specific type of book, it recommends other books in the similar category. I guess they never got a chance to update it to reflect the fact that Amazon sells more than just books.


And if you buy a lot of coffee table books, you need more coffee tables.


I'm late - but that is actually called dynamic remarketing. You look at a certain category of item and then see ads (on amazon or off-site) for other items in that same category. If you actually bought the coffee table on a different device/browser/anywhere else.. then you'll see those ads for a while because they can't recognize that you actually made the purchase already.


I get similar for travel guide books, which are the kind of thing you buy once when visiting a place you have never visited before.

I get suggestions for travel guide books for the same country I visited a year ago for which I purchased a guide book.


It's more like you bought a coffee table and you get coffee beans in the recommendations. Also, your buddy, who is in the same group as you, gets a coffee table recommendation.

I guess this would make more sense.


    For years we used only algorithms based on basic statistics but we've found places where the machine learned models out perform the simpler models.
This is the right way to approach it. Too many people are looking for "deep" as some sort of silver bullet for an ill defined problem they have. If you can't validate against a simple model trained properly you are already in trouble. Likewise if you don't understand how to evaluate your generalization issues and how/if a simpler model will improve them.


Frankly, Amazon recommendations suck, they suck really hard.


We're a computer vision company; we do a lot of product detection + recognition + search, primarily for retailers, but we've also got revenue in other verticals with large volumes of imagery. My co-founder and I both did our theses on computer vision.

In our space, the recent AI / ML advances have made things possible that were simply not realistic before.

That being said, the hype around Deep Learning is getting pretty bad. Several of our competitors have gone out of business (even though they were using the magic of Deep Learning). For example, JustVisual went under a couple of months ago ($20M+ raised) and Slyce ($50M+ raised) is apparently being sold for pennies on the dollar later this month.

Yes, Deep Learning has made some very fundamental advances, but that doesn't mean it's going to make money just as magically!


Bingo.

There's a lot of "DL allows us to do X so we should make a product / service using DL to do X", rather than "We think there's value in something doing Y, what allows us to do Y? <research> DL allows us to do Y better than anything else, lets use DL"

You gave the example of Slyce. Their products are cool, but I can't help but think "is DL the best way to get that end result?" for lots of the things they do.


Can you expand more on "we do a lot of product detection + recognition + search, primarily for retailers" please? Is that something like identifying products in social media images or something?


We have several products, each of which serves different departments within retailers.

The exact things we do depends entirely on which department(s) are licensing it. Basically, anywhere there's a product image (from their own inventory to mobile to social) and we can provide some kind of help, we do. Every department needs totally different things, so it varies quite a bit...but it's all leveraging our core automated detection + recognition + search APIs.


From Coursera - we use ML in a few places:

1. Course Recommendations. We use low rank matrix factorization approaches to do recommendations, and are also looking into integrating other information sources (such as your career goals). (A toy sketch of the general idea follows after this list.)

2. Search. Results are relevance ranked based on a variety of signals from popularity to learner preferences.

3. Learning. There's a lot of untapped potential here. We have done some research into peer grading de-biasing [1] and worked with folks at Stanford on studying how people learn to code [2].
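
(As referenced in point 1 above: a toy sketch of low-rank matrix factorization on a learners-by-courses matrix. Not Coursera's actual model, just the general idea using scikit-learn's NMF on invented data.)

    # Factor a (learners x courses) interaction matrix into low-rank factors,
    # then use the reconstruction to rank unseen courses for a learner.
    # A real recommender would mask missing entries rather than treat them as 0.
    import numpy as np
    from sklearn.decomposition import NMF

    R = np.array([            # rows = learners, cols = courses, 0 = no interaction
        [5, 4, 0, 0],
        [4, 0, 0, 1],
        [0, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
    W = model.fit_transform(R)    # learner factors
    H = model.components_         # course factors
    R_hat = W @ H                 # predicted affinity, including unseen courses

    learner = 0
    unseen = np.where(R[learner] == 0)[0]
    print(unseen[np.argmax(R_hat[learner, unseen])])   # best course to recommend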

We recently co-organized a NIPS workshop on ML for Education: http://ml4ed.cc . There's untapped potential in using ML to improve education.

[1] https://arxiv.org/pdf/1307.2579.pdf

[2] http://jonathan-huang.org/research/pubs/moocshop13/codeweb.h...


I'm curious, because this is something that I was interested in doing for brick and mortar universities: what aspects do you use to do your recommendations? That is, is it just an x/5 rating per user that is thrown into a latent factor model, or do you do anything else (like dividing course 'grade' vs. opinion along two axes manually)?


Are you just weighting different scores on 2? That would more precisely be heuristics. Not really learning, unless you update the weights by minimizing some cost function.


At Sumo Logic we do "grep in the cloud as a service". We use machine learning to do pattern clustering, using lines of text to learn the printfs they came from.

The primary advantage for customers is that it's easier to use and they can troubleshoot faster.
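
(Not Sumo Logic's actual algorithm - just a toy illustration of recovering printf-style templates from log lines by masking the variable tokens and grouping.)

    # Mask obviously variable tokens (hex, IPs, numbers), then group lines by
    # the resulting template, which roughly corresponds to the printf call site.
    import re
    from collections import defaultdict

    MASKS = [
        (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
        (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<IP>"),
        (re.compile(r"\b\d+\b"), "<NUM>"),
    ]

    def template(line):
        for pattern, token in MASKS:
            line = pattern.sub(token, line)
        return line

    clusters = defaultdict(list)
    for line in [
        "connection from 10.0.0.12 port 5432 failed",
        "connection from 10.0.0.99 port 5432 failed",
        "worker 7 finished batch 1203 in 52 ms",
    ]:
        clusters[template(line)].append(line)

    for tmpl, lines in clusters.items():
        print(len(lines), tmpl)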

https://www.sumologic.com/resource/featured-videos/demo-sumo...


This is great. I've been thinking about better ways to search logs for root causes. Splunk is good if you know what you are looking for, but this is exactly what I want to see to show me unexpected things in logs.


Just a happy Sumo Logic user, saying hello and thanks! Most of your product is great (I am an ex-Splunk user)... The biggest complaint is that I can't cmd+click to open anything in new tabs, as everything is such a JS-crazy front end.

Overall the pattern matching stuff is pretty cool. Also, I would like to see raw logs around this for when I am trying to debug event grouping errors based on the starting regex.


Can you elaborate on your improvement proposals?

E.g. with LogReduce you can click on a group and see the log lines that belong to it. Is that something that solves your problem, or are you looking for something else?

Feel free to send me an email (it is on my profile).


Here at Matterport, our research team is using deep learning to understand the 3D spaces scanned by our customers. Deep learning is great for a company like ours, where so much of our data is visual in nature and extracting that information in a high-throughput way would have been impossible before the advent of deep learning.

One way we're applying this is automatic creation of panoramic tours. Real estate is a big market for us, and a key differentiator of our product is the ability to create a tour of a home that will play automatically as either a slideshow or a 3D fly-through. The problem is, creating these tours manually takes time, as it requires navigating a 3D model to find the best views of each room. We know these tours add significant value when selling a home, but many of our customers don't have the time to create them. In our research lab we're using deep learning to create tours automatically by identifying different rooms of the house and what views of them tend to be appealing. We are drawing from a training set of roughly a million user-generated views from manually created guided tours, a decent portion of which are labelled with room type.

It's less far along, but we're also looking at semantic segmentation for 3D geometry estimation, deep learning for improved depth data quality, and other applications of deep learning to 3D data. Our customers have scanned about 370,000 buildings, which works out to around 300 million RGBD images of real places.


Interesting. What is your training objective in deciding which view of the room would be the most appealing? Also, are you looking into generative models for creating new views from different angles based on existing views?


Our users have manually done a lot of the tasks we want to eventually do automatically, which effectively becomes data annotations for us to train on.


One of my coworkers used basic reinforcement learning to automate a task someone used to have to do manually. We have two data ingestion pipelines: one that we ingest immediately, and a second for our larger customers which is throttled during the day and ingested at night. For the throttled pipeline, we initially had hard-coded rate limits, but as we made changes to our infrastructure, the throttle was processing a different amount than it should have been. Sometimes it would process too much, and we would start to see latency build up in our normal pipeline, and other times it processed too little. For a short period of time, we had the hard-coded throttle with a Slack command to override the default. This allowed an engineer to change the rate limit if they saw we were ingesting too little or too much. While this worked, it was common that an engineer wasn't paying attention, and we would process the wrong amount for a period of time. One of my coworkers used extremely basic reinforcement learning to make the throttle dynamic. It looks at the latency of the normal ingestion pipeline, and based on that, decides how high to set the rate limit on the throttled pipeline. Thanks to him, the throttle will automatically process as much as it can, and no one needs to watch it.

The same coworker also used decision trees to analyze query performance. He trained a decision tree on the words contained in the raw SQL query and the query plan. Anyone could then read the decision tree to understand what properties of a query made that query slow. There have been times where we've noticed some queries having odd behavior going on, such as unusually high planning time. When something like this happens, we are able to train a decision tree based on the odd behavior we've noticed. We can then read the decision tree to see what queries have the weird behavior.
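
(A rough sketch of that second idea - a shallow decision tree over bag-of-words features of the query text, which you can then read with export_text. The queries and labels here are made up for illustration.)

    # Featurize queries as bags of words, fit a shallow tree on slow/fast
    # labels, then print the tree to see which tokens predict slowness.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.tree import DecisionTreeClassifier, export_text

    queries = [
        "SELECT * FROM events JOIN users ON events.user_id = users.id",
        "SELECT count(*) FROM events WHERE day = '2016-12-12'",
        "SELECT * FROM events ORDER BY payload DESC",
        "SELECT id FROM users WHERE id = 42",
    ]
    slow = [1, 0, 1, 0]                      # 1 = unusually slow

    vec = CountVectorizer(token_pattern=r"[A-Za-z_*]+")
    X = vec.fit_transform(queries)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, slow)
    print(export_text(tree, feature_names=list(vec.get_feature_names_out())))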


It sounds like a simple PID loop would be sufficient to solve this problem. You have a control valve and an error signal. No need for anything more complicated.


It is a PID loop, which I guess may not be considered to be actual reinforcement learning.
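
(For reference, a minimal sketch of a PID-style throttle like the one described - the gains, target latency, and surrounding plumbing are invented, not the actual system.)

    # PID controller that nudges a throttled pipeline's rate limit based on
    # the observed latency of the normal pipeline. All constants are made up.
    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measured, dt):
            error = self.setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=50.0, ki=5.0, kd=0.0, setpoint=2.0)  # target: 2s latency
    rate_limit = 1000.0                                       # events/sec

    def step(observed_latency_s, dt=60.0):
        global rate_limit
        # Latency above target -> negative error -> lower the rate limit
        rate_limit = max(0.0, rate_limit + controller.update(observed_latency_s, dt))
        return rate_limit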


the decision tree for sql analysis sounds great


At Persyst we use neural networks for EEG interpretation. Our latest version has human-level performance for epileptogenic spike detection. We are now working on bringing the seizure detection algorithm to human-level performance.


Using neural networks to model neural networks is adorably meta.


I was wondering the other day if anyone had applied this technology to EKGs. Do you also do that?


Funny you should ask, detecting QRS complexes has been my first project since starting here. I know of a few papers where the authors have applied neural networks to EKGs, but the applications have been purely academic. I'm not aware of any other companies that use NNs in practice. (There may well be some, but they tend to be secretive about how their algorithms work.) At any rate, the false positive rate of our software is now about an order of magnitude lower than anything else on the market.


Congrats on your application. Sounds very useful.

And thanks for the info. I worked years ago on a training program for EKGs and it seemed like a field ripe for application of ML and AI.


We use them for EMG data/interpretation as well.


Neat - I used convolutional neural networks to classify electrocorticographic signals during my PhD work. I'll definitely check you guys out!


The startup I'm part of uses ML to predict which end users are likely to churn for our customers.

We work with B2B and B2C SaaS, mobile apps and games, and e-commerce. For each of them, it is a generalized solution customized to let them know which end users are most at risk of churning. The prediction time range varies depending on their customer lifecycles, but for the longest lifecycles we can, with high precision, predict churn more than 6 months ahead of actual attrition.

Even more important than "who is at risk?" is "why are they at risk?". To answer this we highlight patterns and sets of behavior that are positively and negatively associated with churn, so that our customers have a reason to reach out, and are armed with specific behaviors they want to encourage, discourage, or modify.

This enables our customers to try to save their accounts / users. This can work through a variety of means, campaigns being the most common. For our B2B customers, the account managers have high confidence about whom they need to contact and why.

All of this includes regular model retraining, to take into account new user events and behaviors, new product updates, etc. We are confident in our solution and offer our customers a free trial to allow us to prove ourselves.

I can't share details, but we just signed our biggest contract yet, as of this morning. :)

For more http://appuri.com/

A recent whitepaper "Predicting User Churn with Machine Learning" http://resources.appuri.com/predicting_user_churn_ml/


We're a very retention focused energy company. I just signed up for a trial. Count me interested! :)


We exclusively rely on ML for our core product at Diffbot: automatic data extraction from web pages (articles, products, images, discussion threads, more in the pipeline), cross-site data normalization, etc. It's interesting and challenging work, but a definite point of pride for us to be a profitable AI-powered entity.


Are you guys familiar with the DeepDive work from Christopher Re's group at Stanford?


Or his company Lattice for that matter.


Yes to both!


Oh interesting. I've used diffbot and never thought Diffbot relies on AI. Could you elaborate? I thought it's a simple crawling and parsing task but I might be naive on this.


Here's a slightly more detailed description: https://www.quora.com/What-is-the-algorithm-used-by-Diffbot-...

All identification and extraction in our APIs is based on our ML models, which have been fed hundreds of thousands of data-point examples from annotated web pages. Basically: our back end has reviewed millions of web pages to learn what various components of a page are -- and even what "type" of page a page is -- and uses that to make judgments on ones submitted via API.


Our low-latency trading group uses regression widely. We have experimented with more complex models but haven't found a compelling use for them yet.


We use ML to model complex interactions in electrical grids in order to make decisions that improve grid efficiency, which has been (at least in the short term) more effective than using an optimizer and trying to iterate on problem specification to get better results.

Generally speaking, I think if you know your data relationships you don't need ML. If you don't, it can be especially useful.


Interesting, do you have a write up for someone interested in the field? What company do you work for?


My company builds software to analyze customer feedback.

We use "real" ML for sentiment classification, as well as some of our natural language processing and opinion mining tools. However, most of the value comes from simple statistical analysis/probabilities/ratios, as other commenters mentioned. The ML is really important for determining that a certain customer was angry in a feedback comment, but less important in highlighting trending topics over time, for example.


What do you mean by "real"?


Sorry, using "real" in quotes wasn't too descriptive.

A few machine learning-based classifiers (we've used Bayesian and SVM approaches). Word embeddings and topic modeling (similar to word2vec) which are based on shallow neural networks.

Those are a few of what I would consider the "real" machine learning tools we use. Most of the application, though, is statistics/pattern recognition/visualizations on top of the data calculated by the ML approaches.

The interesting thing is (in my opinion/experience) that a 10% improvement in some of the ML performance (a 10% increase in accuracy, for example) will translate to a 1-3% improvement in end user experience (they see slightly better insights and patterns, but it is a marginal improvement). On the other hand, layering a new visualization or statistical heuristic on top of the data can lead to a significant boost in user experience.

Again, this is just for our specific application/domain, but we focus on making the ML results more accessible to users instead of focusing on the marginal accuracy of the ML results themselves.


Detecting fraud. I work for a credit card company.

Not really a new application though...


FinTech: Credit risk modeling. Spend prediction. Loss prediction. Fraud and AML detection. Intrusion detection. Email routing. Bandit testing. Optimizing planning/ task scheduling. Customer segmentation. Face- and document detection. Search/analytics. Chat bots. Sentiment analysis. Topic analysis. Churn detection.


I can imagine that fin tech will love it. Everything will go wrong one day in the future and no one will know the reason.


Why would they love something that goes wrong?


We've been using "lite" ML for phenotype adjudication in electronic health records with mild success. Random forests and support vector machines will outperform simple linear regression when disease symptoms/progression don't neatly map to hospital billing codes.


In my last job at a big telco I was working with/on a scorecard driven next-best-offer system steering 80-90% of all outbound callcenter activities. I would not call it AI/ML because the scorecards were built with good old logistic regression and were pretty old (bad) but the process made us 25 M €/year (calculated NPV). I don't know how much of it was added by the scoring process. We also had a real-time system for SMS marketing built on the top of the same next-best-offer system making 12+ M €/year (real profit).

On the other hand I found an internal fraud costing us 2-3 M €/year applying only the weak law of large numbers. Big corp, big numbers.

Now I build a similar system for a smaller company. I think we will stick mainly to logistic regression. I actually use "neural networks" with hand-crafted hidden layers to identify buying patterns in our grocery store shopping cart data. It works pretty well from a statistical point of view but it is still a gimmick used to acquire new b2b partners.


Here at Qualia (qualia.ai) we process mostly textual data from online sources (news, blogs, social media, internal data). Our background is in NLP, from back in the day when AI meant deep parsing, HPSG, tree-adjoining grammars, synsets, frames and speech acts, discourse, and different flavors of knowledge representation. It also meant LISP and Prolog. The domain quickly evolved from knowledge- and rule-based to data-driven and statistical, mostly thanks to Brown and the IBM MT team in the 90s (that are now part of the Renaissance Fund).

We use hierarchical clustering for topic detection. We also work on topic models (Blei and his legacy). We use word embeddings for information retrieval and various ML algorithms for different applications of mood and emotional learning: Bayes, SVM, Winnow (linear models) and sometimes decision trees and lists. We also learn from past events and crises in order to create models, mostly statistical, and try to estimate how an event might evolve in the future. We have also tried graph-based community detection algorithms on Twitter (min-cut). Finally we have experimented with non-linear statistical analysis on micro-blogging data, by applying methods such as correlation functions, escape times, and multi-step Markov chains (but with limited success).

I'd like to add here that I feel ML is well defined (supervised, semi-supervised, unsupervised, and using unlabeled data), statistical learning is fuzzier (a good starting point is Vapnik's work), and regarding AI, I am not sure I know anymore what it means! I am always open to discussion and ideas. Let me know.


We leverage machine learning in the asset replacement modeling space. Basically there is an optimum time to sell your vehicle and purchase a new one based on our model. Our company works with large fleet organizations and provides analytics suite for vehicle replacement, mechanic staffing, benchmarking, telematics and other aspects of fleet management.


Is this useful for individuals also? I would really like to know the optimal time to sell my car. Or is this more like chart analysis which only works as long as the people having access to that information is limited?


Theoretically it could be, but most fleets gather much more data about their vehicles than an average consumer. For example all repairs, parts and labor costs, maintenance, mileage, engine hours and much much more. In addition, majority now leverage telematics which greatly improves the resolution and depth of this data. This data is quite necessary to make our model work. From high level perspective though most consumers sell their vehicles way before that optimal time frame.


I work at Periscope Data - we do our own lead scoring using home-baked ML through SciPy. It was interesting to see it play out in the real-world - interpretation of features/parameters was definitely important to the people driving the marketing/sales orgs.

We also support linear regression in the product itself - it was actually an on-boarding project for one of the engineers who joined this year, and he wrote a blog post to show them off: https://www.periscopedata.com/blog/movie-trendlines.html About 1/3rd of our customers are using trendlines, which is pretty good, but we haven't gotten enough requests for more complex ML algorithms to warrant focusing feature development there yet.


We use Convolutional Networks for semantic segmentation [1] to identify objects in the user's environment to build better recommendation systems, and to identify planes (floor, wall, ceiling) to give us better localization of the camera pose for height estimates. All from RGB images.

[1] https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn...


Once an analyst has manually reviewed something, a software system updates a row in a database to mark it as done. Our marketing team calls this machine learning, because the system "learns" not to give analysts the same work twice.

We also use ML to classify bittorrent filenames into media categories, but it's pretty trivial and frankly the initial heuristics applied to clean the data do more of the work than the ML achieves.


We use deep learning at attentive.ai to generate alerts based on unusual events in surveillance video.

We use neural nets to generate descriptors of videos where motion is observed, and classify events as normal/abnormal.


Based on past experimental data, we use ML to predict how effective a given CRISPR target site will be. This information is very valuable to our clients.


That sounds interesting, especially given that a good enough physical model could compute that de novo.


Machine learning is great for helping you understand a new dataset quickly. I often train a basic logistic regression classifier and introspect the coefficients to learn what features are important, which are unimportant, and how they are correlated.

There are a number of other statistical techniques you can use for this but scikit-learn makes this very very easy to do.
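A minimal sketch of that workflow, with placeholder feature names and toy data (real use needs a proper train/test split and some care with scaling):

    # Fit a logistic regression and inspect which features push
    # predictions up or down.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    feature_names = ["age", "visits", "avg_order"]           # placeholder names
    X = np.array([[25, 1, 10.0], [40, 7, 55.0], [31, 3, 20.0], [52, 9, 80.0]])
    y = np.array([0, 1, 0, 1])

    X_scaled = StandardScaler().fit_transform(X)             # so coefficients are comparable
    clf = LogisticRegression().fit(X_scaled, y)

    for name, coef in sorted(zip(feature_names, clf.coef_[0]),
                             key=lambda t: abs(t[1]), reverse=True):
        print(f"{name:>10}: {coef:+.2f}")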


Pretty basic here.. we are a payments processor so we check volume, average ticket $, credit score and things of that nature to determine the quality and lifetime of a new merchant account.


- We use a complex multivariate model to predict customer conversion and prioritize lead response
- We use text analysis to improve content for effectiveness and conversion

Among other things.


Can you please explain the complex multivariate model in detail? I'm curious to learn about it.


Type "Introduction/Tutorial Multivariate Statistics" into Google. I saw quite a few results with those words in the title. Probably what you want.


I would suspect AI/ML profits come largely from improving ad revenue at very stable companies.


I think a lot of the real benefit from ML "at work" comes from just cleaning data and running through the gauntlet of the simplest regressions (before jumping to something more magical whose outputs and decision-making process you can't exactly explain to someone).

I would classify something like this blog post as ML, would you? http://stackoverflow.blog/2016/11/How-Do-Developers-in-New-Y...


When people talk about the growth (or sometimes 'excess') of ML solutions these days, I always wonder about this.

A basic linear regression probably isn't ML, a backprop neural net clearly is, but somewhere between the two is a very fuzzy line between "statistics and data cleaning" and "actually machine learning". I think a lot of people have just pushed the ML angle of an already-reasonable approach to tie into that popularity.


I think these distinctions are akin to discussing musical genre in Youtube comments. Something can be all of: mathematics, advanced IT, computer science, data science, statistics, econometrics, control theory, pattern recognition, machine learning, predictive modeling, data mining, data analysis, data engineering, or even AI if you are feeling fancy.


ML courses often start with linear regression, and if you build up complicated polynomials to find a nonintuitive model of your problem, I would definitely consider that machine learning.
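To make that concrete, here is roughly what the "complicated polynomials" version looks like in scikit-learn (degree and data are arbitrary; this is just an illustration of the technique, not anyone's production model):

    # Polynomial features + linear regression on toy data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    x = np.linspace(0, 4, 30).reshape(-1, 1)
    y = 1.0 + 0.5 * x.ravel() - 0.8 * x.ravel() ** 2 + np.random.normal(0, 0.1, 30)

    model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
    model.fit(x, y)
    print(model.predict([[2.0]]))   # prediction from the fitted cubic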


I wouldn't. I'd call it "basic computational statistics." But I think I might be in the minority on that.


I've always thought of regressions, even high-order ones, as just a statistical tool. They're present at the start of ML courses, sure, but as a tool used in ML techniques or a good alternative to them.

It looks like that's not the standard view, though.


The whole deal seems weird to me.

Neural networks are just function approximators, so why isn't a linear regressor of k-th order (e.g. a Taylor expansion up to k-th order) also considered "ML"? What's the distinction here?


I like this definition of ML.

Off topic: somehow the people who skip data cleaning because it's so boring, and who assume ML techniques don't need clean data, end up with the worst overfitting I have ever seen.


Raising money from clueless investors


I've got some ocean-front property in Arizona I'd like to sell ... I know it's a premium price but it's worth it!


"Our AI models show that California will sink into the ocean, and you'll be ahead of the curve with your own Arizona Bay beachfront property!"


Numerai is analyzing HN comments as a metric for choosing stock trades and has just shorted all west coast companies.


Are you training AI to determine which investors are clueless? Sounds like a good investment!


Nothing in my department yet, but we actually have a guy actively looking for a reason to implement some kind of ML so we can say our product "has it" I guess.


Yep, our tech guys are constantly looking for ways to implement things that may or may not be useful, or even understood; we've just gotta be able to say we have the latest in machine learned blockchain-based buzzword doodads to constantly reinforce our reputation as the most "high tech" organisation in our sector.


Sounds like you work at Xerox!


I run a deep learning company focused on a lot of banking and telco fraud workloads like [1]. We have also used DL to predict failing services so workloads can be auto-migrated before server failure.

The bulk of what we do is anomaly detection.

[1] https://skymind.io/case-studies

[2] https://insights.ubuntu.com/2016/04/25/making-deep-learning-accessible-on-openstack/


We realized that by adjusting training models we could incorporate autonomous recognition of not only images but intent and behavior into our application suite.


Deep learning to identify available space in kit from images! We are dead proud of it!

Traditional learning for many applications: fault detection, risk management for installations, job allocation, incident detection (early warning of big things), content recommendation, media purchase advice, and others.

Probabilistic learning for inventory repair - this hasn't had an impact yet; the results are great, but the advice has not yet been ratified and productionised.


I'm using some of the pre-built libraries to find/fix low hanging fruit of data quality issues for https://www.findlectures.com, for instance finding speaker names.

The first pass is usually a regex to find names; then, for what's left, a natural language tool to find candidate names; and finally manual entry.
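As a rough sketch of that regex-then-NLP flow (the pattern and the spaCy model here are illustrative assumptions, not necessarily the exact tools in use):

    # Two-pass name finder: a cheap regex first, then an NER model
    # for whatever the regex misses.
    import re
    import spacy

    TITLE_CASE = re.compile(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b")   # naive "First Last" pattern
    nlp = spacy.load("en_core_web_sm")                           # assumes this model is installed

    def candidate_names(text):
        names = set(TITLE_CASE.findall(text))
        if not names:                                            # fall back to NER
            doc = nlp(text)
            names = {ent.text for ent in doc.ents if ent.label_ == "PERSON"}
        return names

    print(candidate_names("A talk by Jane Doe on distributed systems"))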


At our data science company, we're building a marketing automation platform that uses deep reinforcement learning to optimize email marketing campaigns.

Marketers create their messages and define their goals (e.g., purchasing a product, using an app) and it learns what and when to message customers to drive them towards those goals. Basically, it turns marketing drip campaigns into a game and learns how to win it :)

We're seeing some pretty great results so far in our private beta (e.g., more goals reached, fewer emails sent), and we're excited to launch into public beta later this month.

For more info, check out https://www.optimail.io or read our Strong blog post at http://www.strong.io/blog/optimail-email-marketing-artificia....


That's a very interesting case. In my company, we would also like to optimize email marketing campaigns using RL. However, based on my limited experience with RL (please correct me if I'm wrong), wouldn't it take a long time to iterate and update the value and policy functions (or the Q function, if we use Q-learning)? I'm a bit skeptical that it can be used for real-world cases where we need to wait days for the email response as feedback from the environment.


Great points. It's definitely more challenging than learning to play a simple arcade game or something, where feedback is invariant and often instantaneous. To address these challenges, we use a combination of (1) heuristics tailoring our RL algorithms to the problem at hand, and (2) many converging sources of feedback. Most importantly, as with any machine learning implementation, it works in practice: our AI-driven campaigns beat randomized control conditions!
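To make the Q-learning framing concrete, a toy tabular update for a "send or wait" decision might look like this (states, actions and rewards are invented purely for illustration, not a description of Optimail's internals):

    # Toy tabular Q-learning for an email campaign: the "state" is a coarse
    # bucket of customer activity, the actions are send/wait, and the reward
    # arrives later (e.g. a purchase within a few days).
    import random
    from collections import defaultdict

    ACTIONS = ["send", "wait"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    Q = defaultdict(float)          # (state, action) -> value

    def choose(state):
        if random.random() < EPSILON:                          # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])       # exploit

    def update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # e.g. a delayed conversion credited back to the decision that caused it:
    update(state="inactive_7d", action="send", reward=1.0, next_state="purchased")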


I was doing something similar in email marketing. Used decision tree models with a lot of feature engineering to help predict email open rates.


At Graphistry, we help investigate & correlate events, initially for security logs. E.g., Splunk & Sumo centralize data and expose grep + bar charts; we then add visual graph analytics that surface entities, events, & how they connect/correlate: "It started here, then went there, ...". We currently do basic ML for clustering / dimensionality reduction, where the focus is on exposing many search hits more sanely.

Also, some GPU goodness for 10-100X visual scale, and now we're working on investigation automation on top :)


Helping to moderate comments on theguardian.com!

https://skillsmatter.com/skillscasts/9105-detecting-antisoci...

(We're still beginners, as will be apparent from the video, but it's proving useful so far. I should note we do have 'proper' data scientists too, but they are mostly working on audience analysis/personalisation.)


We're building models of human behavior to provide interactive intelligent agents with a conversational interface. AI/ML is literally the backbone of what we're doing.


Providing users the best recommendations so they participate more, get more from the service, and churn less. Detecting fraud and so saving money. Predicting users who are about to leave and allowing us to reach out to them. Dynamic pricing to take optimum advantage of the supply and demand curve. Delayed release of product so it doesn't all get reserved immediately and people don't have to camp the release times.


Wrote a grammar checker that used both ML models and rules (which in turn used e.g. part-of-speech taggers based on ML).

Wrote a system for automatically grading kids' essays (think the lame "summarize this passage"-type passages on standardized tests). In that case it was actually a platform for machine learning - ie, plumb together feature modules into modeling modules and compare output model results.


At ScreenSquid we use statistical analysis to find screen recordings of the most active users on your website. This saves our customers a ton of time avoiding playing with filters trying to find "good" recordings.

https://screensquid.com/2016/12/introducing-star-ratings/


Hierarchical clustering?


Predict probability of car accidents based on the sensors of your smartphone


How do you turn these predictions into cash?


Highly targeted ads for lawyers and healthcare after a crash.


Insurance, perhaps? The company Zendrive is doing something similar.


Our main product uses machine learning and natural language processing to predict how long JIRA tickets are going to take to resolve.

(www.queckt.com if anyone's interested)

Without AI/ML, we wouldn't have a product.
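Not our actual pipeline, but the bare-bones version of "predict resolution time from ticket text" could look like this (fields and data are placeholders):

    # Predict ticket resolution time (in hours) from the ticket's text
    # with TF-IDF features + a linear model (toy data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    summaries = [
        "Login page throws 500 error for SSO users",
        "Update copyright year in footer",
        "Migrate reporting database to new cluster",
    ]
    hours_to_resolve = [16.0, 1.0, 120.0]

    model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
    model.fit(summaries, hours_to_resolve)
    print(model.predict(["Dashboard charts fail to load for admin users"]))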


I have a follow on question to this to all the respondents: Can you briefly describe the architecture you are using? Cloud-based offering vs self-hosted, software libraries used, etc...


We use machine learning to detect anomalies on our customers' data and alert them of potential problems. It's not fancy or cutting edge, but it provides value.
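For the curious, the not-fancy version of this is only a few lines, e.g. with scikit-learn's isolation forest (toy data; the contamination rate is arbitrary and not necessarily what we run):

    # Flag unusual rows in a metric stream with an isolation forest.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # e.g. one row per hour: [requests, error_rate]
    X = np.array([[100, 0.01], [110, 0.02], [95, 0.01], [105, 0.02], [400, 0.30]])

    detector = IsolationForest(contamination=0.2, random_state=0).fit(X)
    print(detector.predict(X))   # 1 = looks normal, -1 = flagged as an anomaly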


We use RNNs for voice keyword recognition.
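For context, the general shape of an RNN keyword spotter is: audio -> MFCC frames -> recurrent layer -> one score per keyword. A minimal, untrained sketch in PyTorch (sizes and keyword list are arbitrary; illustrative rather than our actual model):

    import torch
    import torch.nn as nn

    KEYWORDS = ["yes", "no", "stop", "go"]

    class KeywordRNN(nn.Module):
        def __init__(self, n_mfcc=40, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(input_size=n_mfcc, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, len(KEYWORDS))

        def forward(self, x):              # x: (batch, time, n_mfcc)
            _, h = self.rnn(x)             # h: (1, batch, hidden)
            return self.out(h[-1])         # logits over keywords

    model = KeywordRNN()
    fake_clip = torch.randn(1, 80, 40)     # 80 MFCC frames of 40 coefficients
    print(model(fake_clip).shape)          # torch.Size([1, 4])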


Slightly tangential, but how do you collect training data for the AI/ML models you are developing?


We are using machine learning to identify software as benign software or malware for customers.


Sift's product is based on ML.


Lots of KYC (know-your-customer) things, like fraud, AML (anti-money laundering) and CTF (counter-terrorist financing). Helps with finding new patterns.


I run a company that specializes in design & implementation of kick-ass ML solutions [1]. We've had successful projects in quite a few industries at this point:

LEGAL INDUSTRY

Aka e-discovery [2]: produce digital documents in legal proceedings.

What was special: stringent requirements on statistical robustness! (The opposing party can challenge your process in court -- everything about the way you build your datasets or measure the production recall has to be absolutely bulletproof.)

IT & SECURITY

Anomaly detection in system usage patterns (with features like process load, frequency, volume) using NNs.

What was special: extra features from document content (type of document being accessed, topic modeling, classification).

MEDIA

Built tiered IAB classification [3] for magazine and newspaper articles.

Built a topic modeling system to automatically discover themes in large document collections (articles, tweets), to replace manual taxonomies and tagging, for consistent KPI tracking.

What was special: massive data volumes, real-time processing.

REAL ESTATE

Built a recommendation engine that automatically assembles newsletters, and learns user preferences from their feedback (newsletter clicks), using multi-armed bandits (a toy sketch appears at the end of this comment).

What was special: exploration / exploitation tradeoff from implicit and explicit feedback. Topic modeling to get relevant features.

LIBRARY DISCOVERY

Built a search engine (which is called "discovery" in this industry), based on Elasticsearch.

What was special: we added a special plugin for "related article" recommendations, based on semantic analysis on article content (LDA, LSI).

HUMAN RESOURCES (HR)

Advised on an engine to automatically match CVs to job descriptions.

Built an ML engine to automatically route incoming job positions to a hierarchy of some 1,000 pre-defined job categories.

Built a system to automatically extract structured information from (barely structured) CV PDFs.

Built an ML system to build "user profiles" from enterprise data (logs, wikis), then automatically match incoming help requests in plain text to domain experts.

What was special: used Bayesian inference to handle knowledge uncertainty and combine information from multiple sources.

TRANSPORTATION

Built a system to extract structured fixtures and cargoes from unstructured provider data (emails, attachments).

What was special: deep learning architecture on character level, to handle the massive amount of noise and variance.

BANKING

Built a system to automatically navigate banking sites for US banks, and scrape them on behalf of the user, using their provided username/password/MFA.

What was special: PITA of headless browsing. The ML part of identifying forms, pages and transactions was comparatively straightforward.

--------------

... and a bunch of others :)

Overall, in all cases, lots of tinkering and careful analysis to build something that actually works, as each industry is different and needs lots of subject-matter expertise. The dream of a "turn-key, general-purpose ML" is still a ways off, recent AI hype notwithstanding.
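To make at least one of these concrete: the multi-armed bandit newsletter engine under REAL ESTATE above boils down to something like this Thompson-sampling toy (arms, priors and feedback are made up; the real system is naturally more elaborate):

    # Each newsletter item is an "arm", clicks are successes, and we trade
    # off exploring new items against exploiting known good ones.
    import random

    arms = {"listing_a": [1, 1], "listing_b": [1, 1], "listing_c": [1, 1]}  # [clicks+1, skips+1]

    def pick_item():
        # sample a plausible click rate for each item, pick the best draw
        return max(arms, key=lambda a: random.betavariate(*arms[a]))

    def record_feedback(item, clicked):
        arms[item][0 if clicked else 1] += 1

    item = pick_item()
    record_feedback(item, clicked=True)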

[1] http://rare-technologies.com/

[2] https://en.wikipedia.org/wiki/Electronic_discovery

[3] https://www.iab.com/guidelines/iab-quality-assurance-guideli...


We use ML for recommendation systems (I work at a Classifieds company)


PCB autorouting


It strikes me that you could do this with an algorithmic approach - is there some additional factor when building PCBs that's specifically hard?

Is this one of those things like the bin packing problem [1], where at first glance you'd expect it to have a definitive solution but it's actually deceptively hard?

[1] https://en.wikipedia.org/wiki/Bin_packing_problem


Can't say for what/where, but yes. We use it to super-scale the work of human analysts who evaluate the quality of some stuff.



