Machine Learning Meets Economics, Part 2 (mldb.ai)
109 points by nicolaskruchten on April 20, 2016 | 16 comments



So the Sell/Check/Recycle model only requires 33% of the labor compared to the Check-only model. The author suggests that this means tripling production would be possible, but that depends on QA being the factory's bottleneck. If QA isn't the bottleneck, then you might as well fire 2/3 of Quinn's QA workers. Hooray, the computer didn't take my job, but it took the jobs of the guy to my right and the gal to my left.
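To make the labor math concrete, here's a minimal sketch of the two-threshold routing the article describes; the thresholds and model scores below are invented for illustration:

    # Items the model is confident about skip human QA entirely; only the
    # ambiguous middle band gets checked, which is where the savings come from.
    def route(p_good, sell_above=0.9, recycle_below=0.1):
        if p_good >= sell_above:
            return "sell"
        if p_good <= recycle_below:
            return "recycle"
        return "check"

    scores = [0.99, 0.95, 0.50, 0.70, 0.03, 0.92]  # hypothetical model outputs
    decisions = [route(p) for p in scores]
    check_share = decisions.count("check") / len(decisions)
    print(decisions, f"-> human QA handles {check_share:.0%} of units")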


I came to the same conclusion: AI will increase efficiency, so a company will need fewer workers. AI-augmented humans might be better than AI or humans alone, but the result is the same: loss of human jobs. I have to say, though, that most automation these days increases efficiency for companies but passes the work on to the end user. With true AI that will probably change.


Hopefully these people get another job. After all, they have proven they can reliably show up for work every single day and cope with the corporate chores. It would be a waste not to employ them for some other work. That's what entrepreneurship is about.


Just a point of discussion:

> “Exactly,” agreed Danielle, “and just like with the previous project, the model will get some wrong, but this will be outweighed by the cost savings of not having to check every single gadget.”

And that's how we consumers go from being able to receive a 100% reliable product every time (i.e. "Quinn's team works very hard to avoid penalties") to needing to go through the hassle/delay/cost of returns. Sure, the QA department "has to pay" for the return, but does that economic model accurately include the cost of lost goodwill?
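To put a number on that question, here's a toy per-unit cost comparison; every figure below is made up, and the point is just that the goodwill term has to appear somewhere in the model:

    check_cost = 0.50      # human QA per gadget
    defect_rate = 0.02     # fraction of gadgets that are bad
    miss_rate = 0.10       # fraction of defects the model lets through
    return_cost = 5.00     # direct cost of processing a return
    goodwill_cost = 20.00  # assumed cost of lost customer goodwill per return

    cost_check_everything = check_cost
    cost_model = defect_rate * miss_rate * (return_cost + goodwill_cost)
    print(f"check everything: ${cost_check_everything:.2f}/unit")
    print(f"model, goodwill included: ${cost_model:.3f}/unit")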


Well, I'd object to the idea that even a careful human team in production can hit 100%. In domains where the human "gold standard" is somehow falsifiable (that is, not tautologically correct as in some judgment call situations), it always ends up being a numbers game until the humans can no longer provide 100%.

It's kind of frustrating when you're trying to sell ML-based solutions to a skeptic. I've found that executives will often try to poke holes in the predictions, especially if the ML solution is risky or potentially threatening to them.

It helps a lot to frame things with a known human error rate and cost, as the Data Scientist in the story does, because then the conversation becomes win-win (how do we optimize for best outcomes) rather than unwinnable (why isn't your fancy ML algorithm right about this example X which I can plainly see myself is wrong??).
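Concretely, the framing that works is an expected-cost comparison rather than an argument about individual predictions. A hypothetical example, with all rates and costs invented:

    def expected_cost(inspect_cost, error_rate, cost_per_error, defect_rate=0.02):
        # cost of inspection plus expected cost of defects that slip through
        return inspect_cost + defect_rate * error_rate * cost_per_error

    human = expected_cost(inspect_cost=0.50, error_rate=0.05, cost_per_error=25.0)
    model = expected_cost(inspect_cost=0.01, error_rate=0.10, cost_per_error=25.0)
    print(f"human QA: ${human:.3f}/item   model: ${model:.3f}/item")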


> I've found that executives will often try to poke holes in the predictions, especially if the ML solution is risky or potentially threatening to them

In particular, automation that reduces headcount reduces their justifiable budget and therefore their power within the firm, salary and benefits, and external status. For an example of the latter, Harvard Business School asks how many people you currently manage when you apply for an MBA.

This creates a strong incentive to block any attempt at automation or increased efficiency, especially when the resulting inefficiency is not reflected in the KPIs used to gauge the executive's performance. Customer satisfaction and error rates are rarely measured well, nor with a refresh rate high enough to serve as such a KPI. Blocking is easiest to do by seeding mistrust in the person attempting to build the automation, and in the automation itself.

Part of being an effective data scientist/big data engineer/whatever the buzzword du jour is consists of figuring out which KPIs the executive wants to maximise and selling him on those instead. The good old "work on making your boss look good".


Well, as rlucas says, humans aren't 100% either, not to mention some testing is inherently destructive (consider pharmaceuticals) in which case 100% sampling is impossible.

Of course this has been understood for a while: cf https://en.wikipedia.org/wiki/Acceptance_sampling . In fact there's a whole ISO standard (2859) on how to do proper statistical sampling.
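For anyone curious, a single-sampling plan of the kind ISO 2859 tabulates boils down to binomial arithmetic. A sketch with made-up plan parameters:

    from math import comb

    def p_accept(defect_rate, n=80, c=2):
        # Probability that a random sample of n items contains at most c
        # defects, i.e. that the lot is accepted under the (n, c) plan.
        return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
                   for k in range(c + 1))

    for p in (0.01, 0.03, 0.05, 0.10):
        print(f"true defect rate {p:.0%}: lot accepted {p_accept(p):.1%}")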


The main gist of the article is that you can use ML to catch "slam dunk" categories and give people the harder tasks.

Ten years ago or more, lumber mills started using computer techniques to suggest optimal cuts (which dimensions you can get out of a given log, understanding that larger-dimension lumber has a higher sale value per unit mass), flashing the suggested cuts to an operator for review/approval; the suggestions are only rarely overridden.
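The mills' actual optimizers are proprietary, but the flavor of the problem is close to the textbook rod-cutting dynamic program. A toy version with invented lengths and prices (real mills optimize in more dimensions, around knots and defects):

    from functools import lru_cache

    price = {1: 1, 2: 5, 3: 8, 4: 9, 6: 17, 8: 20}  # hypothetical price per length

    @lru_cache(maxsize=None)
    def best_value(length):
        # Best total sale value obtainable from a log of this length.
        if length == 0:
            return 0
        return max((price[cut] + best_value(length - cut)
                    for cut in price if cut <= length), default=0)

    print(best_value(10))  # -> 27, from cuts 2 + 2 + 6 in this toy price table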


> which only rarely is overridden

So what's holding them back from replacing the cutter with a machine as well?


Maybe they like the people cutting. Maybe the people do other jobs as well, like moving the wood in or out?


rounding error?


Sure, it's true that a given technology by itself doesn't lead to job losses. But this really doesn't scratch the surface of the way the labor market has been transforming.

A human being is remarkably flexible: they can do lots of things robots and computers can't, and can quickly teach themselves to do things it would take a long time to program a computer to do.

If a human being is also really cheap, and they are in many places, then a human being is a really good deal. And the effect of automation and (even more) the globalization of work is that it reduces the marginal price of labor - there's still demand for people, just at a lower price - and, being flexible, people have accommodated.

But when life is cheap and work is constant, it degrades all those things that make human society pleasant - except for the few with money to burn, and to some extent even for them.


Unfortunately the parameters of the algorithms, which are set by humans, end up being just as flawed. In the earlier example regarding the discarding of widgets, no cost was attributed to discarding them. Where do they go? Why is waste disposal free (or virtually free)? The environment is a finite place with tremendous monetary value. The fact that businesses are able to profit off the destruction of the environment (in this example, throwing away widgets that might not work) is a fundamental flaw in the economic formula. How can one discuss efficiency while failing to factor in the destruction of a valuable, non-renewable, finite resource?
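Sketching that objection as arithmetic: give disposal a nonzero price and the cheapest action for a borderline widget can flip. All numbers below are hypothetical:

    def net_cost(p_good, disposal, unit_value=2.00, check=0.50, bad_penalty=25.0):
        return {
            # ship unchecked: earn on good units, pay the penalty on bad ones
            "sell": (1 - p_good) * bad_penalty - p_good * unit_value,
            # inspect: sell the good units found, pay disposal on the bad ones
            "check": check + (1 - p_good) * disposal - p_good * unit_value,
            # discard outright: pay disposal, forgo any revenue
            "recycle": disposal,
        }

    for disposal in (0.00, 0.60):
        costs = net_cost(p_good=0.20, disposal=disposal)
        best = min(costs, key=costs.get)
        print(f"disposal=${disposal:.2f}: cheapest action is {best}")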


Not that I'm a futurist, but I predict that the 2020s will be the decade when AI becomes unsettling (beyond image recognition, which is already becoming unsettling), and that the 2030s will be the decade when AI has serious economic consequences.


Curious word choice: "economic consequences"

What do you imagine they will be? What about economic benefits?


Think of DeepDream x 1000. First, start with self-driving cars and semis cutting out taxi and freight services. Then, move on to musicians and composers no longer being hired to do mundane (90%) work, or video game levels being procedurally generated with intelligent design. From there, go on to everything that could be automated today, but requires a little bit of human ingenuity.

Economic transactions will start to look very different than what's normally accepted today.




