
The decisive moment for AI will come when a program can run a corporation better than humans. That's not an unreasonable near-term achievement. A corporation is a system for maximizing a well-defined metric. Maximizing a well-defined metric is what machine learning systems do well.

If a system listened in on all communication within a company (traffic patterns, sentiment, who responds to whom and how fast, and what customers, customer service, and sales are all saying and doing), it would generate more data than a CEO could ever process. Somewhere in there are indicators of what's working and what isn't. That may be the next phase after processing customer data, which has largely been mined out.
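Concretely, something like this toy sketch; the Message fields and the keyword "sentiment" are made up for illustration, and a real system would use actual NLP rather than a word list:

    # Crude per-sender indicators a human could never compute company-wide.
    # All names here are hypothetical, not any real product's API.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        recipient: str
        timestamp: float   # seconds since epoch
        text: str

    NEGATIVE_WORDS = {"blocked", "delay", "churn", "escalate", "refund"}

    def team_signals(messages):
        reply_gaps = defaultdict(list)   # sender -> response latencies
        negativity = defaultdict(int)    # sender -> negative-word count
        last_seen = {}                   # (a, b) -> time a last messaged b
        for m in sorted(messages, key=lambda msg: msg.timestamp):
            # Response latency: how fast does m.sender answer m.recipient?
            key = (m.recipient, m.sender)
            if key in last_seen:
                reply_gaps[m.sender].append(m.timestamp - last_seen[key])
            last_seen[(m.sender, m.recipient)] = m.timestamp
            # Toy sentiment: count negative keywords per sender.
            negativity[m.sender] += sum(w in NEGATIVE_WORDS
                                        for w in m.text.lower().split())
        return reply_gaps, negativity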

If this starts working, and companies run by algorithms start outperforming ones run by humans, stockholders will put their money into the companies that perform better. The machines will be in charge.

This is perhaps the destiny of the corporation.




> The decisive moment for AI will come when a program can run a corporation better than humans. That's not an unreasonable near-term achievement. A corporation is a system for maximizing a well-defined metric. Maximizing a well-defined metric is what machine learning systems do well.

Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."

As soon as an AI CEO starts trying to maximize a single, well-defined metric, the humans working there will start finding ways to satisfy that metric at the expense of everything else. There is no single measure, no matter how well-defined, that will remain a good measure in perpetuity as long as humans are capable of interpreting it in the light of their own self-interest.
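A toy example of the dynamic, with made-up numbers: suppose the AI CEO can only see a proxy metric (say, tickets closed) that employees can game, while the true goal is customer value:

    # Goodhart's Law in miniature. Employees split 10 units of time
    # between real effort and metric gaming; the numbers are invented.
    def true_value(effort, gaming):
        return effort                  # only real work helps customers

    def proxy_metric(effort, gaming):
        return effort + 3 * gaming     # gaming pays off 3x on the metric

    best = max(
        ((e, g) for e in range(11) for g in range(11) if e + g <= 10),
        key=lambda p: proxy_metric(*p),
    )
    print("optimizer picks effort=%d, gaming=%d" % best)  # effort=0, gaming=10
    print("proxy:", proxy_metric(*best), "true value:", true_value(*best))

The optimizer happily reports a proxy score of 30 while the true value sits at zero.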


How about profit as a well-defined metric?


There are plenty of examples of corporations run into the ground in the name of profit.


Doing something in the name of profit is not the same as actually having profits.


You can also make crappy decisions that increase profit in the short term but burn out your employees and run the company into the ground in the long term.


I didn't say "short term profits".


And having something as a goal is not the same as achieving it.


A metric is supposed to be applied to results.


I've been saying AI won't be Skynet launching the nukes; it will be a corporation maximizing productivity with complete disregard for human needs, heh


It will be painfully obvious within the decade that this is exactly what's happening right now. Corporations are AIs optimizing a single metric. They only use humans insofar as they need them to improve the metric. As they get better, automation improves and humans are needed less and less. There's a reason so many feel like cogs in a machine.


This already happened in the 19th century. That's why we need the state (which is another AI when you think about it, with some hardcoded rules and periodic review (elections)).


In a free market, a corporation has to please its customers in order to maximize profit.

For example, Apple.


Tobacco companies. Humans can be enticed by short-term pleasure at the expense of their long-term destruction.


People do a lot of things for short term pleasure that risk heavy long term consequences. Societies don't have the right to decide for others what their choices should be.

Maybe outlaw ice cream next? Or how about marijuana smoking? (Oops, already tried that.) How about loud music (long term hearing damage)? How about motorcycles?


> Societies don't have the right to decide for others what their choices should be.

If an AGI starts inventing new and wonderful ways in which we can destroy ourselves, and we are taken in by it, we will have to restrict that; it's not optional.

You can make the argument that, say, current hard drugs should be legal - but I don't think there's a way to defend the position that any possible future 'thing' should always be legal/permitted regardless of negative effect.


An awful lot of harm has been done to people by groups who are sure they know what is best for others, and who thereby feel justified in forcing it upon them.

In fact, likely much more harm than has been done by those with evil intent.


> in order to maximize profit

I'm wondering where this idea comes from (I'm not criticizing you specifically, but that maxim).

Do no investors value stability, longevity, ethical behavior, etc.?


I tend to invest in companies whose products and behavior I like. It's not terribly surprising that they've done well - pleasing customers is good business. I've dumped stock in companies that began a business model of suing their customers to make money - and those companies (again unsurprisingly) tilted down.

Buy companies that customers love, sell companies that customers do business with only because they have to.


Risk profile and time horizon are central considerations for investors, absolutely.

Sometimes a fast, go-big-or-go-home trajectory is what they're looking to invest in.

The way to put ethics in a language investors understand is to price in externalities.


In doing so it often does things that customers wouldn't like if they knew about them.

For example: child labour, slave labour, destroying the environment, causing cancer, etc.


Apple is a net negative from my perspective; they contribute negatively on balance.


Nobody makes you buy an Apple product.


> A corporation is a system for maximizing a well-defined metric.

Interesting theory. My perspective is that once a corporation thinks of itself that way, it's already going downhill. Once a measure becomes a target, it ceases to be a good measure.

Now I assume you're imagining the well-defined metric to be profit, in an attempt to dodge the aforementioned problem. However, even leaving aside the problem of short- vs. long-term profit, I don't think that metric has a sufficiently well-defined link with actions to be optimized with anything we can build in the near term. There's too much "common sense" involved that, AFAICT, we still don't have any idea how to handle.

Edit: I guess I sort of did the thing the OP told me not to. But I still don't think the ability to phrase the problem of running a corporation as "optimize profit" makes it likely to be accomplished sooner than any other AGI application.


But surely a general intelligence would recognize that maximizing short-term profit at the expense of long-term profits has its own set of trade-offs. And indeed, that common sense or “je ne sais quoi” would also reasonably indicate general intelligence.

I think the “running a corporation” problem is actually a great example of how to measure a general intelligence, not only because it has well-defined success criteria but also because of the very open-ended and general means by which it must be accomplished. Running a corporation requires communication, setting goals, creativity, number crunching, and lots of soft skills like branding or understanding how consumers/customers will react to things. But there is nothing uniquely human about it except that it needs to be able to make money from humans. Presumably if I’m paying for some well-defined service like Search Ads I don’t particularly care if the corporation is run by a robot.

I do agree though that there is a bit of ambiguity here on what “run a corporation” means. I assume that with a lot of thought I could write a very procedural script that somehow incorporates, purchases a property, and rents it through a property management group.
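Something like this very rough sketch, say; every function here is a hypothetical stub standing in for a filing service, a real-estate marketplace, and a property-management API:

    # A hedged sketch of that "very procedural script". All names and
    # numbers are invented for illustration.
    def incorporate(name):            # e.g. file via a registered-agent service
        return {"entity": name, "ein": "pending"}

    def purchase_property(budget):    # e.g. bid through a marketplace API
        return {"address": "123 Example St", "price": budget}

    def hire_property_manager(prop):  # e.g. contract a management group
        return {"property": prop["address"], "fee_pct": 8}

    def run_landlord_corp():
        corp = incorporate("AutoLandlord LLC")
        prop = purchase_property(budget=250_000)
        mgmt = hire_property_manager(prop)
        # From here on, rent collection and maintenance are delegated;
        # the "corporation" is just this function plus a bank account.
        return corp, prop, mgmt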


Investors would go nuts betting on AI corporations which burned short and bright with huge returns in the meantime.


The book Blockchain Revolution covers this well. It postulates a decentralized autonomous organization (a bot, basically) that controls its own money, can trade with other bots via smart contracts in cryptocurrency, and self-improves via machine learning. Its goal is to maximize profit, which it can do in various ways, both savory and unsavory to the human race.
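The core loop it describes is simple enough to sketch; all of these interfaces are invented for illustration, with `market` standing in for smart-contract trades and `policy` for the learned strategy:

    # A minimal sketch of the autonomous-organization idea from the book,
    # under assumed interfaces; nothing here is a real library.
    def run_dao(policy, market, treasury, steps=1000):
        for _ in range(steps):
            state = market.observe()
            trade = policy.decide(state, treasury)
            profit = market.execute(trade)       # settled via "smart contract"
            treasury += profit
            policy.update(state, trade, profit)  # self-improvement step
        return treasury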



