> In the sense that a corporation's maximum intelligence is likely to be in the same ballpark as the smartest humans, it can't be seen as a true superintelligence.
In what sense? Certainly the ability of large groups of people to research and create new technology exceeds that of any single individual. The R&D capabilities of a large corporation vastly exceed those of a lone hermit in the woods.
This also neglects that a corporation can spend millions of person-hours a year on a problem.
No individual line of code produced by a FAAMG is superhuman, but producing codebases of billions of lines of code is definitely outside the ability of any lone human. If you find yourself in court facing a corporation, you'll learn they can spend multiple human lifetimes learning relevant precedents, processing discovered evidence, and constructing their arguments.
It may not be runaway, ever-improving superhuman AI, but pretending a corporation is only as intelligent as the humans that make it up is missing the forest for the trees.
He breaks down efficiencies of large corporations (or really any organization) into two types of gains:
(1) gains from parallelism, and
(2) gains from synergy (e.g. people having better ideas by working in groups).
For (1) his argument is that while you get a much larger amount of work done, the maximum quality is just the max over the parallel units. E.g. if your R&D department consists of N scientists working independently, you'll get N times the work done, but the quality of each unit will be at best the quality of your best scientist.
For (2) his argument is essentially that the quality of ideas generated by human organizations is gated by the ability of people within the organization to recognize that an idea is good. He then does a simple simulation arguing that even if you have everyone in your company brainstorming great ideas, after a certain number of people, say 1000, you hit diminishing returns on how good an idea you can generate and recognize as good. His claim is that, in contrast, an AI is not constrained by this limitation.
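That "pick the max, gated by recognition" model is easy to sketch. Here's a toy simulation (my own construction, not taken from the video) where each person draws one idea quality from a standard normal distribution, and the organization can only credit quality up to some recognition ceiling:

```python
import random
import statistics

def best_recognized_idea(n_people, judge_ceiling=3.0, trials=200, seed=0):
    """Simulate n_people each drawing one idea quality from N(0, 1).

    The organization can only *recognize* quality up to judge_ceiling:
    ideas above that bound all look equally good to the evaluators,
    so the selected idea is credited at the ceiling at best.
    Returns the mean quality of the selected idea over many trials.
    """
    rng = random.Random(seed)
    picks = []
    for _ in range(trials):
        ideas = [rng.gauss(0, 1) for _ in range(n_people)]
        picks.append(min(max(ideas), judge_ceiling))
    return statistics.mean(picks)

# Returns diminish sharply as headcount grows, then flatten at the ceiling.
for n in (1, 10, 100, 1000, 10000):
    print(n, round(best_recognized_idea(n), 2))
```

Even without the recognition ceiling, the expected max of N normal draws grows only like sqrt(2 ln N), so brainstorming headcount alone buys you very little past a point; the ceiling just makes the flattening explicit.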
He doesn't seem to account for "standing on the shoulders of giants" effects where institutions build up knowledge over decades. That seems to me to imply a model similar to compound interest, at least over some time period before the corporation becomes stagnant and dies off.
Well, there is also a "quantity vs quality" aspect of tasks, akin to the difference between a group of house painters and a skilled artist. The single skilled artist would lack the throughput.
To some degree it's possible to "team stack" to boost quality above that of any individual member, but it's inefficient and tends to produce worse results than a smaller, more skilled team.
Of course, in the real world many tasks effectively require a mix of both: too much labor for one expert or a small group of experts, but too complex for an amateur horde to handle.
So I think the question is: when a group of people work together on something, can they generate ideas which are better than the best idea of any member of the group? If so, how much better? How does the effect scale as the size of the group increases?
An R&D lab certainly can do more work than any single person could do in their lifetime, but are the ideas fundamentally better? Can you point to an idea and say: "no person could possibly have thought of that"?
Q1: Yes and no. From what I've seen (anecdata), a group of people working together tends to generate better ideas than one person could. The "no" comes from the fact that oftentimes one person comes up with an idea they wouldn't have found without the other people.
Q2: Much better, usually in two ways: first, defending an idea forces you to improve it so that it gets accepted; second, if the idea is developed collaboratively, the multiple rounds of feedback improve it as it takes shape.
Q3: My knee-jerk reaction to this question was that you need to limit the number of people involved because communication is key: say 6-9 people. But on further thought, if your communication channels are set up well (forums, Slack, IRC, etc.), then you can tap the wisdom of the crowds, so maybe it would scale.
Q4: If the R&D lab is set up right, yes, you could get fundamentally better ideas, for various reasons: facilities, collaboration, standing on the shoulders of giants (by which I mean building on what others have already built), etc.
I'd say yes. Do you think the ideas that came from the Manhattan Project didn't exceed that of the best individual involved?
Or for a more extreme example, consider the totality of human civilization as an instance of groups of humans working together. People like Einstein and Hawking wouldn't have been able to have the ideas they did if they were dropped in 20,000 BC. The knowledge we have now isn't simply the work of individual animals with extraordinary thoughts; it's the result of large groups of these animals working together.
Now certainly the benefits in corporate R&D labs don't come anywhere close, but I find it hard to believe that they simply drop to zero.
In the video linked above, Robert Miles models organizational idea-generation as something like "everyone thinks of their best idea, and then we just pick the best from that set". That's definitely a simplification. I do think there's some notion of "synergy", where people working together can hammer an idea into shape more effectively than any of them could alone.
However, I suspect that there's also a counter-force, a "diminishing returns" effect: as more people get involved in an organization, coordination becomes more difficult, communication becomes more expensive. If this converges to some limit, then that's the smartest a human organization can be.
Maybe better methods of organizing knowledge could help raise that limit, or more effective modes of communication, but my suspicion is that the upper bound for "organizational decision making capability" is within one or two orders of magnitude of a single person, not vastly more. No idea how we'd actually quantify that, of course!
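One crude way to put numbers on that intuition is Brooks-style pairwise communication overhead: if every extra member adds channels that each eat a sliver of capacity, effective capacity peaks and then falls. A toy sketch (the channel-cost constant is an arbitrary assumption of mine, chosen only to make the shape visible):

```python
def effective_capacity(n, per_person=1.0, channel_cost=0.002):
    """Toy model: n people each contribute per_person units of work,
    but each of the n*(n-1)/2 pairwise communication channels consumes
    channel_cost units of capacity in coordination overhead."""
    channels = n * (n - 1) // 2
    return max(0.0, n * per_person - channel_cost * channels)

# Capacity rises, peaks (with these constants, around n = 500),
# then collapses to zero as coordination swamps the gains.
for n in (10, 100, 500, 1000, 2000):
    print(n, effective_capacity(n))
```

The specific peak is an artifact of the made-up constants, but the qualitative claim matches the argument above: any fixed per-channel cost implies a finite ceiling on how much an organization can gain by adding people.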
R&D labs aren't necessarily restricted to a single research group with a unique, well-defined goal. Depending on their organization, they can easily integrate more or less isolated units of self-directed or loosely directed research driven by single individuals or small teams. Properly managed, I can see this kind of heterogeneity of agents performing at a level no single human ever could. There's an analogy with boosting to be made somewhere, but I don't care to flesh it out right now.