He breaks down efficiencies of large corporations (or really any organization) into two types of gains:
(1) gains from parallelism, and
(2) gains from synergy (e.g. people having better ideas by working in groups).
For (1) his argument is that while you get a much larger amount of work done, the maximum quality is just the max over the parallel units. E.g. if your R&D department consists of N scientists working independently, you'll get N times the work done, but the quality of any single result will be at best the quality of your best scientist.
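A toy simulation of that point (my own construction, not from the video): model each scientist's project quality as an i.i.d. draw in [0, 1). Total throughput grows linearly with headcount, while the best result saturates at the ceiling set by the best possible scientist.

```python
import random

random.seed(0)

def parallel_gains(n):
    """n scientists work independently; each project's quality is an
    i.i.d. uniform draw in [0, 1). Throughput adds up across units,
    but quality only takes the max over them."""
    qualities = [random.random() for _ in range(n)]
    return sum(qualities), max(qualities)

for n in (1, 10, 100, 1000):
    total, best = parallel_gains(n)
    print(f"n={n:5d}  work={total:7.1f}  best quality={best:.3f}")
```

Work done scales roughly as n/2 here, while best quality creeps toward 1 and never passes it, which is the "max over the parallel units" point.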
For (2) his argument is essentially that the quality of ideas generated by human organizations is gated by the ability of people within the organization to recognize that an idea is good. He then does a simple simulation where he argues that even if you have everyone in your company brainstorming great ideas, after a certain number of people, say 1000, you hit diminishing returns on how good an idea you can both generate and recognize as good. His claim, in contrast, is that an AI is not constrained by this limitation.
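A sketch of that diminishing-returns argument under my own assumed model (the video's simulation may differ): idea quality and recognition ability are both i.i.d. standard normal draws, and the organization can only act on an idea as good as its best judge can recognize, so effective quality is min(best idea, best judge). Both maxima grow only like the square root of log headcount, so gains flatten quickly.

```python
import random

random.seed(1)

def best_recognized_idea(n_people, trials=1000):
    """Everyone both proposes and judges ideas. Effective output is
    capped by the best judge's ability to recognize a good idea:
    quality = min(best idea generated, best recognizer available)."""
    acc = 0.0
    for _ in range(trials):
        best_idea = max(random.gauss(0, 1) for _ in range(n_people))
        best_judge = max(random.gauss(0, 1) for _ in range(n_people))
        acc += min(best_idea, best_judge)
    return acc / trials

for n in (10, 100, 1000):
    print(f"{n:5d} people -> mean actionable quality {best_recognized_idea(n):.2f}")
```

Going from 100 to 1000 people buys noticeably less than going from 10 to 100, even though headcount grows by the same factor.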
He doesn't seem to account for "standing on the shoulders of giants" effects where institutions build up knowledge over decades. That seems to me to imply a model similar to compound interest, at least over some time period before the corporation becomes stagnant and dies off.
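A minimal sketch of that compound-interest intuition (the growth and decay rates are made-up numbers for illustration, not estimates): institutional knowledge compounds while the organization is healthy, then erodes once it stagnates.

```python
def institutional_knowledge(years, growth=0.05, stagnation_year=40, decay=0.10):
    """Knowledge stock compounds at `growth` per year until the
    institution stagnates at `stagnation_year`, then decays at
    `decay` per year. All rates are assumptions, not estimates."""
    k, history = 1.0, []
    for year in range(years):
        k *= (1 + growth) if year < stagnation_year else (1 - decay)
        history.append(k)
    return history

h = institutional_knowledge(60)
for y in (9, 19, 39, 59):
    print(f"year {y + 1:2d}: knowledge = {h[y]:.2f}")
```

Under these assumed rates, knowledge roughly septuples over four decades before stagnation wipes most of it out, i.e. "standing on the shoulders of giants" dominates for a long window even if the institution eventually dies off.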
This is covered in the video.