Great to see this posted here! DuckDB is an integral part of an in-browser data analytics tool that I've been working on. It compiles to WASM and runs in a web worker. Queries against WASM DuckDB regularly run 10x faster than the original JavaScript implementation!
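For anyone curious what that setup looks like, here's a minimal sketch (not the commenter's actual tool; just the typical @duckdb/duckdb-wasm bootstrap with the jsDelivr-hosted bundles, roughly following the package README):

    import * as duckdb from '@duckdb/duckdb-wasm';

    async function initDuckDB(): Promise<duckdb.AsyncDuckDB> {
      // Pick the WASM bundle best suited to the current browser.
      const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
      // Wrap the worker script in a same-origin blob so the browser will load it,
      // then boot DuckDB inside that web worker.
      const workerUrl = URL.createObjectURL(
        new Blob([`importScripts("${bundle.mainWorker!}");`], { type: 'text/javascript' })
      );
      const worker = new Worker(workerUrl);
      const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker);
      await db.instantiate(bundle.mainModule, bundle.pthreadWorker);
      URL.revokeObjectURL(workerUrl);
      return db;
    }

    async function demo() {
      const db = await initDuckDB();
      const conn = await db.connect();
      // Queries run off the main thread and come back as Arrow tables.
      const result = await conn.query('SELECT 21 * 2 AS answer');
      console.log(result.toArray());
      await conn.close();
    }

    demo();

All the heavy lifting stays in the worker, so the UI thread never blocks on a query.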
It's amazing to see Rust used so much even in web projects. It's my favorite language, and yet I don't want to use it for web programming anymore. I've used it for too many "real" web apps (I mean WebRTC signaling and WebSockets) to go through the pains of optimization.
But it's still fun to work with.
Heh, look up "vacuum cannon" on youtube. The tunnel would need to be slowly depressurized, otherwise anything in it would turn into a bullet when the atmosphere rushes in.
This also applies to any sort of failure that compromises the vacuum. A dipshit with a rifle could kill anybody in the system in a fraction of a second. Burying the tubes is probably the only way to avoid this, but thankfully the whole thing is vaporware anyway.
You don't need a full vacuum. Even half an atmosphere would greatly reduce air resistance and allow for 4x the velocity possible at 1 atm.
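For reference, the standard quadratic drag relation (my addition, assuming drag-dominated resistance and everything else held constant) is

    F_d = \tfrac{1}{2}\,\rho\, C_d\, A\, v^2

so halving the air density rho halves the drag at any given speed, and at equal drag force the achievable speed scales as 1/sqrt(rho).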
Humans can survive (uncomfortably) at .5 atm and you eliminate the vacuum cannon problem simply by putting pressurization valves periodically throughout the tube.
Hyperloop is not a full vacuum, only a partial vacuum, if I'm not mistaken. I don't know what kind of difference that makes for the pressure, but I assume it means the tubes are not airtight and actually breathe.
Hyperloop requires 99.99% of the air to be removed from the tube, so it is effectively a complete vacuum from the lethality/safety and structure/pressure perspectives.
I am not sure what you mean by "the tubes are not airtight and actually breathe", but they definitely are not air permeable.
Hold up, you can't just throw out a claim like "many animals are sentient" as if it's a statement of fact. You might be right, but there's a reason that "the hard problem of consciousness" is hard. We don't really have any way to distinguish sentience/non-sentience based on behavior. The whole concept is extremely mushy.
You're right, but as I've stated before, sentience and consciousness are different terms, and sentience has a definition under which the idea that animals are sentient isn't all that controversial. It's not a mathematical axiom, sure, but it all depends on what you mean by sentience, and I'm going by the classic definition.
The illiteracy rate in the US is really quite incredible, with the federal government reporting something like 20% of the population as having "low literacy".
Wow - I really thought the first gp in this thread must be wrong, and then researched the UK and we are similar - around 20 to 25 percent. Blows my mind - it appears the levels haven't changed much since the 40s in the UK - which shocked me.
To paraphrase: yes, corporations function as agents, but their maximum performance is limited by the capabilities of their employees. A corporate AI may have a much broader set of skills than any single person, and may be able to tackle many concurrent tasks, but as an "intelligent agent" its decision making capabilities probably don't scale exponentially (or even linearly) with its headcount. In the sense that a corporation's maximum intelligence is likely to be in the same ballpark as the smartest humans, it can't be seen as a true superintelligence.
That's fair. I interpreted the comparison as being between "AI implemented as a property of human organizations" and "AI implemented as a powerful search algorithm". While corporations can certainly be dangerous, they're still made of people and thus are unlikely to want to do things like "convert the entire mass of the solar system into dollar bills" (and even if they did, they'd have a hard time doing it). A sufficiently powerful search algorithm would find all sorts of bizarre ways of satisfying its goals. The point is that the scope of the risk is quite different when talking about corporations (order of magnitude: screwing up the environment in pursuit of easy profit) vs "real" AI which, if granted agency, would have potentially unbounded risk.
Another way to look at it is with regards to capacity for self-modification. If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk. Does software have this restriction? We don't really know yet, but it's hard to point to exactly why it would.
How is "converting the entire mass of the solar system into dollar bills" conceptually different from "converting the habitability of Earth into dollar bills" or "converting the health problems of human beings into a maximal amount of dollar bills", which is precisely what many corporations _actually do_?
The "corporation as AI" metaphor isn't about some abstract future possibility, it's an explanatory mechanism for how the world is so thoroughly messed up _right now_.
Yeah, I agree. My original post was bringing up what I thought was an interesting comparison between hypothetical software AI and the corporations-as-AI metaphor presented in the talk.
Corporations _are_ acting as misaligned optimizers. Solving that problem is hugely important. However, the "AI" comparison breaks down somewhat when you start thinking about how we might actually fix the problem. With corporations, we (i.e., states) have tools that we can use to regulate bad actors. Software AI, however hypothetical at the moment, seems likely to be a different game altogether.
I don't know that we won't have tools, albeit different ones, to regulate a bad AI. AIs need more than just intelligence and agency. They also need effective ways to interact with and affect their environment. That boundary is where we are likely to develop tools to limit and regulate them.
If they are truly general AIs, then it's likely that their reaction to that limitation and regulation will be not dissimilar to a person's, but I see no reason to assume that limiting them will be impossible.
Sure! I don't see any reason why it would be impossible either, but the (hypothetical) problems are very interesting. Starting with the most basic problem of all: how do we even specify what we want the AI to do? The whole field of AI safety is trying to figure out a way to write rules that an agent wouldn't instantly try to circumvent, and to find some way to provide basic guarantees about the behavior of a system that is incentivized to do bad things (just like corporations are incentivized to find loopholes in the law, hide their misdeeds, and maximize profits at the expense of the common good).
That assumes an ideal corporation in the physics sense - real-world ones have "corruption" in the form of their employees pursuing their own agendas. Take sexual harassment, passing up talent out of bigotry, and office politics. They actively harm profitability and yet they exist at all levels.
The incentives are fundamentally what shape the systems including the corporations. Blaming corporations alone is a simplification - the same incentives converge to the same outcomes akin to how power vacuums are filled by warlords.
> They actively harm profitability and yet they exist at all levels.
> The incentives are fundamentally what shape the systems including the corporations.
These are key observations, and sadly I suspect there's a Prisoner's Dilemma style thing going on which makes corruption, sexual harassment, bigotry, and office politics somehow "rational" and individually maximising behaviours (for corporations, as well as the individuals they're composed of) whenever any of the competing corporations are known or suspected to be behaving corruptly.
Combined with the combative nature of thinking/reporting about corporate results - where, for example, FAANG stock price performances are compared to each other, assigning winners and losers, without any relevant incentives or accolades for the entire tech industry having grown the value of the sector as a whole. A corporation with a 15% YoY increase is deemed a "loser" if some of its competitors manage 20%.
And we've spent well over half a decade demonising anyone who criticises Capitalism - thus deeply entrenching incentives that are poor for society as a whole, but which have "less poor" outcomes for the corporations prepared to be most ethically barren.
Being generous and ethical is also a form of human corruption harming the corporate entity. From the corporation's hypothetical point of view, any time a person doesn't do exactly what the corporation needs, isn't the perfect mindless drone (or isn't creative in just the right inoffensive way), they're a cancer cell in the corpus. But I think this goes for red-tribe as well as blue-tribe thinking, as you posit.
After all, you'd be hard-pressed to argue that corporations, especially silicon valley corps, give more lip service to red tribesmen than blue. Maybe that's just because the blue tribe is the more powerful, and the corps are rightfully saying the magic words that allow them to keep their profits.
Or perhaps it's that your particular perspective is blinkered to too short a timeframe?
There's a very good argument to be made, I think, that "capitalism" is the biggest and most dangerous Ponzi scheme ever invented. Most people will happily participate, thinking "this is fine .gif", while untapped suckers/resources keep dropping enough "return" to early "investors", but when the house of cards collapses there will be no underlying foundation for the vast majority, and only the people at the very, very top of the pyramid scheme will ever actually have benefited at all.
Ever stopped to wonder why Musk and Bezos are so interested in going to Mars???
> its intelligence is fundamentally limited, and therefore so is the risk
Collective intelligences such as nation-states are capable of producing tools such as nuclear arms or biological weapons whose impact on society may be civilisation-ending.
It doesn't follow at all that a limit on intelligence implies a limit on risk. Collective intelligence is already high enough to produce threats that could wipe us off the planet without a problem. This is because most risks, once let loose on the world by someone intelligent enough, don't need to be intelligent in themselves to destroy humans. Viruses aren't intelligent at all.
Or, for a more mundane example, we're already in the process of slow-cooking the planet for some stock-market gains.
> If a corporation can't re-structure itself into being much smarter than the smartest human, its intelligence is fundamentally limited, and therefore so is the risk.
The assumption here is that risk correlates to intelligence. That doesn't seem to be borne out in history. Risk (the likelihood of bad outcomes) can be emergent, and arise from well-meaning, reasonably (not super) intelligent people operating within a simple framework.
> While corporations can certainly be dangerous, they're still made of people and thus are unlikely to want to do things like "convert the entire mass of the solar system into dollar bills"
I'm assuming this is satire, since that's apparently exactly the goal of corporations.
Just print a bill of a higher denomination and make your shareholders happy without destroying the solar system!
It's not the goal, it's the consequences of the rules of the game used to reach the goal.
If our economy were based on writing poetry, and corporations competed to write the most soul-enticing, profound poetry or art in general, the side effects wouldn't entail the depletion of resources and the ruining of the ecosystem. Surely we'd find other ways to make our fellow humans' lives miserable; perhaps all this art would make many suffer untold pains as the meaningfulness of our existence is unraveled, or whatever.
The rules of the game are the framework. The incentives are the driving force that pushes things down the gradient. The consequences are hard to predict. That holds for a wide range of rules and incentives.
In fact sub-par artificial intelligence is probably at least as dangerous as sub-par human intelligence, when we let it loose with significant power/authority in the real world.
(for supporting evidence to this claim, see "high frequency trading", "self driving cars", and "the leader of the free world" <smirk>)
It doesn't need to be super to do that; even bacteria are a type of von Neumann machine that can turn us into more of themselves.
An ASI could convince you that turning yourself into a paperclip is a fun and exciting new opportunity to liberate you from $PERSONAL_PAIN and finally allow you freedom to engage in $PERSONAL_FANTASY.
Bacteria are self-limiting. Their offspring compete and they don't reshape their environment to be more friendly to their existence. They don't plan for space colonization either. They're a bit of a toy model for a maximizer, not the real threat of an intelligent, self-modifying maximizer.
The only argument that seems to be brought forward every time is "if a superintelligent being wants to do stuff, we can't do anything about it because the axiom is that superintelligence means omnipotence". This is not so different from any other religious argument since it's impossible to falsify and therefore meaningless in any scientific sense.
Superintelligence doesn't mean omnipotence. The thing is bound to physics. It's just smarter than you, to some greater or lesser (usually imagined as greater) degree.
Just NO. Artificial Stupidity (AS) will never be able to perform at the level of Actually Seen Stupidity (ASS). No matter how much you try to improve AS, God will always create a better ASS. You will always be limited by priors to create an AS, but ASS will evolve naturally, plus if God ever decides to use science and apply a Directed Acyclic Tangent (DAT) well, DAT ASS would be unbeatable.
Intelligence (artificial or human) as a plague is an idea that has already spun out interesting storylines.
On a geological timescale, it's very likely that's what the historians of the hyper-intelligent shade of the colour blue will write about us:
"There was once a virulent disease emerging on a single watery planet 1/3rd of the way along a spiral arm in some uninteresting galaxy. Fortunately the Universe's immune system never even needed to take action. It was greedy and short-lived and starved itself of resources and died out before it had even metastasised much beyond its own gravity well; it barely made it to its next closest planet, and sent nothing out of that solar system's gravity well but a couple of primitive machines and all that daytime TV that was fashionable for a brief period a few Type-O main sequences ago..."
> In the sense that a corporation's maximum intelligence is likely to be in the same ballpark as the smartest humans, it can't be seen as a true superintelligence.
In what sense? Certainly the ability of large groups of people to research and create new technology exceed the ability of a single individual. The R&D capabilities of a large corporation are going to vastly exceed that of a lone hermit in the woods.
This also neglects that a corporation can spend millions of person-hours a year on problems.
No individual line of code produced by a FAAMG is superhuman, but producing codebases of billions of lines of code is definitely outside the ability of any lone human. If you find yourself in court facing a corporation, you'll learn they can spend multiple human lifetimes learning relevant precedents, processing discovered evidence, and constructing their arguments.
It may not be runaway, ever-improving superhuman AI, but pretending it is only as intelligent as the humans that make it up is missing the forest for the trees.
He breaks down efficiencies of large corporations (or really any organization) into two types of gains:
(1) gains from parallelism, and
(2) gains from synergy (e.g. people having better ideas by working in groups).
For (1) his argument is that while you get a much larger amount of work done, the maximum quality is just the max over the parallel units. E.g. if your R&D department consists of N scientists working independently, you'll get N times the work done, but the quality of each unit of work will be at best that of your best scientist.
For (2) his argument is essentially that the quality of ideas generated by human organizations is gated by the ability of people within the organization to recognize that an idea is good. He then does a simple simulation where he argues that even if you have everyone in your company brainstorming great ideas, after a certain number of people, say 1000, you get diminishing returns on how good an idea you can generate and recognize as good. His claim is then that, in contrast, an AI is not constrained by this limitation.
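To make the "max over parallel units" point concrete, here's a toy sketch of that kind of simulation (my own, not Miles's actual model): each person's best idea is a draw from a standard normal distribution, and the organization simply keeps the maximum.

    // Toy model (my own sketch, not Miles's simulation): each person contributes one
    // idea whose quality is a standard-normal draw; the org adopts the best one.
    function normalSample(): number {
      // Box-Muller transform for a standard normal sample.
      const u = 1 - Math.random();
      const v = Math.random();
      return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
    }

    function bestIdea(people: number): number {
      let best = -Infinity;
      for (let i = 0; i < people; i++) best = Math.max(best, normalSample());
      return best;
    }

    // Average the best idea over many trials for growing headcounts.
    for (const n of [1, 10, 100, 1000, 10000]) {
      const trials = 200;
      let sum = 0;
      for (let t = 0; t < trials; t++) sum += bestIdea(n);
      console.log(`${n} people -> best idea quality ~ ${(sum / trials).toFixed(2)}`);
    }

The expected maximum of n normal draws grows only like sqrt(2 ln n), so going from 1,000 to 10,000 people barely moves the number: diminishing returns, which is the gap he claims an AI wouldn't be subject to.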
He doesn't seem to account for "standing on the shoulders of giants" effects where institutions build up knowledge over decades. That seems to me to imply a model similar to compound interest, at least over some time period before the corporation becomes stagnant and dies off.
Well, there is also a "quantity vs quality" aspect of tasks, akin to the difference between a group of house painters and a skilled artist. The single skilled artist would lack the throughput.
It is possible to some degree to "team stack" to boost a larger group's quality above that of any individual member, but it is inefficient and tends to have worse results than a smaller, more skilled group.
Of course in the real world many things require a mix of both effectively - too much labor for one or a small group of experts but too complex for an amateur horde to handle.
So I think the question is: when a group of people work together on something, can they generate ideas which are better than the best idea of any member of the group? If so, how much better? How does the effect scale as the size of the group increases?
An R&D lab certainly can do more work than any single person could do in their lifetime, but are the ideas fundamentally better? Can you point to an idea and say: "no person could possibly have thought of that"?
Q1: Yes and no. From what I've seen (anecdata), a group of people working together tends to generate better ideas than one person could. The "no" comes from the fact that oftentimes one person came up with an idea that they wouldn't have found without the other people.
Q2: Much better, normally in two ways: first, having to defend an idea forces you to improve it so that it gets accepted; second, if the idea is collaborated on, the multiple rounds of feedback while forming it improve it.
Q3: My first knee-jerk reaction to this question was that you need to limit the number of people involved because communication is key: say 6-9 people. But on further thought, if your communication channels are set up well (forums, Slack, IRC, etc.), then you can tap the wisdom of the crowds, so maybe scale would work.
Q4: If the R&D lab is set up right, yes, you could get fundamentally better ideas, for various reasons: facilities, collaboration, standing on the shoulders of giants (by this I mean building on what others have already built), etc.
I'd say yes. Do you think the ideas that came from the Manhattan Project didn't exceed that of the best individual involved?
Or for a more extreme example - consider the totality of human civilization as an example of groups of humans working together. People like Einstein and Hawking wouldn't have been able to have had the ideas they did if they were dropped in 20,000 BC. The knowledge we have now isn't simply the work of individual animals with extraordinary thoughts, it's the result of large groups of these animals working together.
Now certainly the benefits in corporate R&D labs don't come anywhere close, but I find it hard to believe that they simply drop to zero.
In the video linked above, Robert Miles models organizational idea-generation as something like "everyone thinks of their best idea, and then we just pick the best from that set". That's definitely a simplification. I do think there's some notion of "synergy", where people working together can hammer an idea into shape more effectively than either could alone.
However, I suspect that there's also a counter-force, a "diminishing returns" effect: as more people get involved in an organization, coordination becomes more difficult, communication becomes more expensive. If this converges to some limit, then that's the smartest a human organization can be.
Maybe better methods of organizing knowledge could help raise that limit, or more effective modes of communication, but my suspicion is that the upper bound for "organizational decision making capability" is within one or two orders of magnitude of a single person, not vastly more. No idea how we'd actually quantify that, of course!
R&D labs aren't necessarily restricted to a single research group with a unique, well-defined goal. Depending on their organization they can easily integrate more or less isolated units of self- or loosely-directed research driven by single individuals or small teams. Properly managed, I can see this kind of heterogeneity of agents performing at a level no single human ever could. There's an analogy with boosting to be made somewhere, but I can't be bothered to flesh it out right now.
Personally I saw it as a semi-sarcastic equivalence noting the lack of differences and a note about the Chinese Room. It is like describing human capabilities as equivalent to a black box algorithm - we might not know how our brain parses 2 + 2 = 4 but we know that it calculates and must follow some sort of procedure.
By that logic, wouldn't any organization, team, or group of people doing things together be an AI?
Corporations aren't the only kind of grouping of people together for one or more purposes as a unit. This definition also seems to ignore the 'artificial' part of artificial intelligence.
If we're going to stretch the definition of AI that thin, wouldn't a school of fish or a flock of birds be considered 'AI'? Or at least an ant colony or a bee hive?
What about forests? They form vast networks of interdependent nodes through mycorrhizae that connect the roots of all the trees and shrubs in the forest. It's used to pass nutrients and information in the form of electrical and chemical signals throughout the network.
If I were to paraphrase the comparison, it might be something like:
"An AI is some constructed entity which, starting with a goal, uses its intelligence to make changes to the world in order to bring that goal to fruition. One of the things about AI that is worrying is that its system-of-values might have no relation whatsoever to those possessed by human beings, and the best way for it to accomplish its goals might be to do things that most people would consider quite nasty. AI systems that are trying to maximize something are in constant tension with the laws that we put in place to try to keep them from steamrolling over everything in pursuit of their goals. Corporations fit the bill, under the assumption that the goal is 'maximize shareholder value'."
While a nest of ants or a swarm of bees certainly exhibits interesting emergent behavior, perhaps even collective intelligence, they (1) aren't constructed and (2) don't have clear goals.
You could argue that smaller groups of people fit the bill too, I guess. Maybe the difference is that corporations tend to have distinct goals which outlive their members, and that they have special recognition under the law as independent entities. Not sure.
Bees and ants have a clear goal: to survive and thrive. Which is very similar to a company's actual goals. How they do it is ever-changing. Corporations don't survive by making the same product for hundreds of years, so whatever subgoals they have keep changing as well.
This is why standardized testing works against improving aggregate intelligence. We probably have not seen anything close to what human minds are capable of if nurtured to express their differences.
If we break AI apart and look at it: Artificial - human-made (as opposed to something emerging from nature).
Intelligence - the ability to acquire and apply knowledge and skills (oriented agency modulated by prior experience or observations).
Slightly tongue in cheek: in most countries, it seems, corporations gained the right to vote before many minorities (or even majorities) did [citation needed?]. If we assert that voting is the measure by which one affects political power...
Corporations have multiple decision-making nodes, each of which can have the intelligence level of a smart human, so more smart-human-level decisions can be undertaken by a corporation than by a single smart human. At the same time, corporations can process much more data, store more data, and do more interesting things with data than the smartest human alone could.