> We will discover new jobs - we always do after a technological revolution
CGP Grey put this well:
> Imagine a pair of horses in the early 1900s talking about technology. One worries all these new mechanical muscles (cars etc) will make horses unnecessary. The other horse reminds him that everything so far has made their lives easier - remember all that farm work? Remember running coast-to-coast delivering mail? Remember riding into battle? All terrible! These new city jobs are pretty cushy, and with so many humans in the cities there will be more jobs for horses than ever. Even if this car thingy takes off, he might say, there will be new jobs for horses we can't imagine.
> But you know what happened. There are still working horses, but nothing like before. The horse population peaked in 1915 - from that point on it was nothing but down.
> There's no law of economics that says that better technology makes more, better jobs for horses. It sounds shockingly dumb to even say that out loud. But swap horses for humans and suddenly people think it sounds about right.
It's fundamentally different. Horses might always want more and more things no matter how much they get. But they don't create demand in our economy, so we'll never know.
No matter how much humans get, they'll always want more. And other humans will have to come up with ways to meet that demand.
Sure - if we ever reach a time where the entire human race says "this is enough" and everything is automated, then there won't be any jobs. But we're a very long way from that point. In 2013, 60% of people on the planet didn't even have a toilet [1].
There are obvious problems. In the Western world, it certainly seems like we don't have enough jobs for workers without certain skillsets - or at least jobs that employers are willing to pay minimum wage for.
Indeed, it's true, horses don't create demand, but neither do unemployed humans.
Our economy isn't centred around humans, it's centred around capital. Unemployable humans are as useless to it as horses, as they can't exchange their labour with capital owners (or become capital owners themselves).
Unemployable people might want more, but will they get more? No, they won't.
I think there are two distinct ideas being discussed. Can people be unemployed because of structural, societal, or political reasons? Sure, and I think most people would agree. We have plenty of examples of that happening - depressions, recessions, mass refugees, etc.
But can people be unemployed because we run out of things for humans to do? That's what the video is arguing, but it won't be the case unless we've fully satisfied all of our desires. If we haven't, there are, by definition, still things to do.
This idea was particularly popular during the recession (the video was made on the tail end of it). It was common to hear that the high level of unemployment was because automation had taken our jobs or because Americans didn't have the skills for the jobs that were needed. But much of this was nonsense, and we saw employment gradually rebound as we moved out of the recession. Not only nonsense, but dangerous nonsense, since it leads people to entirely misunderstand a solvable problem.
> But can people be unemployed because we run out of things for humans to do?
You'll always have something that some other human wants human X to do. Whether that other person can pay human X enough money to make that activity their job is the question. And it seems plausible you'll reach a point where the answer will be no for a lot of people. You can call that "structural" if you wish, but it seems related to the marginal utility provided by a given person, and that goes down as one finds automated alternatives.
Anything that can be a job. Whatever people are willing to pay others to do, limited only by imagination.
And just from the definition of money, it's unlikely we'll find a scenario where people can't pay other people to do arbitrary labor. There will always be people that have a majority of wealth, however that's denoted.
It's not possible for civilization as a whole to run out of jobs or money. Both are human social concepts and can be re-imagined at any time.
Since the industrial revolution we've been living in a time of effectively limitless energy, without having considered the consequences of tapping that energy. Fossil fuels and other resources are running out - or we can at least see the end of them - and the consequences of their use are starting to bite hard. Realistically we're looking at a 2°C average global temperature rise this century, and the destruction of ecosystems due to that relatively rapid rise is going to be massive.
When the energy and resources that we've used to support a consumption economy grow more scarce, that economy will no longer work. We won't be able to get ourselves out of low employment by just consuming more and so making more jobs.
>There will always be people that have a majority of wealth, however that's denoted
That may be true but we can narrow the spread of wealth by a very large factor; and morally we should do so.
> both are human social concepts and can be re-imagined at any time
I think that's what's essentially being advocated (whether now or later): reimagining human social roles outside the wage/employment model (and perhaps distributing an "automation dividend" to ensure liquidity and market demand).
> I think there are two distinct ideas being discussed. Can people be unemployed because of structural, societal, or political reasons? Sure, and I think most people would agree. We have plenty of examples of that happening - depressions, recessions, mass refugees, etc.
> But can people be unemployed because we run out of things for humans to do? That's what the video is arguing, but it won't be the case unless we've fully satisfied all of our desires. If we haven't, there are, by definition, still things to do.
The question at hand though is exactly why is this true of humans but not horses?
Well, there's sleight of hand here on CGP Grey's part (and in other parts of the video as well), since horses are not doing work out of their own volition and to meet their own needs. Correct for that, and the same will be true for horses - they're not going to run out of work to do until they've fulfilled their desires.
Now, I'm talking about typical desires as they stand now. Obviously you can have people creating increasingly exotic and bizarre desires to the point where it's impossible to satiate them all ("I want to own Antarctica"). It's also possible to hit resources limits, but that's a separate problem from "running out of work because of automation."
Consider a case where person A builds widget X, and person B builds widget Y, and they trade with each other. Automation takes over, they lose their jobs. If they continue getting the widgets they used to trade for (which are now created automatically), they simply have the same wealth as before but more time. If they don't have the widgets, they're now detached from the new automated widgets both in production and consumption. There's no reason they couldn't carry on the same work as before and continue to trade with each other. This might require societal and political will (they might need some sort of support to reach their previous arrangement), but if the desire for their old work is there and not being met, then there is work to be done.
This works right up until the inputs for Widget X and Widget Y require capital.
Let's say Person A builds Widget X, and needs Input 1 to produce a Widget X. Person B builds Widget Y and needs Input 2 to produce Widget Y.
Person A uses their capital from their job to purchase Input 1 and produces Widget X. Person B uses their capital to purchase Input 2 and produces Widget Y. Person A and B trade widgets and are happy.
But now, as before, automation takes the jobs of Person A and Person B. They lose their jobs, but retain their desire for the opposite widget.
Person A no longer has the resources to acquire Input 1, and Person B can no longer acquire Input 2. Neither X nor Y gets produced, and both are left with desires that go unsatisfied.
There is no guarantee that new automation will continue to fulfill all the desires that were fulfilled previously.
> Consider a case where person A builds widget X, and person B builds widget Y, and they trade with each other. Automation takes over, they lose their jobs. If they continue getting the widgets they used to trade for (which are now created automatically), they simply have the same wealth as before but more time. If they don't have the widgets, they're now detached from the new automated widgets both in production and consumption. There's no reason they couldn't carry on the same work as before and continue to trade with each other. This might require societal and political will (they might need some sort of support to reach their previous arrangement), but if the desire for their old work is there and not being met, then there is work to be done.
I mean, sure this arrangement can happen. But why would it? Why would person A want to trade with person B when they can get their goods made by a robot cheaper and better?
> That's what the video is arguing, but it won't be the case unless we've fully satisfied all of our desires. If we haven't, there are, by definition, still things to do.
Obviously true, but that doesn't mean the things still left to do can be done by anyone.
In a world where there's not enough food or housing, everyone can be gainfully employed, since just about everyone can learn to plant seeds or cut wood.
In a world where all of our material desires are already met, plenty of people will still want e.g. new AAA-quality video games every year. But what are the people who aren't programmers or artists supposed to do?
That, and human population is still growing, meaning there are more and more humans who need jobs, yet as robots improve, the range of jobs left for humans shrinks. At some point, the lines will cross. Maybe it won't be soon, but if population keeps growing and robots keep being able to do more jobs, then eventually it will happen.
I just finished reading Bullshit Jobs. The author makes a good argument that because we decided working is a moral good, we are also great at making up things for people to do.
For example: the ballooning of admin jobs in academia that add no benefit to services rendered other than keeping other useless admins busy.
According to the book, when you remove the bullshit jobs and bullshit work most of us already work 20 hour weeks or less.
> According to the book, when you remove the bullshit jobs and bullshit work most of us already work 20 hour weeks or less.
That wouldn't be a problem if the BS was more or less evenly distributed. But unfortunately the distribution is highly uneven, so eliminating the BS, even if you resolve to not cut paychecks, would leave quite a few folks entirely unemployed (so no paycheck at all), and quite a few more significantly underemployed (so at high risk of their employer consolidating three 5h/week jobs into one 15h/week job).
And if that happens to enough people they can simply decide that they should get their share of the robots' output. Hopefully more people will agree with that before then.
> That's what the video is arguing, but it won't be the case unless we've fully satisfied all of our desires. If we haven't, there are, by definition, still things to do.
If wealth continues to become increasingly centralised, the wealthy might be able to have all their desires fully satisfied, while the poor won't have enough power to demand that - which potentially creates a vicious cycle.
Labour accounts for about half of world income, from memory, with capital accounting for the other half of world income. If wealth centralises, then so does income.
Besides, you replied to try and refute a comment which was quite unambiguously about wealth inequality. It's possible to occasionally admit you were wrong, rather than doubling down when you are.
In western economies, Piketty showed that equality peaked approximately in the 1950's, and has been dropping since. (Almost) no one recommends we have another world war to decrease the assets owned by the rich, but Piketty, like the original article, also recommends a wealth tax (this article includes the significant refinement that the tax be on corporations and not individuals; Piketty does not go into details for his wealth tax proposal).
If you disagree with these points, please cite evidence.
That’s largely an effect of bringing more regions of the world into the industrialized paradigm, whereas what the other commenter said describes what happens to people in industrialized countries.
> Continues? The world is more equal than it ever has been, at least since the invention of agriculture.
Sure, globally. But that's as a result of a rising floor beneath extreme poverty, ie. the percentage of people subsisting on less than $1.90 per day has been dropping.
Sure, that's only a part of the story, and a growing global middle class is nothing to sneeze at, but the reduction in poverty is projected to slow considerably over the next decade, with half a billion remaining in extreme poverty in 2030, 87% of whom will be concentrated in sub-Saharan Africa.
Meanwhile, social mobility up and out of the middle class is also slowing considerably in the various advanced economies, as is downward mobility from the upper class into the middle class, while downward mobility from middle to working class is growing.
I don't mean to diminish the progress that has been made so far, but it is starting to look likely that these successes are stagnating.
> horses don't create demand, but neither do unemployed humans.
Sure they do, although almost certainly nowhere near enough to sustain our current situation and the system it's built atop (from which the holders of capital seem to have benefited immensely), although perhaps this is exactly the point you were making.
It might not be recognizable to most people living in modernity today, but in general humans will always trade (by some means, not necessarily with "money") with other humans whenever relative specializations occur between them, which even in small groups is likely to happen naturally.
After all, it's not the exchange of paper bills or service to capital in particular that makes an economy, and there is always going to be demand of something and some supply of some of those things as long as humans are around and interacting, regardless of how miserable the existence is, or how "inefficient" such a system might be.
Indeed, horses, like most humans, do not create demand innately; they only do so insofar as their labour must be reproduced, which naturally is more for humans than horses because humans can negotiate wages, but not incredibly so for those at the bottom.
"It might not be recognizable to most people living in modernity today, but in general humans will always trade (by some means, not necessarily with "money") with other humans whenever relative specializations occur between them, which even in small groups is likely to happen naturally."
This is reductive. Yes, humans have always traded. But never before our current economic system has trade been the primary organizing principle of people's lives - this is a modern creation, indeed core to property itself.
"After all, it's not the exchange of paper bills or service to capital in particular that makes an economy, and there is always going to be demand of something and some supply of some of those things as long as humans are around and interacting, regardless of how miserable the existence is, or how "inefficient" such a system might be."
This is also reductive in the same sense. Traditionally in human society, demand and supply were not, for the majority of people, the very core of one's life - that is a modern invention. And that was only possible in the framework where labour became service to capital before service to oneself. Outside this set of relations of production, the problem of runaway unemployment isn't possible, because employment itself is not a major productive force. And that's precisely how humans got into a situation like that of workhorses - or rather how workhorses got into the situation of humans!
Well, actually around value - which is different from capital. It just so happens that we use capital to trade a lot (but not all) of what we consider to be valuable.
Even in a world of abundance there would be things that are scarce but we still consider valuable. Social status for example.
Certainly, our economy is not centred around abstract value. Value for whom? What kind of value? Does value rule the life of the average man, or does trading his labour for access to capital and a share of its proceeds describe the relation one has to the economy more accurately?
Trade is not what I'm discussing as an abstract concept, but instead how we structure our relationship to sustenance and work. That is centred around capital, not value - hence the word "capitalism". Which is not necessarily a bad thing, in some ways this is a good organization for things we care about, but it's not mainly about maximizing value for everyone, it came about very mechanistically.
The economy is centered around factors of production, which include labor and capital, both of which are value inputs towards meeting some need in the marketplace, and both of whose market price follows supply and demand. The more valuable the input (i.e. scarce+necessary), the higher the market price.
Capital is valuable (high price) when it's needed and scarce, e.g. in the junk bond market. Labor is valuable (high price) when it's needed and scarce, e.g. productive software engineers.
There's no sense in conceptually elevating capital above labor, or vice versa, insofar as we're attempting to understand economic systems, as they're both just factors of production.
Now, that's not disagreement with Sam's point. Labor will decline in value as an input since AI will provide zero marginal cost competition to labor (effectively it'll be a massive increase in the supply of labor). More supply = less unique value = wages crater. Capital may slightly decline in value too, but at a slower rate than labor.
>horses don't create demand, but neither do unemployed humans.
What Sam is proposing is massive wealth redistribution by taxing land and capital. Seen any memes referencing $1400 lately? Unemployed humans who are given cash absolutely create demand.
Certainly, if you put wealth redistribution into the equation, then we don't have this argument anymore. But I'm arguing in the abstract sense outside of government intervention or structural modification.
Besides, massive wealth redistribution in the form of direct payments indiscriminately is not politically viable right now, but maybe this line of argument can help.
I will admit up front that my understanding of what causes inflation is extremely limited, but it is not clear to me why.
Isn't one story of how some currencies get their start that some monarch or someone wants their troops fed, and so they require (on threat of force) that their subjects pay a tax in coins that the monarch has printed and distributes to their troops? If they did this continuously over time, say distributing only the same coins to their troops that they received as taxes (or keeping the amount in circulation constant by some means), why would the amount of bread that each coin would tend to buy decrease over the years?
(Of course, the monarch or whatever would also use some of the same coins for their own use, but they aren't being paid to do anything.)
Money supply and money circulation depend on market activity; you can't keep the amount of money constant because there is no perfect competition in which demand and supply are equal.
There is nothing wrong with subsidizing citizens and/or companies, but there should be a limit, because you can end up with inflation where, for example, you over-subsidize citizens and the resulting excess of money and demand drives prices of goods and services up.
Reflecting on your example: the Roman Empire, for instance, had huge military expenses, and they started debasing coins in order to mint more, but they ended up in an inflationary spiral in which the coins eventually lost value and their economy collapsed.
> you can't keep the amount of money constant because there is no perfect competition in which demand and supply are equal.
I don't understand what you mean.
What are the supply and demand in question? Supply of and demand for money? I've generally been confused by what people mean when they talk about the demand for money, especially when they talk about whether it is equal to the supply. The supply of money is a particular quantity of money (like M0 or M1 or M2 or whatever), and so, for the question "is the demand for money equal to the supply of money?" not to have a type error, the demand for money would have to also be a particular quantity of money. But I don't understand what this quantity could mean. I understand that for each desirable item there is a "how many [of that type of item] would you be willing to forgo for [quantity of money]", which relates how much someone values money - their demand for it - in terms of each of the types of items, but I see no way to take these together to produce a single quantity with units of money.
Maybe add up how much someone would accept as payment in terms of each of their possessions individually? idgi.
__
I appreciate that chart and the info about Roman Empire coins. I of course recognize that increasing the amount of money the state spends, making it easier for people to obtain money, tends to make the money less valuable, but I think this would be largely counteracted by increasing the amount of money that the state requires that people return to it?
Though, there is of course a limit to how much you can pay for this way. If you were a sovereign and wanted to, idk, produce as many chicken eggs as you could, by increasing spending on chicken eggs while also increasing the amount of taxes you attempt to extract, you would eventually run up against a limit on how many chicken eggs you can get each year, and trying to buy more chicken eggs would just result in either the price of chicken eggs increasing without increasing how many you get, or more people being unable to pay their tax obligations and being punished. Probably also a lot of people starving and such, because this would be forcing as much of the economy as you can towards producing chicken eggs? Kind of a horrifying situation actually. Would probably produce decreasing numbers of chicken eggs at some point. Well, much before this point, there would be a coup/coop. haha.
But, yeah, there's clearly a limit, and I suppose that this probably suggests a flaw in the way I initially thought of the idea, but I'm not sure where. Ok, yes, it does show that increasing how much you spend, even if you claw all of it back through taxes, will eventually buy smaller quantities of the thing you are spending on, per unit of money. In the chicken egg situation, how would the price of the cheapest sufficiently nutritious meal change? This isn't clear to me.
And I said money supply depends on market activity, meaning the only scenario where the amount of money would be constant is perfect market competition (quantity demanded and quantity supplied are equal), but perfect market competition does not exist in reality.
It is also called Market equilibrium: "in this case is a condition where a market price is established through competition such that the amount of goods or services sought by buyers is equal to the amount of goods or services produced by sellers." [2]
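For what it's worth, one textbook way to make "demand for money" a quantity with units of money is the Cambridge cash-balance formulation (just one standard formalization; I'm not claiming it's what anyone above had in mind):

```latex
M^{d} = k \, P \, Y, \qquad \text{equilibrium: } M^{s} = M^{d}
```

where P is the price level, Y is real output, and k is the fraction of nominal income people choose to hold as money balances. The familiar quantity equation MV = PY is the same statement with velocity V = 1/k.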
Only if the money comes from the printing press, as opposed to from taxing the rich.
Also, without the Fed printing money, we'd be feeling deflationary pressures, because we are in, you know, a recession. Some printing doesn't actually cause abnormal inflation in that situation.
All you have to do is look at the bell curve for IQ and see where this all leads, and it's nowhere pretty.
A person with an 85 IQ or below is too dumb to even be in the Army, even doing the most basic of tasks. Turns out, a military force with advanced computerized weaponry requires smart people. Now here's the kicker... 85 and below is something like 16% of the population.
That's 1,250,000,000 people that will, in the coming years / decades, be totally unable to function in the economy in any worthwhile way. And I'm sorry, I just do not trust the "benevolence" of the human race to fix this problem when our technology advances to the point that people of 100 IQ and below aren't useful. There is no way that 50% of the people with 101 IQs and above are going to spend money to keep the other 50% of the human race alive.
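For what it's worth, the arithmetic behind that figure is straightforward, assuming IQ is normally distributed with mean 100 and standard deviation 15 (the usual convention for IQ scores) and a rough world population of 7.8 billion, which is my own figure rather than the parent's:

```python
from scipy.stats import norm

# Share of a normal(100, 15) IQ distribution at or below 85 (one SD below the mean)
share_below_85 = norm.cdf(85, loc=100, scale=15)   # ~0.159

world_population = 7.8e9                            # rough recent figure
people_below_85 = share_below_85 * world_population

print(f"{share_below_85:.1%} of the distribution")    # ~15.9%
print(f"~{people_below_85 / 1e9:.2f} billion people") # ~1.24 billion
```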
I doubt there'll be "mass exterminations" like we've seen of ethnic groups throughout history, but more likely the unemployable will be left to their own devices; after all, out of sight, out of mind.
As higher and higher IQ individuals increasingly keep each other's company and self-segregate, I suspect we're going to see the development of an overclass of individuals that will eventually grow resentful of spending X percent of their income, whatever number "X" ends up being, on keeping those who aren't able to provide for themselves alive. That's a pretty common human emotion. The tribe works fine when every tribe member is even minimally capable. When half the tribe members are literally dead weight? Nothing good, I'm afraid.
More than 25% of people OF WORKING AGE aren't part of the labor force in almost all western economies [1]. And another 20-30%+ of the population is too old to be part of the labor force / retired [2].
That's already well over the 16% said to have too low an IQ to contribute in a theoretical economy. It doesn't seem hard to shift priorities - the age for retirement, the people who qualify for welfare, etc...
Since 2015, the US and France already spend over 30% of GDP on social programs [3].
Well if you look at almost all people, they can't either.
Job retraining has an abysmal success rate. I don't think it's even 10%, last time I looked into it.
That's why all the "Learn 2 Code" shit that Huff Post and Vox and etc. so forth were spouting off to rural West Virginian coal miners was so stupid... if you can write, or can even learn to write, complex C++ code, why the fuck would you keep mining coal to begin with? Strangely, I didn't see much "#learntocode" when those 47 Huff Post workers got fired, many of whom had been there for 9-10 years.
As you pointed out, there has been high demand for toilets even in recent years, and yet unemployment is a thing that still exists in the world.
Where are all the toilet-manufacturing and toilet-installing jobs? We certainly have the technology to produce billions of toilets if we as a society decided to. We know there are billions of humans who need toilets, why isn't this demand creating jobs? Where are the toilets?
A big part of the answer to that question is that the billions of people who need toilets can't afford them, because they don't have jobs that would enable them to pay for toilets.
Toilets are so easy to manufacture at our technological level that toilet factories don't require many workers. Certainly not billions of workers, but we have billions of humans who need toilets.
Maybe the same problem affects all products. Will technology advance until 10000 or 10 or zero human workers will be required to run all the factories in the entire world for every imaginable product? Where then will billions of unemployed humans get money to buy the products from those factories?
I think this is a good take - having things be better for horses was never the point of the economy. In fact horses were probably the main driver for developing mechanical solutions, since horse ownership is expensive and time consuming. There was no incentive to make life/jobs better for horses, and as soon as a better replacement was available horses were not necessary for most people.
So you can't swap out horses for humans, as the analogy is completely flawed.
>Having things be better for horses was never the point of the economy
Neither was making things better for humans. The point of the economy is for capital to reproduce itself, humans being taken care of is an accidental side effect because capital doesn't like a torch mob very much.
>... humans being taken care of is an accidental side effect
No, in any non-totalitarian society, humans being taken care of is the main effect. In a market of buyers and sellers, it is only by making the buyer better off that the seller survives. As Smith said over 200 years ago, "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest."
Things only have economic value because humans want those things. Capital doesn't have any desires or drives on its own, owners of capital do. Capital owners are humans. Whatever you might say about capitalism, the idea that it doesn't serve humans or can somehow function without humans is just totally wrong. Capitalist economies aren't just human-centric, they are literally collections of human wants and needs.
>they are literally collections of human wants and needs.
And humans are literally just collections of cells, does that mean you have no wants and that your pancreas is running the show?
It's very arrogant to think that only humans have desires. Nations have desires, institutions have desires, and capital itself has a lot of desires. Lots of scary entities out there that have desires.
When capital is unhappy with capital owners it's prone to fire all of them, reconfigure entire industries, eliminate entire sectors of human activity, redraw nation-states and in the not so distant future probably reconfigure human bodies.
Of all the firms in human existence maybe a fraction of a percent are still around. You think the 'owners' of capital wanted that to happen? Capital keeps tagging along, individual actors, not so much. Give it another few decades and you may interact with a lot of firms that have no human working for them at all, and possibly not even one owning them.
Nation-state desires & institutional desires are human desires. Capital can't "fire" capital owners, that's just a non-sequitur, the very concept of capital ownership is both defined and enforced by human beings. Other capital owners and consumers, ie. humans, fire them. Yes, entire industries are reconfigured and nation-states are redrawn because some humans want them to be reconfigured.
This seems like a fundamental misunderstanding of what people mean when they refer to organizations as entities. One might group together the common interests of an organization as a whole for the sake of simplifying discussion, but that in no way implies that an organization is actually some kind of inhuman entity that functions without humans. When people say "the government wants X" or "Britain doesn't like Y", they don't mean it literally. Government is nothing without the human politicians and officials it's comprised of and/or the human citizens that recognize its authority. A hundred dollar bill is worth nothing without humans to want it and the things it can buy. Suggesting that capitalism can produce non-human economic value contradicts the most basic definition of economic value.
And no, I don't think only humans have desires, but the capitalist economy we have created is based on human desires. Horses clearly have desires too, but thus far they have not invented an economy to serve horse desires. It wasn't non-existent horse capitalism that replaced horses, it was human capitalism.
> Of all the firms in human existence maybe a fraction of a percent are still around. You think the 'owners' of capital wanted that to happen?
Yes actually, I don't know why you think that's somehow undesirable. A lot of firms fail to adapt to human desires & needs so humans drop them. I'm honestly confused at the implication that capital owners want a lot of "firms" to exist when they're only a means to an end.
>One might group together the common interests of an organization as a whole for the sake of simplifying discussion, but
the same applies to you. You're a system of competing interests which we call "themacguffinman" solely for the purpose of simplification. You're nothing but an organisation of cells, systems and drives and the interaction of a lot of parts that we slap a label on for practical purposes. When we say something like "Britain likes X" we actually should take it literally, because Britain is a very real thing. Just like your body sends immune cells to kill themselves to fend off a disease, Britain sends her soldiers to die for a quasi empty rock off the coast of Argentina. Britain has territory, but none of her citizens have one. Britain has interests that are older and different than the interests of any of her people.
The world actually makes a lot more sense when you let go of the very atavistic idea that humans have any sort of special ontological status or have some level of realness that other entities don't have, it's just systems all the way up and down.
People actually intuitively understand this when they discuss the so called 'AI alignment problem'. They just don't understand that all the AIs are already around and we're solving alignment problems with entities with non-human desires and interests all day already.
> Britain has interests that are older and different than the interests of any of her people.
Well no, Britain's interests are literally and technically and metaphorically and whatever-ly not older than the interests of any of her people. Britain did not exist in any way, shape, or form before any of her people. The humans of Britain sent other humans of Britain to die for the sake of humans of Britain. Britain doesn't have territory, the rulers and leaders of Britain have territory. Someone always benefits.
This is a fundamental misunderstanding of systems thinking. It can be useful to describe the collective interests of an organization as a whole system, but they're still human interests. Sending immune cells to kill themselves to fend off a disease is still in the interests of cells, just like the humans of Britain sending humans to die in a war is still in the interests of humans. And just like humans can't have interests without cells, organizations and economies can't have interests without humans.
I'm also not saying that humans are special or unique in the way that they can have interests, I'm saying that the capitalist economy we live in is defined by human interests and cannot somehow become non-human. Human interests will always be involved because that's how we define the system of capitalism & economy. Suggesting that our economy could somehow replace all humans makes as much sense to me as suggesting that a body of water could some day replace all of its H2O molecules.
I'm going to go ahead and say that of course increasing inequality caused by capitalism is a major concern for many humans which seems to be what you're getting at. This talk of Britain & capitalism functioning without humans is distractingly nonsensical. No, capitalism can't get rid of humans because a capitalist economy without humans is not a capitalist economy. No, the importance of horses is not comparable to the importance of humans to a system literally defined by interactions between humans.
> Yes, entire industries are reconfigured and nation-states are redrawn because some humans want them to be reconfigured.
I think you're missing the point that if necessary, the humans involved can be swapped out until ones with the 'right' desires are in place.
> but that in no way implies that an organization is actually some kind of inhuman entity that functions without humans.
An organization doesn't have to be capable of functioning without humans to be an inhuman entity with inhuman motivations and so on. The human components just need to be fungible and more easily steered by tweaking incentives etc. than the organization is.
For that matter, epiphenomena like regulatory capture really don't occur at an individual level much, but largely at the interface between governments/agencies and corporations, or even higher-level entities like industry trade groups and consortia.
But having things be better for humans was never the point of the economy either. The actual point has always been making things better for a subset of humans, everywhere throughout history.
I don't see why an unemployable horse would be considered any differently from an unemployable human to the cold profit motive.
Until true modern-style democracy, yes; but in modern democracies the people get to decide how the country and the economy are run. Yes, really. The fact that the consensus view happens not to align with what some people (perhaps yourself included) seem to think it should doesn't make that not true.
Sometimes that leads to really unfortunate outcomes (in my view) like Trump and Brexit, but those things were the result of genuine democratic processes and driven by the choices of individual voters. That's true generally in the democratic world too.
It may be convenient to blame the state of the world on anti-democratic forces running things from the shadows, because that can be used to de-legitimise the current state of the democratic world, but I'm afraid that's self-delusion. This is the world we have chosen through a process of consensus and compromise.
The difference is that if there are enough unemployable humans they can vote, rebel or otherwise enforce that they get a share of the profit. Horses did not have that option.
Incentive? Not having to do hard labor? Making more money?
I think that most technology has made life easier if not measurably better for humans. Medicine, machinery, transportation. Those things don't make life better for people? I suppose the cynical take could be that it is for the 'ruling class', but I think the measured take is that even the lower rungs of society have at least some access to better jobs than they did 100 years ago.
The lower rungs of society have access to better jobs because they were needed by the ruling class as a workforce, and because technology advanced. As a result of the above two, they were able to negotiate economic power and better working conditions.
What is the incentive for "wasting resources on the lower rungs of society" when you don't need them as workers anymore?
> So you can't swap out horses for humans, as the analogy is completely flawed.
You kind of can though. The economy cares about no one, it cares only about capital. And as far as capital is concerned, humans are just fancy horses. Humans are expenses that need minimizing to the point of zeroing. In a perfectly efficient capitalist economy, no one pays any humans for anything ever.
Horses were not workers. They were tools workers used.
Farmers used horses to plow the fields. Transport workers used them to transport goods and people. As those workers got better tools to plow and transport, productivity increased, society got richer.
There are also millions of old telegraphs, slide rules, and rotary phones "unemployed" across the world. The fate of these no longer needed tools tells us nothing about the conditions for human workers.
Also, the fact that you can't find a human example to make this argument, but have to use a loyal, brave, and majestic animal instead, should be enough to hint that this is maybe a bit of emotional manipulation rather than a rational argument.
Guess what, 90% of humans are also tools that a small percentage of other humans use to accomplish their goals.
Few people would argue that the job of “Amazon warehouse worker” is anything other than a tool for Jeff Bezos to become richer and more powerful. The person who has that job, from when they clock in to when they clock out, is functionally equivalent to a horse.
Amazon software engineers are the same, they just make more money today.
By that same logic Jeff Bezos is just a tool I use to get some obscure math textbook written in English shipped to my apartment in Eastern Europe overnight.
The point is that a horse can only perform a dramatically narrow set of functions. It's basically a big walking engine. Conversely, Jeff Bezos can tend a garden, build a house, create art, write documents, package goods, negotiate contracts, recruit employees, organise a company, care for children, drive a car, take out the trash, cook dinner, etc, etc.
Comparing people and horses in terms of the jobs they can do, and their ability to adapt to new tasks, is so unbelievably dumb it's staggering it even needs to be pointed out. This is why these arguments for automation imminently putting vast swathes of people permanently out of work have never come true, in the hundreds of years they've been predicted.
As BurningFrog says, please find an actual human analogy. You can argue until your face is blue that this applies to humans, but if so you should be able to find an actual example.
Meanwhile developed world economies are close to full employment and have many more vacant positions than unemployed people. We have a skills problem, not an unemployment problem.
An example would be “due to X, essentially all humans are unemployable, except as sources of entertainment for the rich, or as meat”. As this is also the conclusion, it can’t be used as an in-advance example for why the argument is or isn’t reasonable. (Inverse begging the question, I think? https://en.wikipedia.org/wiki/Begging_the_question)
I’m saying “if so you should be able to find an actual example.” is false.
CGP Grey argued (paraphrased) “if X happens in the future, Y will be like Z”, you are arguing that Y will never be like Z in the future because Y is not currently like Z.
But X has happened many times, and Y has never yet become like Z. Automation has replaced workers on massive, ginormous, gargantuan scales. Factories have replaced manual workers, then more efficient factories have replaced those factories, and then that's happened again several times over. Yet still we have near full employment and many, many more open positions than we have unemployed.
How long are we supposed to wait for this prediction to come true, and how many times does this have to be disproved across how many economies?
X is “AGI invented”. Not special purpose AI, not automation, AGI — the G stands for “general”. If this has ever happened, it wasn’t on Earth (unless you count evolution producing us, in which case the example is “Neanderthals are all dead”).
I see. I don’t know how to rephrase it to be better, but I’ll have a go:
Horses represent the importance of muscle power in the economy, which was made vanishingly small by mechanisation; AGI would do the same for intellectual labour.
But to do what? If the economic future this piece presents is right then AI will be able to do everything that humans do.
The difference between this and previous tech improvements is that this one replaces human thought, not manual labor. That’s fundamentally different than everything that’s come before.
People have choices. They can work for another company, they can start their own business, they can go live in the forest and live off the land. People can do whatever they want.
If they want to get money and work at a specific company, don't claim they are slaves. Amazon makes an offer: hey, if you work here you get X amount, and they accept.
You can claim an employee is a tool for the company, but you can also claim a company is a money generator for an employee. It's a work-money trade between 2 parties. Don't claim it's slavery.
Spend as little as possible and use all your extra cash to buy stocks. Not meme stocks of course but either an index fund or if you really trust your skill go ahead and pick stocks.
If there is a wealth transfer you will be at least partially on the receiving end.
More often than not it's "take this abusive offer or starve to death". Not much of a choice there.
The way I see it, what you're describing is an outdated ideology, which does not apply in practice in most parts of the world, except for a few privileged places.
Come on, this is a very narrow view of things. There are a lot of people that can't just quit their job and start their own business, because well, they don't have any savings or access to credit. The work-money trade could be very very unequal.
I don't see the distinction. Aren't workers also tools used to produce things? What's special about workers that makes them irreplaceable?
> Also, the fact that you can't find a human example to make this argument
What would you count as a single human example of human workers becoming obsolete? This has happened many times. About 96% of people used to work in agriculture. Today it's about 4%. Machines and technology have replaced the rest of the labor needed to grow our food.
But the food needs (and other needs) have expanded 10x. It makes no sense to look at the absolute numbers instead of the percentage. If there had been no technological advancements at all, the workforce would have still expanded (in absolute numbers) to match the population.
Without machines and technology, we would need far, far more people today in agriculture. The need for all those jobs has been destroyed by technology.
When one job becomes obsolete, this frees up labour and people gradually move to other jobs. This increases production, since we're still producing agricultural products (using fewer people), and there's more people now producing other things. Agricultural jobs were destroyed, but there were always other jobs people could do, that weren't done before simply because agriculture sucked up so much labour.
This pattern will continue for a while. When human labour is no longer the best tool for one job, that labour will be reallocated to something else. There's still plenty of things we'd like more of.
But the video is talking about what happens when humans are no longer the best tool for any job. It's happened many times in history that some job has become obsolete. We haven't yet made a machine that's better than humans at every job.
One way to look at this is to compare the basic capabilities of humans and machines. The human list remains roughly constant, but the machine list increases over time.
Actually, what's happening in automation is not what people expected. We still don't have good robot manipulation in unstructured situations. Machine learning gave us automation of "higher" functions first.
The current result is a growth in really dumb jobs, like Amazon warehouse pickers. The computers do almost all the thinking. Humans just pick up things where told to do so and put them somewhere else. Such jobs have no promotion path.
This is the "machines should think, people should work" revolution.
“World population then was ~800 million. Today it is ~7700 million. Most have jobs.”
I doubt this.
In the USA, we have an unemployment number: "able-bodied, but not working, or looking for work." In the '50s, it was basically housewives. Today it's a lot of people. (This might not be the exact wording the government uses.)
My point is this number is huge, and never talked about much in the United States. I don’t think the government even has a way of tracking the number?
I don’t know how the rest of the world treats this number.
My guess would be most don't have jobs, or if that number crossed 50% in any country, the jobs would not pay a livable wage for someone living alone.
> I don’t know how the rest of the world treats this number.
India deliberately chose not to automate agriculture. They still have about 50% agricultural workers. Recent attempts to change this have resulted in riots.[1]
This is moving the goalpost, though. The discussion was about whether or not the replaceability of generalized human labor is comparable to the replaceability of horse labor. No doubt there can be reasonable disagreement, but we can't compare the adaptation to specialized function replacement with generalized function replacement. The point of the analogy is that it (generalized labor replacement) is novel for human populations, but not for horses.
I think that the bigger flaw in the analogy is that horse reproduction is mostly controlled by humans in ways which (to say the least) would not be considered acceptable for humans to do to other humans. Yes, it might be true that most individual horses had more comfortable jobs during that time period, but they also got to reproduce a lot less (it’s also probably uncomfortable for horses to not reproduce, but I’m even ignoring that for now).
The point is that we found new jobs for horses because there were areas where machines were inferior to them. Once machines could do everything that a horse could do (probably some mixture of load carrying and agility / endurance) we had no more work for them.
To bring the same logic to humans, in history machines freed up humans to do better work than before. Let's say what humans excel at is dexterity and thinking. Once machines are more dexterous and can think better than people, there's no more new jobs - because whatever jobs they would be, machines will be better qualified for.
The number of jobs that horses can do is a rounding error compared to the number of jobs that humans can do. If we're living in an imaginary future where machines can be trained to do 90% of jobs, sure. But google a list of "most common jobs" and see how many a computer could do fully.
Even if you think a miracle ai revolution is on the horizon, humans and horses are not comparable in terms of diversity of labor
> Even if you think a miracle ai revolution is on the horizon, humans and horses are not comparable in terms of diversity of labor
As someone who does expect an AI revolution, I think you’re being overly literal with an analogy.
For argument's sake, a toy model of AGI: copy a human brain onto a substrate that costs less to run than a real human costs to feed [0]. For every intellectual task this human could have performed, it is now cheaper to have the AI do it. As this AI could be based on any human by the nature of the hypothetical, any human intellectual task is now done cheaper by the corresponding AGI.
Current robot bodies (for the hypothetical uploaded minds) are a bit meh, but (a) not so much as to make the idea ridiculous, and (b) we've already got BCIs demonstrated in animals, so an AGI puppeting a fast-growing cheap animal is also entirely within the realm of the plausible if you don't want robots and don't care much about animal rights (though not much worse in that regard than current meat production, IMO).
As an aside: tasks is a better metric than jobs, as almost all jobs have many tasks some of which are easier to automate than others.
> The point is that we found new jobs for horses because...
My core point is that horses didn't have jobs!
People used horses to do jobs.
> Once machines are more dexterous and can think better than people, there's no more new jobs - because whatever jobs they would be, machines will be better qualified for.
I think this is another version of the same error. Machines have jobs in this scenario! People have to pay for what the machines produce, but there is no human on the other side of the transaction, so after a while humans run out of all money and die off.
Or something.. It's getting late and I'll stop here.
If I order food delivered, I and the restaurant are using a human to do a job. If that job can be done by a drone instead, that human has to find something else to do for income. For better or worse, the labor market is a process for turning other humans into tools.
The fact that the horse is paid in oats and not wages is irrelevant: a socially useful niche is transferred from a biological creature to a machine. It's an open question whether the future will contain enough socially useful niches for every working-age human (at least, ones that can pay a subsistence wage).
It is not misleading at all. You cannot find a human example because there is none yet, just as there was none for horses right until we found a way to replace almost all their production with machines. There is likely nothing magic protecting human function from exactly the same fate. We can argue about the time scale, but it seems clear we are headed that way.
I agree that the "new jobs are generated automatically" claim isn't supported by any good argument.
But the horse analogy isn't a good way to show this imo. I hate to be "speciesist" but human beings have a remarkable ability to learn new skills and attain new abilities. Horses don't. The human consumption of humans' time, attention and so forth is at the center of the economy. Human consumption of horse time isn't all that high a priority.
Human flexibility makes human activity something that can satisfy a vast number of human wants (and it means that there are a vast number of those wants to satisfy). That still doesn't mean you'll always have people employed but it makes employment a more plausible thing.
Yes, intuitively, over the last 50 years, the world population increased substantially, meanwhile, global poverty decreased manifold (90%?) over the last 50 years, and the vast majority of humanity is employed doing 'something'. Even with the greatly increased productivity of automation in farming and manufacturing, humans are creating new industries that build on what's been developed in the past.
What are the trends that will lead to mass unemployment, even decades hence?
> What are the trends that will lead to mass unemployment, even decades hence?
Visit a slum in Mumbai or a village in the Congo. Better yet, visit a trailer park in the Rust Belt. I really don't get why people don't understand how the future for most people - who don't manage to accrue capital prior to the substantial advent of AI and automation - is already out there for us to observe.
Another way to think of this: imagine two equally-sized countries A and B. A implements Sam’s suggestion, taxing everything at 2.5% pa, B doesn’t.
In A, people are happy, well fed, pursuing their interests, living meaningful lives.
In B they are not.
However, in A, companies and the economy are not growing so fast: resources are funnelled into a populace that takes but doesn't give back.
Over a long period of time, the extra 2.5% growth in country B will become so meaningful that it will look back on the savages in country A and decide that A's resources would be better off under B's management.
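To put rough numbers on that, taking at face value the assumption that the 2.5% tax maps one-for-one onto a growth-rate gap (a big assumption, disputed below):

```python
# How large does a 2.5 percentage-point annual growth gap become over time,
# assuming the tax translates one-for-one into lost growth?
for years in (25, 50, 100):
    ratio = 1.025 ** years
    print(f"After {years:>3} years, B's economy is ~{ratio:.1f}x the size of A's")
# After  25 years -> ~1.9x
# After  50 years -> ~3.4x
# After 100 years -> ~11.8x
```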
The 2.5% tax doesn't mean country B is growing 2.5% faster. The 2.5% in country B could be sitting in a vault doing nothing or being spent on yachts. Conversely the 2.5% in country A is being invested in making the workforce more productive.
> In general it is right to assume that free capital will be used as productively as possible.
Only if you define 'productive' as 'generating more capital'.
Yes, over the very very long term, externalities that aren't accounted for (it doesn't matter whether we're talking about negative externalities not mitigated, or positive externalities not invested in) will eventually exact their toll on capital, but in the short, medium, and even moderately long term, any externalities that can possibly be dismissed as irrelevant to the fundamental mission of turning large piles of money into larger piles, will be.
It is also worth noting that by ignoring externalities until some crisis forces the issue, opportunities are opened up for capital to (quite profitably) address the resulting crisis. Whereas addressing an externality in a more timely manner, while reducing overall costs, even more sharply reduces those opportunities.
All of those things are true, but they are things you believe after you have evidence to believe them. That you should assume free capital is being used in the most productive way possible is the null hypothesis and something generally recognized as true, despite how much HN doesn't like it.
If country B in that case has extreme inequality, it may actually struggle with economic growth, as there isn't enough purchasing power among the poor to drive consumption, which generally is what drives corporate profits.
In that case it could be that country A outgrows country B because the redistributive policies have a stimulus effect.
The thing is, we don’t know how fast society-wide experiments will turn out.
In the US, this is where federalism is advantageous. Allow different states to try different policies and then measure the outcomes. Over time, winning policies will emerge and spread across the country.
Could, say, California tax corporations net worth at a percentage of the revenue derived from California? (This is a downward extrapolation from the original article's proposal to deal with assets hidden overseas.) I'm queasy about the current SCOTUS's assent.
As technology advances, more "wealth" can be created with fewer inputs, of course.
Greater wealth means a bigger "pie" to split. Inequality obviously comes into question here, but objectively there is more total wealth that can be spread around, so it's a net good, if we can optimize the distribution.
It also means that the contribution of an individual can create more value. In some ways this leads to inequality, where the most capable people are able to capitalize on technology and make a lot more money than was possible in the past... see: any of the FAANG founders.
Taking the thought experiment to a logical extreme: in a hypothetical world where 100% of the supply chain is automated, people would no longer be required to work. So it must be true that there's some level of automation that will produce a net decrease in jobs. At that point some form of UBI likely makes sense.
Though I'm doubtful we'll ever reach that kind of state. Realistically people will continue to divert their efforts into new things.
I don't necessarily disagree but that's really not a good analogy. Horses aren't people. When their jobs disappear no one gives a fuck. No one speaks up for their interests. People will not go so easy. They are able to create amazing things if given the opportunity. No one wants mass human unemployment.
Pre-covid, Global wages and employment levels have never been higher. Global poverty levels have never been lower. Either way, I don't disagree that AI will replace some jobs, I'm just not sold on the horse story.
And? We still have people dying of starvation worldwide. We have scads of homeless in the richest country on earth. Why do you think anyone will care at all?
You have to be willfully ignorant to believe a proposition that no one cares, and willfully naive to believe everyone will.
Look at https://ourworldindata.org/economic-growth and take 5 minutes to look at the data on the rest of the site and you will understand how much life has improved not just in raw numbers (there are more people living above the poverty line than existed 100 years ago), but more importantly in percentage terms - a far smaller fraction of people live in poverty today than in 1880.
In fact, the last time I checked the numbers, fewer people lived in abject poverty in 2019 than did in 1880 - despite a 7-8 fold increase in population.
People have an advantage that horses don't have: people can own property, and they have legal rights, like the ability to vote in elections. When we're talking about AI, though, it's not too hard to imagine computer programs eventually being allowed to legally own things and accrue wealth. It's harder to imagine them being allowed to vote (how would that even work?) but something like it could happen. We can also imagine this going the other way: economically "useless" people losing the right to own property or the right to vote. Or if the rights continue to exist in theory, in practice they become poor and disenfranchised. And then "the strong do what they will and the weak suffer what they must." It happens all the time.
I think in the long run we need to figure out a reasonable economic system that can deal with abundance of the things people want/need, and scarcity of work. Maybe it's as simple as making a 30, or a 20 hour work week the norm. The default alternative seems to be to have more and more people fighting for fewer and fewer jobs, to the point where everyone who has a job has to work extremely hard under long hours and bad conditions for low pay because if they don't there's ten more people ready to take their place at the drop of a hat.
It's fine to reply to an allegory with "you're right, that solution doesn't work, but it doesn't preclude other solutions because of these reasons, so don't use it to make decisions".
The comment directly says it's not a good analogy. Complaining that they didn't specifically say "allegory" is really nitpicky. The terms are very close and overlap.
Depends on what reason they gave. If they misunderstand something about humanity, that's a great jumping point toward discussion.
But if they said "yes, those politics would happen in humans, but some of these other factors are more important toward the eventual outcome[...]" then that's perfectly compatible with understanding the story. And that's roughly what the earlier commenter did! They accepted the argument that jobs will go away, while saying the secondary and tertiary effects would be different.
Animal Farm is a fictional story about humans disguised as animals. The horse allegory is a non-fictional story about the downfall of horses, which are very much not human-like.
The horse allegory is a fictional story involving talking horses. It is hardly any more fictional than a symbolic retelling of communism spreading. Both are intended to prove a point, not accurately portray the details of history.
I so agree that increasing automation doesn't always mean more Jobs That Make Money (even if it might continue to for a while yet). Maybe when there's less work for humans to do we just... work less? and get more?
You can do it right now — become a contractor or a consultant.
You will immediately notice that nobody likes it when work is done slowly. Working for 4 hours a day is only possible if you complete a paid portion of work at that time (e.g. teach 2-3 classes). Nearly any project that has work for multiple days would benefit from people working on it 24/7 and completing it faster, but it's often too expensive.
So your less work schedule often becomes a high-energy consulting gig when you work 12 hours a day, and then a couple months of leisurely coasting.
I have no reason to think that the future would be materially different. Just finding the gigs is going to become harder. This very future has already arrived for many low-skilled workers, like those in Amazon warehouses; they never know if there will be demand for their service today, and need to apply daily.
Yes, some people will get screwed and end up like those out-of-work horses because of AI or automation. Other people will benefit greatly. On average, most people will benefit a little bit. This is how every technological advance has worked for the last few hundred years. The promise of higher efficiency == higher standard of living is a general promise for society as a whole, not for any particular individual.
As long as this is like a retired horse given support to last out its years in a nice pasture (i.e. UBI) and not the dog food route, we'll be ok. If the change comes too fast leaving too many without too much while too few have it all, then we'll have problems.
Our goal shouldn’t be to have jobs but rather no one having to go to work at all. That’s the age of abundance that is possible through AI and robotics at some point. Our goal should be 100% unemployment for humans. We should be focusing on art, outdoors and other creative pursuits.
Future generations will be surprised that we lived the way we did just so we could have a "job".
Horses didn't have the vote. Unemployment is not a technological problem. It's a political problem, and in fact, one that shows the causal arrow runs the other way. Scarce labor spurs innovation, but when labor is cheap, businesses just sweat the workers instead of building new machines.
Horses could not be retrained to do jobs other than those replaced by machines.
Horses did not have voting power.
A large number of unemployed horses would not go on robbing and killing humans in a mob.
And a decrease in human population is welcome up to a point. (We have relatively more educated people having fewer or no children, and backward, dumb, stupid people having more children, which is BAD).
Horse population peaked? So what? That assumes the population number is an indicator of their quality of life. However, as the allegory indicates, a horse's quality of life today is leagues better than what it once was. Better feed, better health care. A good barn is harder to come by, but what "good barn" means is a rising bar by the day. Pretty good odds of making it into your late 20's.
There are fewer horses now, but it's not like they were taken en masse out back and shot in the head (though that certainly happened to some). They just aren't bred as much. I don't think anyone (horses included) bemoans that.
If horses are the comparison, then I don't think humans have a thing to worry about.
That logic only works because we have no issue with slaughtering horses en masse. It'll take 100+ years of economic deprivation for the human population to balance out.
Some humans have issue slaughtering horses en masse. But still, I don't follow your point. Horses WERE NOT slaughtered. They were just bred at a point lower than replacement.
That CGP Grey video is 6+ years old already and none of its claims have come to pass. That video and this blog post both seem like the usual techno-utopian fluff -- a sermon to the already converted.
Here's the truth: life expectancy is going down, birth rates are below replacement, sperm counts are down something like 90% in less than a century, we're working longer hours than medieval serfs, and suffering from more disease than our hunter-gatherer ancestors.
This Techno-religion of AI, genetic engineering, automation, etc being "right around the corner" is the opiate of the masses -- the propaganda keeping the working class from revolting.
For me, it's enough to consider the state of our hearts to realize how much more we need before this utopian vision comes to be. We may be extremely educated and creative, but we still walk and drive by homeless people in the wealthiest country in the world. We still work for and consume the products of companies that see nothing wrong with profiting from things that are universally obscene. Compared to our minds, our hearts have made little progress.
As prosperity increases, growth booms but then predictably falls below replacement rate. So humans kind of do this on their own if you can get through the boom period without breaking things too badly.
Maybe I don't understand the question, but you can see that in population statistics - certain populations reproduce at an above replacement rate, some below. In a lot of the developing world, however, and no different 150 years ago in the developed world, you'd have a lot of kids because you'd expect several to die early. Once a certain level of prosperity and peace is reached, this doesn't become as much of a biological imperative.
I am aware that some countries reproduce above replacement rate and some below, and that this rate is normally tied with country economic prosperity.
My point was an evolutionary one: wouldn't those couples who choose to reproduce more (assuming you can afford to raise your kids) be selected for, evolutionarily, vs couples who chose to remain childfree?
Thus, even in a few generations, I would assume a moderate cultural shift towards having kids, since now only those who want kids, have them.
The first hormonal birth control method (The Pill) was invented in 1960; we're 60 years beyond that (~3 generations). Maybe my theory isn't panning out as fast as I assumed.
> My point was an evolutionary one: wouldn't those couples who choose to reproduce more (assuming you can afford to raise your kids) be selected for, evolutionarily, vs couples who chose to remain childfree?
Keep in mind that the incentive to have kids, evolutionarily speaking, is weak enough that we have had to be bribed with sex to keep things going.
Once sex and reproduction were decoupled, you're definitely looking at much more than a couple of generations before biological selection pressures start having the effect of an increase in the desire to have children per-se, as distinct from the desire to have sex.
OTOH, cultural incentives are an entirely different matter: whether we're talking about religions decrying or forbidding the use of contraceptives, or advocacy for those who want children at all to have larger families, these can have a snowballing effect, especially since culture is passed down to offspring even more readily, though less reliably, than genetic traits (e.g. you can pass your culture on to an adopted child, or to a spouse, etc.).
This selects for cultures that are good at spreading themselves and good at encouraging reproduction, and selects humans for a general tendency toward susceptibility to indoctrination (at least during childhood) including the imperative to eventually pass the cultural package onward. To a certain extent it also selects for acceptance of authority and resistance to indoctrination with beliefs that conflict with already held ones as an adult.
Actually, all these particular biological traits are selected for and reinforced by so many other different feedback loops, that cultural packages can mostly just assume their presence and focus on leveraging them.
> In the next five years, computer programs that can think will read legal documents and give medical advice.
Aside from the other points, taking "AI" as it exists in its present form (deep neural networks and related) as specifically the bringer of unlimited wealth certainly puffs up the various "AI companies", notably OpenAI. (It should be noted that OpenAI's most famous product, GPT-3, can generate strings that sound a lot like legal or medical advice but so far "demonstrates non-understanding on a regular basis". Don't follow its advice to kill yourself, for example.)
It really should be said that deep learning, in particular, is still just one technology: very good at some things, kind of impressive but not functional at other things, and simply unable to do still other things (actual understanding of biology, for example, seems well beyond it). I don't think this situation has changed since deep learning began its hype cycle (which isn't to say it's "nothing"; it just doesn't seem likely to bring us "everything", a scenario the article literally sketches).
Automation has proceeded apace; automation in general has brought us enough resources right now to give minimal comfort to most people on the planet (as people have noted).
But automation has generally succeeded in situations where everything is controlled - i.e., factories. Self-driving cars are forever five years away given the 5% or 1% or whatever level of unpredictable variables involved. Progress on robots that can interact well with either humans or "the messy real world", even in very limited terms, has been painfully slow, and I expect this to continue.
The scenario of AI mostly replacing people like doctors and lawyers involves bizarre paradoxes beyond whether deep learning "AI" works as advertised. Suppose you can train an "AI" to read legal papers or diagnose patients based on X-rays. That training is done from data on the actions of real-life lawyers and doctors. Suppose, best case scenario (very unrealistic imo, btw), you have a complete "snapshot" of the behavior of lawyers and doctors in a given year. The problem is reality changes; you need new lawyering and doctoring behaviors after N years. Doctors need to interpret new maladies, lawyers need to cite new decisions, and both need to interpret new language forms that appear. But if you've actually removed the real lawyers and doctors, where would you get the new training data?
The only way you could be beguiled by this framing is if you don't understand just how inept most doctors and lawyers really are.
A future configuration will probably look something like: far fewer highly talented doctors and lawyers remain employable while the rest are replaced by AI that's shown to be vastly more capable, and that is continually enhanced by the encoded expertise of said highly talented remaining specialists.
> You don't understand just how inept most doctors and lawyers really are.
It doesn't matter how competent or incompetent whatever professional might be. The only thing a deep learning application is going to do is duplicate their behavior. Deep learning involves no "thinking" at all, just very elaborate, brute-force curve fitting. If the doctors are on average "incompetent", so will be the deep learning app (i.e. you kind of fall for the sort of "since it's a machine, it will be accurate" fallacy that makes people want to trust self-driving cars).
> A future configuration will probably look something like: far fewer highly talented doctors and lawyers.
If you really automated the work of lawyers and doctors with explicit rules, maybe. BUT that isn't how "deep learning" works. Deep learning just uses data, and the problem is you need sufficient data - a sufficiently large corpus - to show by many, many examples what the thing should do.
Oddly enough, your scenario of top experts adding their expertise to the system is much more like the original GOFAI model, where a few experts would hypothetically program in their expertise. That scenario failed because of the difficulty of encoding expertise. The present systems can't work that way.
> fallacy that makes people want to trust self-driving cars
A lot of successful L4+ autonomous vehicles today, contrary to what the press releases want you to believe, are architected first and foremost as non-learning (i.e. traditional robotics) systems, with relatively well-defined, domain-specific sub-problems carved out and delegated to learning-based methods (e.g. recognizing all cars/humans/signs/... in images captured by the vehicle's cameras). These problems tend to have well-defined metrics and massive real-world data sets backing them up, and it is increasingly common for these components to report how confident they are in the provided results.
ADVs have come a long way despite all the doubt, and the top players are finally getting confident in removing the human from the driver's seat. This is not trivial in the post-Uber-ADV-fatal-accident world.
> Oddly enough, your scenario of high expert adding their expertise to the system is much more like the original Gofai model where a few experts would hypothetically program in their expertise. That scenario fell with difficulty of expertise programming. The present systems can't work that way.
Taking a generalized approach will fail. Taking a tailored, domain-specific one that incrementally carves out use cases will be the basis of future success in these spaces.
The opposite of this is true. With sci-fi AI on that level, bad lawyers would just pay to use the AI for legal work and focus on developing social relationships to get valuable clients. This is how a lot of professionals spend their time now.
Most things in life that make a lot of money are social. If everyone has access to AI knowledge and skills that can be copied, then the social aspect is even more important.
Some of it is just framing. You can replace a lot of what doctors do with regular tech and plain old organization. You could have done it for decades. Problem is regulations put a lot of limitations on who can perform what service. You can get the best well-informed advice from wherever, but you still need to go to a doctor for your prescription.
I could very well see a situation where "AI" is what finally gets regulators to loosen up, but what actually gets implemented will be traditional stuff that works. It may take a "surgery robot" to allow greater freedom in training people to perform surgeries.
Won't be surprised if in 20 years self-driving cars do indeed come to dominate. But most "AI" will get dropped and roads will be retrofitted with something dead simple that assists cars in navigating (I believe that idea goes back to the 50s or 60s too).
I assume the author is using AI to mean software in its broader sense, not just a machine learning algorithm.
Somebody will eventually write an algorithm that actually understands the laws as they are written and the case law that interprets them, and can read and write contracts. The algorithm might not have anything to do with ML.
I would like to think one day we'll actually understand how our bodies work and can diagnose problems as if we were debugging software rather than stabbing into the dark with drugs.
In 1995, a self-driving car drove 98% of the way across the country. Think what these same people predicting AI today would have predicted in 1995. They would probably have believed we were 10 years away from self-driving cars in every household. We still don't have a mass-produced Level 3 system in 2021.
At least a couple of times in the article the author talks about software "that can think". Deep neural nets can't "think" at least not in the sense that humans can think. I don't think we have any software yet that's close to "thinking".
> If everyone owns a slice of American value creation, everyone will want America to do better: collective equity in innovation and in the success of the country will align our incentives.
I kind of doubt that. At Google we're paid in part with shares of GOOG, but at Google's scale that's just treated as cash compensation. At my level, nothing I do affects the stock price, and most Googlers feel this way.
Sure, I want Google to do well, and I want America to do well too. Both of them doing well benefits me. But it doesn't really encourage me to do something different day-to-day.
You never know. Butterfly effect and all that. It adds up. The best example I know of is from when I was younger and playing WoW, and my brother was explaining gemming your gear to me. I told him "what will a +4 intelligence gem do really?" But you add up all the gems in all the sockets on the gear and it makes a huge difference - the difference between a strong character and a weak one, between living and dying. Each person at Google is a potential gem in the system. Or you can be an empty socket. Add all those up and it does have a huge effect on the outcome of Google (and the stock price).
I apply this theory to many diverse subjects- voting, finances, human health and car maintenance (once one system is suboptimal or impaired, others often follow). Keep your sockets gemmed. :)
>You never know. Butterfly effect and all that. It adds up.
The butterfly effect affects almost no physical systems, which contain incredible amounts of damping processes. The same thing happens with people - if a zillion of them want things that point in somewhat different directions, the net does not add up, it cancels.
Otherwise most physical systems would simply explode to infinities, but in practice they don't. They dissipate and become less useful.
So nicely said.
I read a quote somewhere about ancient big buildings... like how the stonemasons crafting the stones had to imagine each stone being really important to the bigger picture.
The quote said it better than I do here.
Feeling basically neutral towards your employer (or nation) is probably how having reasonably aligned incentives feels at scale. If you were an hourly minimum wage worker for a large corporation, there's a good chance you'd spend a good part of your workday in a blind, passive-aggressive rage.
> democracy can become antagonistic as people seek to vote money away from each other.
...
> We should therefore focus on taxing capital rather than labor
I find it amazing that these "thinkers" don't see the irony. You are "afraid" that people are going to vote you away from your wealth via democracy; but then you feel privileged to decide how this wealth should be taxed. Since AIs are getting smarter than humans, maybe we should let them decide?
> and AI teachers that can diagnose and explain exactly what a student doesn’t understand.
Since AIs are smarter than humans, why bother teaching humans at all? Unless there is a point where a developed human brain could outsmart the AI, it seems to be a waste and emotionally guided.
> We could do something called the American Equity Fund.
Any solution that is not universal is bound to fail. These companies can move overseas to cut their tax bill. It's also easier to move now than it used to be, since all they need to move is data; the electronics are already made in Asia.
The reality is, there is no place for humans in an AI driven world where AI is smarter than humans and robots can make stuff. Realizing that is the first step to move forward in this new world.
> Realizing that is the first step to move forward in this new world.
Right, we've become disposable. Up until the computer era we were part of the equation. We went to war, to vote, and we paid taxes - making ourselves useful for state and capital.
Now, they don't need us anymore. We are a burden for the state (universal basic income) and capital plays its own game (high frequency trading).
Samuel Butler was right in Erewhon (1872): "Advertising is the way we grant power to the machine"
AI is powered by advertising, by the data we voluntarily feed into the machine. For the false reward of shining the ego.
Yes, we must admit, as a species we are highly disappointing. Instead of lifting ourselves up, we've created a monster above us.
> Since AIs are smarter than humans, why bother teaching humans at all? Unless there is a point where a developed human brain could outsmart the AI, it seems to be a waste and emotionally guided.
The obscurity of the definition of AI is precisely what drives this. The term AI is so vague among tech circles that I find it basically laughable at this point.
If we're talking about sentient AI, then none of these concepts matter. We'll have WAY more interesting problems to deal with.
But in The Culture series humans have no meaningful role and everything that humans can do machines can do better. Some extreme edge cases may apply to very specific things, but even then, it's one or two individual humans being useful contributors out of countless trillions.
I agree that The Culture is the science fiction future to aspire to - but the place for humans in The Culture is just enjoying how good society is once machines and AI do everything for you. Importantly, the AI's of The Culture like humans and want to promote human flourishing.
If human+ level AGI is achieved then I think it does have to work out the way the parent is stating. There will be no useful role for the vast majority of humans. Humans are intelligence plus the physical capabilities of our bodies. Machines can already exceed our physical capabilities in most things, and if artificial intelligence exceeds ours - what will be left for humans?
If you want to follow that chain of logic, then the thing to do would be to either find a way to prove that a sufficiently AI would be conscious, or to augment human consciousness with AI. Because even if AI's are superior to humans at everything, if they're not self-aware, you still need humans or other conscious animals for anything to have a point - you need someone to experience the stuff that exist. And humans aren't built to be happy existing simply as consumers, so if humans are going to be those experiencers, they need to be involved in creation, not just consumption.
I have decided to replace AI (and robots) with Orcs in your comment.
> We should therefore focus on taxing capital rather than labor
I find it amazing that these "thinkers" don't see the irony. You are "afraid" that people are going to vote you away from your wealth via democracy; but then you feel privileged to decide how this wealth should be taxed. Since Orcs are getting smarter than humans, maybe we should let them decide?
> and Orc teachers that can diagnose and explain exactly what a student doesn’t understand.
Since Orcs are smarter than humans, why bother teaching humans at all? Unless there is a point where a developed human brain could outsmart the Orcs, it seems to be a waste and emotionally guided.
> We could do something called the American Equity Fund.
Any solution that is not universal is bound to fail. These companies can move overseas to cut their tax bill. It's also easier to move now than it used to be, since all they need to move is data; the electronics are already made in Asia.
The reality is, there is no place for humans in an Orc driven world where Orcs are smarter than humans and Orcs can make stuff. Realizing that is the first step to move forward in this new world.
Now try replacing humans with your nationality (e.g. Americans) and Orcs with people of a different nationality from yours to get even closer to reality.
> Since AIs are smarter than humans, why bother teaching humans at all? Unless there is a point where a developed human brain could outsmart the AI, it seems to be a waste and emotionally guided.
taking this point and running off with it. why bother having humans at all? let's all become a simulation and live within a computer.
As humans, we emotionally feel sympathy toward other animals of the human race. Bonus points if they look similar (ethnically).
> move forward
Without honestly realizing what an AI driven world means, any "solution" is just vaporware talks, and will probably mean we are not ready when the shift happens.
> The reality is, there is no place for humans in an AI driven world where AI is smarter than humans and robots can make stuff.
Luckily, that's a hell of a lot further away than AI bulls indicate. Can't even use AI to help with hiring people without it getting cancelled as racist, after all.
Maybe in the Hacker News echo chamber this all sounds plausible.
But where I live in the real world all I see is software getting worse and worse. It doesn't do ANYTHING automatically anymore and can't be scripted, every 'app' is a silo and has to be constantly tended to by humans. Useful functionality is actually removed and more time is spent on UI visuals than making it usable. This is made worse by the tendency of surveillance software to require humans to interact with it constantly in order to harvest information, interactions or to display ads.
At some point this bubble will burst, the ridiculous tech valuations will crash and we'll be back looking for solutions to real problems again.
I think you are on to something. But I don't think the day X will ever come. It is more like the divide between the best software and the average is going to grow and grow.
We will see the amplitude of (critical) failures increase further and their occurrence frequency rising as well. It is and will continue to be evaluated statistically: The cost of actually fixing them vs. the cost of just letting them run wild.
I don't buy this whole idea of technological progress being exponential, let alone monotonic.
The average piece of software made in 2021 does less, runs slower and is more difficult to maintain than the average piece of software from 2001.
Consumer-grade CPUs haven't seen a drastic increase in performance for around a decade either. I'm still running an i7-2600K that was released in 2011. A current Ryzen 7 has maybe 30-50% better performance per core, and 2x the number of cores. In 2000, we were still using 800MHz single-core Athlons.
More broadly, we don't seem to have had a breakthrough on the scale of penicillin, the steam engine, the assembly line or the transistor since the turn of the millennium (with the possible exception of CRISPR).
I just don't seem to be able to share in the optimism of people like Altman and Kurzweil when it comes to technological progress like this.
I don't know about the average software, but my primary tools, Unity 3D and Blender, are both significantly more productive than what I was using in 2001. (I was in the same industry.)
I think a lot of software is just "done" and we see a lot of churn for no real reason. Office software like email, Word, and Excel is just moving buttons around at this point.
That doesn't mean ALL technological progress is exponential; people like Altman or Kurzweil don't make that claim either. But as more things become digitised they will benefit from Moore's law more and more, and progress might become exponential there as well, due to their dependence on computing power.
> Software efficiency seems to be exponential in many places as well
No, in one place. That single example gets touted again and again, but it's just a single class of algorithms, and much of the cited performance gains are from a small set of well-known example problems. (I took Professor Groetschel's linear optimisation class at university).
In compilers, for example, progress is much more "majestic": Proebsting's law [1] was shown to be wildly optimistic even back in 2001 [2]: "The reality is somewhat grimmer than Proebsting initially supposed." More recently, it seems like progress has pretty much stalled for most applications, for example: "Using Stabilizer, we find that, across the SPEC CPU2006 benchmark suite, the effect of the -O3 optimization level versus -O2 is indistinguishable from noise." [3]
But while returns are diminishing, the costs of obtaining those diminishing returns have gone up tremendously ("exponentially"?). For example, Minix's time to compile itself (including the compiler) apparently went from 10 minutes to over 2 hours when they switched from their own compiler to clang.[4] For Plan 9 the same switch caused times to go from 45 seconds to over 2 hours [5].
And with Swift we get even longer compile times, including the compiler giving up on one-liners after a minute, and the result is slower, too!
I could give many more examples, but generally Wirth's law holds: software gets slower faster than hardware gets faster.
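For a sense of scale, here is a back-of-the-envelope sketch (taking Proebsting's law and the classic hardware doubling cadence as rough rules of thumb, which is all they are) comparing the annualized gains implied by compiler advances versus hardware advances:

    def annual_gain(doubling_period_years):
        # Annualized improvement implied by a given doubling period.
        return 2 ** (1 / doubling_period_years) - 1

    print(f"compilers (double every ~18 years): ~{annual_gain(18):.1%} per year")  # ~3.9%
    print(f"hardware (double every ~2 years): ~{annual_gain(2):.1%} per year")     # ~41.4%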
I agree. Not to mention, is it all really that different? I'm a little bit older, and admittedly I was living in the future when I was younger, but 20 years ago wasn't that different for me.
Back in 2001, I was sitting at a computer, even a laptop! I was reading conversations on the internet, watching videos, and listening to music. Not much has changed in 20 years. I have a better pocket computer. I had a smartphone back in 2003. Yeah, the iPhone is better than the BlackBerry, but not by that much.
To be honest with you, if you look at the technology as a whole, I prefer what we had in 2001.
Back then it was completely unusable (as opposed to now, where it's only borderline unusable), but at least it wasn't shoved down your throat by smartphone manufacturers and Google.
I'm assuming you're a non-native speaker? The definite article "the" indicates that I'm talking specifically about voice recognition technology.
Of course other areas have improved peoples' lives in meaningful ways, such as C++11 and video streaming (although I'd argue Zoom is worse than the telephone in almost every instance it's deployed).
I was working on Google Image Search at the time, and recognizing photos of animals was bleeding-edge tech. It's now an AI 101 project using an off-the-shelf DNN stack.
Imagine:
1) I'm the owner of a single family home
2) My income drops to roughly zero because of AI
3) My property values go through the roof
4) I now owe 2.5% of these sky-rocketing property values every year, to be paid from my non-existent income
Doesn't this scenario lead to me either selling my "gifted" shares to pay property taxes, or ending up a renter at best and homeless at worst? I can imagine this proposal leading to greater concentrations of wealth rather than spreading it around.
A variant of this is responsible for the rise of oligarchy during de-Sovietization in Russia. Citizens were given shares of state companies, but people's basic needs weren't met. This resulted in most shares being sold to whomever would buy them for any amount of money or basic resources. This, along with the general power vacuum, led to the rapid consolidation of massive amounts of power in the hands of whoever managed to wield local power for their benefit at the time--those who became the oligarchs.
Bill Browder writes a bit about it in his book, Red Notice. The book is also a great cautionary tale that the whole narrative that we can spread democratic ideals by making business deals with corrupt/despotic regimes is smoke. It leads to more corruption, less moral authority, and further empowered despots.
If your property value doubles and you lose your job, why wouldn't you sell your property for a massive gain, take that money and buy a new house somewhere cheaper, thereby avoiding the tax issue entirely? Seems like a situation where you want to have your cake and eat it, too.
A property's value doubling isn't a one-off event; it'd be widespread and continuous. That cheaper place's value will rise proportionally, and now they have the same problem, except with an overall lower quality of life now that they're living in a worse property.
My property has doubled due to an incoming commuter rail line... which means my taxes will go up. I used to have the cheapest house in town, which means if I have to move, I have to find a smaller house in a less desirable neighborhood, or end up renting, then homeless as the rents rise.
The rich will get richer, and the rest of us will get poorer.
Even without the hypothetical AI effect on income, this is a proposal which will tax you 100% of the value of your property over a 40 year period whilst over the same period YC's LPs and founders will have paid just 2.5% of the [much higher] value of their companies.
Now there are efficiency arguments in favour of taxing land to encourage its use and not taxing productive enterprises or their investors too heavily, but this is pretty extreme...
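A minimal sketch of the arithmetic behind the "100% over 40 years" figure (the home value and growth rate here are hypothetical; the 2.5% rate is the one under discussion):

    def cumulative_tax(value, rate, years, growth=0.0):
        # Total tax paid over `years` if the assessed value grows at `growth` per year.
        return sum(value * (1 + growth) ** y * rate for y in range(years))

    home = 500_000  # hypothetical home value
    print(cumulative_tax(home, 0.025, 40) / home)               # 1.0  -> 100% of today's value
    print(cumulative_tax(home, 0.025, 40, growth=0.03) / home)  # ~1.89 -> more if values rise 3%/yr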
> which will tax you 100% of the value of your property
If it's a property tax, yes. If it's a land tax, no. Under land tax you tax the "ground rent" value of the land, not what's built on it. "Ground rent" is what it costs to rent out your land if it was an empty lot with nothing on it. Property tax and land tax are very different things with very different effects.
The Georgist land value (which Altman suggests might be more practically replaced by a system linked to actual property transaction values) is still going to be a sufficiently large proportion of the value of a typical home to ensure pretty much anyone not living in a multistorey tenement block is paying massively higher tax rates on their home than anyone pays on a YC company.
The entire point of land value taxes is to turn land into a liability. You don't get to benefit from the accomplishments of other people. You only get to benefit from your own accomplishments, e.g. by building a multistorey tenement block and renting it out.
How are we determining that the land value increased because of speculation, and not because the land continues to become exponentially more valuable due to its location in a popular area?
Property values generally rise because an area has a very attractive jobs market. Overall, it's a benefit to society to incentivize people with no income to move out to an area with a lower cost of living. This makes room for more people to move into the expensive area and do productive work, which can be taxed and distributed.
I just find something very cold and socially undesirable in the idea that somebody can spend a lifetime putting in the work to get the home they want, only to be forced out because "society" decides they are no longer productive. I'm no NIMBY—those people shouldn't have the right to stop others from developing their own properties—but I'm not sure I like the idea of economic incentives kicking the least productive to the curb because it's "efficient".
> because "society" decides they are no longer productive.
You have to consider the benchmark. Do people deserve to live in a castle if they aren't productive enough?
Living in a single family home in the middle of NYC requires a whole lot of productivity because you are literally displacing dozens of other people. You have to be as productive as all those people combined to be worthy of replacing them.
So by your accounting, if I purchase a home in Stockton, CA right now, then in 40 years, when I'm old and can't afford the taxes on my lifelong home because Stockton is huge by then, I'm to be kicked out for a more productive use?
More likely, the government would place a lien on your property, and when you die or sell your home, the profits would be used to pay for the deferred tax, rather than simply accruing to you or your heirs.
And why should the home I paid for be auctioned off by the government? How is that fair for anyone but the wealthy? You're saying that only the rich can stay in one place; everyone else has to move to the middle of nowhere or risk losing it all to said rich folk, who will buy my property at a government auction.
In my original comment, I stated "when you die or sell your home", so I don't see why anyone would have to move.
If you think people shouldn't have to pay extra when their land becomes more valuable, I don't see why they should still get the profits when their land becomes more valuable. That's basically socializing the costs, but privatizing the profits, which is obviously a bad thing to do.
If being taxed 2.5% a year counts as being "forced out," then staying in a highly productive area of land indefinitely is "forcing" people who can otherwise move to your house to stay poor. Never mind that the wealthy are the ones who benefit from the elimination of property taxes.
It's society that makes the property valuable in the first place, so it makes sense to pay society back. The firefighters, schools, and social workers in your area need to get paid extra to account for cost-of-living increases. That money should come from the people who benefit the most from their services: the property owners.
> If being taxed 2.5% a year counts as being "forced out," then staying in a highly productive area of land indefinitely is "forcing" people who can otherwise move to your house to stay poor.
“Otherwise” is doing a lot of work here. The people can’t “otherwise” move there because the person isn’t selling, that’s the idea. Taxing people so they are forced to sell is forcing them out. Not taxing people so they are not forced to sell is letting them stay there. You’ve yet to explain why the people who do live there have less of a claim to the house than the wealthier people who would buy it from them.
> Never mind that it's the wealthy are the ones who benefit from elimination of property taxes.
Yes, never mind that, since it's not even true.
> It's society that makes the property valuable in the first place, so it makes sense to pay society back.
This reifies “society” as a thing-in-itself rather than properly considering society as consisting of the people who own the properties and make them valuable by their ownership, maintenance, and use. Then it equivocates “society” with the actual government that collects the taxes and decides how they are spent (typically routing them to their friends who sell goods and services to the government).
> The firefighters, schools, and social workers in your area need to get paid extra to account for the cost of living increases.
Cost of living increases like land value tax? Like how landlords pass increased taxes and maintenance onto their tenants?
> That money should come from the people benefit the most from their services, the property owners.
It's not at all clear that property owners benefit disproportionately from social services, and they also pay for those services through taxes.
I think most of this recent fascination with Georgism is a result of California tax policy and doesn’t withstand a cursory economic analysis.
I agree with you that the Georgist case for LVT is nonsense, but overall I think that an LVT would be an efficient way to raise revenue.
> Taxing people so they are forced to sell is forcing them out.
That's still very much overstretching the word "force." If the government taxes a cigarette factory out of existence, are they "forcing" the workers to move if they need to do so to find another job?
> You’ve yet to explain why the people who do live there have less of a claim to the house than the wealthier people who would buy it from them.
I view this as a meaningless philosophical question. There are so many ways that life can be unfair. Being taxed into selling your home at a huge profit is just not a concern I care about.
> Like how landlords pass increased taxes and maintenance onto their tenants?
This is completely untrue. Rent is solely dependent on supply and demand. Demand is elastic, and supply is very inelastic, even more so in highly desirable cities, so it doesn't get very affected by a tax. If property taxes got passed down to tenants, Prop 13 would have passed the tax savings on to renters, which it clearly has not.
> It's not at all clear that property owners benefit disproportionately from social services, and they also pay for those services through taxes.
Financially, a renter would be fine if their home burns down, becomes surrounded by used needles, or has a terrible school district. The homeowner reaps the financial benefit from these services, so they should expect to pay a share.
> Financially, a renter would be fine if their home burns down, becomes surrounded by used needles, or has a terrible school district.
Thanks for your reply. I think this statement is a good example of how I think your reply misses the point and so I’m not sure we will come to any agreement. Thanks.
> You’ve yet to explain why the people who do live there have less of a claim to the house than the wealthier people who would buy it from them.
You've yet to explain why the people living there have more of a claim to the land than anyone else in society. The model you've proposed is basically "first come first serve" (ie. homesteading principle). Except even that doesn't apply given that some of the land currently in private ownership was previously used by others, who were forced off it, via colonization in North America, and the enclosure acts in Europe. Should we return the land to the descendants of the Native Americans?
Given that land is a scarce good, and access to good land gives substantial benefits to those with access, "first come first serve" simply isn't a workable way to allocate land. Those with land are able to charge rents to those without, and they can pass this privilege down to their heirs, keeping this inequality going.
> It's not at all clear that property owners benefit disproportionately from social services, and they also pay for those services through taxes.
Here's a simple example. Suppose the government decides to build a new transit line going to the edge of the city. The rents and property values along the line will increase. And in most places, income and sales taxes fund at least part of the cost. So renters will pay some of the cost, but get no financial benefit, while also paying increased rents. On the other hand, the landowners will pay some of the cost, but they'll also profit from the increased rents and land values. Essentially, renters pay "twice" for government services: once for the actual service, and then again when the existence of the service leads to higher rents, which are then reaped by landowners.
> I think most of this recent fascination with Georgism is a result of California tax policy and doesn’t withstand a cursory economic analysis.
Basically every economist agrees with the principles behind Georgism, starting from Adam Smith and David Ricardo, and continuing to modern economists like Milton Friedman and Joseph Stiglitz, so I don't really know what you're talking about here.
You should read about the Law of Rent by Ricardo (https://en.wikipedia.org/wiki/Law_of_rent), which basically states what I have said here: land rent is equal to the marginal economic advantage, which is obviously not created by the landowner.
> You've yet to explain why the people living there have more of a claim to the land than anyone else in society.
Generally one would ask that the people proposing a change to circumstances assume the burden of proof; it should be obvious why I can’t remove the food from your kitchen and expect you to justify why I should stop.
If you want a more formal argument, then it's turtles all the way down: I can approach your preferred landowners the day after their acquisition and use the same procedure to expropriate them, someone else can do the same to me the next day, ad infinitum.
> The model you've proposed is basically "first come first serve" (ie. homesteading principle). Except even that doesn't apply given that some of the land currently in private ownership was previously used by others, who were forced off it, via colonization in North America, and the enclosure acts in Europe. Should we return the land to the descendants of the Native Americans?
You are aware that many people argue that we should do exactly that, correct? I’m not aware of many people who argue to the contrary, and I’m not sure there is any use in reciting their arguments here.
The expropriation of the Natives is almost universally regarded as a moral wrong in polite society. It's fine for you to disagree, but I'm at a loss as to why you would assume that I disagree.
> Given that land is a scarce good, and access to good land gives substantial benefits to those with access, "first come first serve" simply isn't a workable way to allocate land.
This is a non sequitur as you’ve failed to explain why one person’s good deal is unworkable for another.
> Those with land are able to charge rents to those without, and they can pass this privilege down to their heirs, keeping this inequality going.
It seems that you’re assuming that inequality is a bad thing. I think inequality is a fact, and the moral implications must be argued for rather than assumed.
> Suppose the government decides to build a new transit line going to the edge of the city. The rents and property values along the line will increase. And in most places, income and sales taxes fund at least part of the cost. So renters will pay some of the cost, but get no financial benefit,
Why aren’t the renters gaining financial benefit from improvements to mass transit in their area? It seems that you’re arguing against the government being able to fund boondoggles from taxes.
> On the other hand, the landowners will pay some of the cost, but they'll also profit from the increased rents and land values. Essentially, renters pay "twice" for government services: once for the actual service, and then again when the existence of the service leads to higher rents, which are then reaped by landowners.
I feel as though you’ve neglected to consider that the renters benefit from mass transit and therefore there’s no reason for them not to be expected to pay for it; and the fact that public infrastructure results in higher land values is covered under the property tax that we already have established. This whole thing could be bypassed by arguing that these types of improvements should be paid by property taxes (excluding sales taxes etc.).
> Basically every economist agrees with the principles behind Georgism, starting from Adam Smith and David Ricardo, and continuing to modern economists like Milton Friedman and Joseph Stiglitz, so I don't really know what you're talking about here.
Yeah, if you think this is a reasonable statement of the economic consensus vis-à-vis Georgism, I doubt we can learn much from discussion with each other. Nice talking, and have a good day. Thanks for the Ricardo link.
I don't really have the energy to respond to all your points (but thank you for making them, they've shown where I was unclear in my writing, or made assumptions that weren't obvious), but I wanted to mention one thing:
> This whole thing could be bypassed by arguing that these types of improvements should be paid by property taxes
This is pretty much the core policy that Georgism advocates for: "Henry George is best known for popularizing the argument that government should be funded by a tax on land rent rather than taxes on labor". The rest is just a way to provide economic/justice based reasons for this policy.
> I now owe 2.5% of these sky-rocketing property values every year
Just as a UBI gives people an income floor, I think that a land tax should come with a personal allowance below which you are exempt.
To do some rough calculations, the US state with the highest population density is New Jersey, at 1,210.1 people per square mile, which equates to 23,038 square feet per person. The average American house size is apparently 2,687 square feet, which is typically shared by multiple people, so the allowance could be comfortably set to maybe 10,000 square feet per person.
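A quick check of that arithmetic (using the figures as quoted above; whether they are the right figures to use is a separate question):

    SQFT_PER_SQ_MILE = 5280 ** 2   # 27,878,400 square feet in a square mile
    nj_density = 1210.1            # people per square mile (the figure quoted above)
    print(f"{SQFT_PER_SQ_MILE / nj_density:,.0f} sq ft per person")  # ~23,038
    # An average ~2,687 sq ft house, shared by a household, sits comfortably
    # under a hypothetical 10,000 sq ft per-person allowance.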
The key thing about a land value tax is that it's based on land value, and not land area, so the allowance should also be based on value and not area, for it to work right.
10,000 sq ft in Manhattan is much more valuable than 10,000 sq ft in a rural area, and so it doesn't make sense that both should be treated equally.
As I understand it, the fund would also pay cash. You would get a share of the 2.5% of taxed equities and a share of the 2.5% property taxes. Owners of property with a value over the average would be essentially paying everyone else.
Unfortunately I don't see price of healthcare coming down, just more doctors out of work. AI in healthcare will be highly regulated, probably monopolized, and tons of artificial barriers to entry created by corporate lobbyists. The AI quality won't make much of a difference: the winners will be the stodgy bureaucratic companies with political influence and just enough "AI" to be useful. WebMD will probably go offline and things like it will become illegal, "for the public good." Medicine isn't expensive because doctors have gotten more expensive; it's expensive because the industry as a whole needs to maximize its revenue, and AI changes nothing in that respect.
Entertainment will keep getting cheaper and cheaper, but for services people actually need: education, healthcare, and soon food, it's going to continue to be "how much can you afford to pay?" Or worse, "how much can the government afford to pay on your behalf after we've milked you for everything."
I'm not sure that entertainment will keep getting cheaper. Where I live, sports and concerts seem to be getting more expensive for the top bracket in each case (pro sports and major international musical artists). We're subscribed to four TV/film streaming services and a music streaming service, that when combined start to add up and aren't getting cheaper. If you own the most recognised franchises and stars, people will pay.
YouTubers would be a counterpoint to that, I guess?
I think prepared food should at least get cheaper as it's dependent on commercial kitchens and staff, two things that will see a lot of change.
I predict counterculture AI will be a thing, and it will compete with corporate lobbyists at their own game.
WebMD may go offline, but countercultureMD will come online in its place, and it will be more data driven. I don't know whether that's good for medical care; maybe it is.
I still don't really know about AI taking over the world. The most expensive things in my budget are housing, car, healthcare, childcare, flights/hotels, and food. Does AI really change that much?
There are definitely too many over-educated people out there already; I'd think that has more of an impact, and sets up more disappointment, than the bots do.
Phrasing things as an optimization problem can result in better, more efficient arrangements than how things are presently, but only within the limits that people are willing to accept. It also depends what we're optimizing for - if we naively set it for "maximum number of humans fed and cared for", we really are all going to be eating bugs and living in pods.
Sounds like an instance of the No True Scotsman fallacy to me, friend. What is capitalism if not the systems that purport to be it? It's like saying "communism has never been tried". They tried _something_, and they certainly labelled it communism.
That said, I do find the Equity Fund idea interesting, though it's not entirely clear what this looks like in practice, especially for the unbanked, the mentally ill, homeless people, etc. who might not really know what to do with shares, since some of them don't really know what to do with cash, either. Seems to me these are the people most in need of uplift, no?
After all, I'm not too worried about most lawyers getting automated out of a job anytime soon, certainly not to the extent that I'd want to see the economy overturned for the likes of them.
Ah, fair. Yea, perhaps the capitalism we have tried is the closest that is practically possible.
I think we overweight how many people in society do not know what to do with cash. I think we may say that we believe they can do something better with it and sure perhaps, but it is bold to believe one knows what to do better with another's resources.
The time you spend in your car is more valuable than the cost of the car itself (including gas, repairs, etc). So insofar as you can free that time with an autonomous vehicle, it can absolutely slash the total cost of transportation.
To a lesser extent, similar things can be said about your other examples.
Robots building houses on cheap land, enabled by mass work from home, would make housing much cheaper. Robot doctors make healthcare much cheaper. Robot teachers make childcare/education much cheaper. Robot farmers + GMO make food much cheaper.
If a house were all I needed, I could just move to a location some 20 km away and get one for €60k. You would have to do your own renovations, but isn't that part of the deal when you buy a house? I will concede, though, that automation increases productivity and gives us access to more goods and services. It's absolutely necessary.
I don't think building costs are the main issue with availability of housing. You need the land to build on - and not just any land, but land in desired locations.
How are the artificial mega cities in China doing? Didn't they build several cities from scratch that are supposed to house several million people each?
Plenty of people would like to live somewhere rural but can't because of work. Obviously not everyone falls into this category, but lower demand in cities = lower prices.
> technological progress follows an exponential curve
This is a mantra that I keep hearing, but if I compare the progress from 1900 to 1960 and then from 1960 to 2020, it almost seems like it has been flattening... Sure, we have internet and fancy computers, but the progress from 1900 to 1960 was immense: air travel, electrification, proliferation of automobiles, immense progress in medicine, space exploration, nuclear energy. Even the MOSFET transistor was invented in 1959.
Not saying there wasn't any progress between 1960 and 2020, but it sure doesn't look "accelerating"...
He's rehashing Kurzweil's analysis of history, which is to broadly fit a few data points to show that exponential growth is baked into the universe, and then to go on to claim that the next 100 years will deliver something like 20,000 years of progress at the 20th-century rate.
But I don't see that the 2000s are progressing any faster than the 80s and 90s. It looks fairly linear since the 50s or 60s overall. Smartphones and deep learning are incremental progress over what existed before.
Well, it's hard to view an exponential when you are on it; because the time steps are so small, it can feel linear. As a 24 year old, I can see the effect internet+smartphones have had on people my age. We navigate information so much more efficiently than older generations, not just from technical skill, but from the standpoint of how we frame mental models, ask questions, etc. The early 1900s were a revolution on atoms, and the past couple of decades were a revolution on information. It is harder to see and measure in general, especially when we are still in the thick of it.
While smartphones certainly are a boon to quickly navigating information and accessing lots of ideas, PCs did that for people in the 80s and 90s to an extent, and the internet has its roots in networked computers across universities and other institutions in the 60s. Incremental improvement means that overall what we have today is just a few decades worth of steady improvement in hardware and software from the initial implementations.
And while the culmination of the computing revolution today is disruptive, compare that to how many technologies and sciences were undergoing disruption in the late 19th century through the mid 20th. Of course computer hardware lends itself well to exponential gains, but it's less clear how many other things have done so over the past several decades, including software, which seems to be a bit more linear in its improvement. The hardware today is vastly more powerful, but the software often does not take full advantage of it.
> In the next five years, computer programs that can think will read legal documents and give medical advice.
Bad medical or legal advice is completely possible. It exists now.
Giving good medical or legal advice requires, at a minimum, being able to carry out a full conversation to investigate the problem, including understanding things not directly related to the field. There's no sign of getting that any time soon.
Efficiency improvements in an area like the law may also result in people imposing new burdens, eroding the efficiency gains. Laws that once would have seemed too burdensome will no longer be seen as such.
There are some really smart ideas in here, and some really smart assessments of existing policies.
The thing I was most taken aback by was Sam's suggestion to tax privately held land and capital (as opposed to taxing labor).
I would love to have Sam and PG go toe to toe and discuss how Sam's proposal is different from the wealth tax post PG made. I don't immediately see how Sam's idea avoids the "wealth tax compounds" problem (his words not mine) that PG is worried about.
I don't see the issue with the wealth tax compounding, because the wealth also compounds.
That's exactly the "problem" with wealth (from the perspective of society's growing wealth inequality). Wealth compounds much, much faster than income grows. Someone who inherits $3 million (not much from the point of view of the wealthy) can live comfortably on the growth alone while still compounding their wealth further every year.
The only way a wealth tax would compound faster than the wealth itself is if it were larger than the growth rate of the wealth. And since that growth has averaged ~8-10% over the past few decades, a 1% tax is not going to eat into a person's wealth over time. It's simply going to slightly slow that growth down.
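As a sanity check on that claim, here is a minimal sketch of how an annual wealth tax interacts with compounding growth. The rates and the 30-year horizon are illustrative assumptions of mine, not figures from the thread (only the $3M starting point echoes the inheritance example above):

```python
# Minimal sketch: a flat annual wealth tax applied on top of compounding
# growth. Rates and horizon are illustrative assumptions only.

def project_wealth(initial, growth_rate, tax_rate, years):
    """Apply growth, then the wealth tax, once per year."""
    wealth = initial
    for _ in range(years):
        wealth *= (1 + growth_rate)   # investment returns compound
        wealth *= (1 - tax_rate)      # the tax skims a slice of the new total
    return wealth

start = 3_000_000  # the inherited $3M from the comment above
print(project_wealth(start, 0.08, 0.00, 30))  # ~$30.2M untaxed
print(project_wealth(start, 0.08, 0.01, 30))  # ~$22.3M with a 1% tax
print(project_wealth(start, 0.08, 0.10, 30))  # shrinking: tax exceeds growth
```

With growth above the tax rate the pile still compounds, just more slowly, which is the point being made above; only a tax rate that exceeds the return actually erodes the principal.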
Land Value tax has a long history in economics; it acts very differently from capital and wealth taxes because Land really behaves differently from those two classes of things. I highly recommend reading Henry George on the subject, who originally popularized the idea.
Note "Land Tax" != "Property Tax." Land tax taxes only the value of the underlying "ground rent", NOT the value of the improvements (stuff you build on land). Property Tax taxes both.
Sam's proposal doesn't avoid the simple arithmetic PG notes about a wealth tax compounding.
And yes, there are many important questions for society to answer on this, e.g. how much does it disincentivize entrepreneurs if they have half of their wealth taxed away over the decades, compared to the current taxation system?
Just a thought experiment: let's say we take Sam's ideas alongside something like UBI, where everyone has a baseline of income provided by the society they live in.
You succeed wildly, and get rich as an entrepreneur. Sadly, in a generation or two, your grandchildren will be back with the rest of the plebeians, despite grandpops launching YC, writing books on art and coding and creating a bunch of amazing companies. But your grandkids are now not motivated by escaping the poverty they live in, but by a simple desire to live differently than the other normal people out there (also living on UBI).
This seems a lot like what happens in places like Russia or Venezuela or Brazil, where the best and the brightest (often from the upper crust there) flee their countries to make it big in Europe, the US, or the Middle East, though not always because they have such horrible lives there.
Except that, unlike entrepreneurs driven by a mindset that has them feel like it is never enough, these ones are just trying to escape the ennui of suburbia, and slipping back into that isn't so awful. The alternative drive of escaping poverty does something very different and rapacious: see Tyco's Dennis Kozlowski: https://en.wikipedia.org/wiki/Dennis_Kozlowski, who despite enormous wealth couldn't stop himself from having his company pay for even his rugs.
It's like the best of communism, and the best of capitalism!
</joke>
Seriously, isn't there an interesting space for entrepreneurs in a new world like the one Sam is describing?
In the Soviet Union, there was no such problem - brilliant people by and large were happy to become scientists and engineers, and scientists and engineers got into the planning agencies and into the government too, in droves. Same in pre 1989 China.
I don't see why the USSR and Mao's China were able to retain (and sometimes even attract) these people, but the society you're describing wouldn't be able to.
Actually, after some digging, I found something Lenin wrote about what to do with the enterprising kind of people - he wanted them to be put to use in organising projects and production, whereas the Kozlowski type were to be ignored (or worse).
So I guess the solution he found was to allow them to create big organizations and projects, but instead of paying them in money, they were paid in social status and achievement. If that worked to retain people like Kolmogorov, Ilyushin, Kalashnikov, Korolev, etc., couldn't simply offering socially distinguished positions to enterprising people be sufficient?
The brilliant people that stayed in the USSR had no choice - they were kept there by force either directly (not allowed to leave) or indirectly (leave but your family will pay the price).
But they all wanted to leave. The more you knew how much better your life could be in the west, the more you hated staying.
See, I don't know if that's true. The main counter-evidence to your hypothesis is the defections of scientists and engineers to the USSR, the number of brilliant people who regretted the fall of the USSR, and so on.
For example, one of my brilliant math teachers was from a Soviet state, and had the opportunity to leave all along - he only did so as the USSR fell and he did not see any prospect in the East anymore.
Patriotism is a strong emotion. But beyond that, many brilliant people in communist countries really did enjoy a very elevated social status - if you look at what children aspired to, being a scientist or engineer was really up there. And as far as job security and research freedom go, for example, there was often quite a bit of it. On the other hand, you had drastically less freedom, but it doesn't seem the ones who chose to stay valued it as much as we would.
See, I know that is true. I was unlucky enough to grow up behind the iron curtain. I know the situation directly from the choices faced by my parents and their friends, and after its fall I watched the best of my teachers slowly but surely emigrate to Canada or Western Europe.
You can't really understand how it was unless you lived it. First of all, patriotism ceased to exist, except for propaganda. Struggle and fear - abject fear - replaced patriotism as the driving emotion. We ended up hating our country - we're still trying to re-learn how to love it 30 years later.
The ones in elevated status were collaborating with the authorities and the secret police. They ratted on their friends and family. Everybody hated and feared them because of that.
The freedom was, of course, gone, and we got used to that. Freedom is just not that important when you're hungry. But the feeling that best described our state of mind then was: hopelessness. We did not, could not hope for a better future, for better times for us or our children. We could not see any escape, any chance at change. Because as individuals, there was nothing we could do. We were completely robbed of our agency, of our power, of our rights. The past, present and future was a single color: gray.
People who somehow went to the West came back changed. They just could not believe one could live with so much freedom, choice and wealth. Their stories inspired others. I was maybe 12 and I remember clearly dreaming up ways of running out of the country, to my father's absolute horror. I would cry rivers if my own children had to go through that.
The issue is, you're just someone on the internet. The real people I know disagree with you, and so do statistics, so while I completely empathize with you, I can't agree.
Use your logic then. Ask yourself who was defecting where. Ask yourself who had the better standard of living and who had the gulags. Ask yourself why so many countries and their hundreds of millions of inhabitants violently overthrew the communist regimes at the end of the '80s.
Better yet, go ahead and pay a visit to the communist success stories of N Korea, Cuba or Venezuela. They are still around. Maybe they will convince you.
Then finally look around at the very tools you are using. The car you are driving, the furnace heating your house, the computer you write on. They are all success stories of capitalism. Ask yourself where are the success stories of communism. What things it built, what hard, concrete, useful stuff it created. Believe the proof presented to you by the real world - and reject the propaganda.
The majority of the citizens of the USSR were against its dissolution - but that didn't matter, it was mostly an elite affair.
I look at statistics and at what people I know who lived there told me. Most people who were adults at the time seem to regret the fall of the Soviet Union, and most people from post-Soviet states had to leave when they couldn't make a living anymore as the economy collapsed - despite post-Soviet states being squarely in the middle of what you can expect from life on earth, and above the median in all relevant metrics.
So certainly, it wasn't perfect, but it was very far from hell on earth, and squarely above the middle.
Your argument would also be stronger if you didn't classify Venezuela, a country with a relatively bigger private sector than France, as a communist country. Cuba - I've been there - was far from bad, much better certainly than where I came from, and despite debilitating American sanctions it has a GDP (PPP) of over $21,000, which is quite impressive: above the average for the world, let alone Latin America, and by far the best of any country sanctioned by the US. And North Korea abandoned communism long ago for its own "Juche" ideology, which is basically Strasserism, preaches the superiority of the Korean race, and now allows private markets too.
Your stories are also quite telling - the computer I'm using was only possible under capitalism because the government gave itself the power to control ideas, as capitalism is otherwise incompatible with large-scale intellectual innovation. I drive no car, as it is far inferior to good-quality public transportation plus a walkable neighborhood, and my house is electrically heated by 100% renewable energy because we had the good sense to nationalize the power grid and make massive investments in renewable energy (which we now produce at costs lower than any free market for energy, renewable or not).
The actual evidence, when I try to look at it critically, shielding myself from all forms of propaganda (in the classical sense of the word), makes it clear that reality is far more nuanced than the common wisdom in these circles. One of those results, after careful study of history and data, is that the USSR did not, in fact, have much of an issue retaining engineers and scientists, and relative to its size and prosperity did an okay job of innovating and keeping its population happy. Far from the best, but much better than most.
As far as I can tell, it's not different. Unless you can consistently generate the 2.5% of the property's value needed to pay the tax each year (in a world where AI has sent incomes to zero!), you'll eventually lose your property.
Another way to look at a land tax (instead of as a wealth tax) is that it's sort of "user fee". You're paying society "rent" (a land tax was called ground rent by Adam Smith) for depriving the commons of that land.
Typically, land value tax is based on the rental value of land rather than the market value, so it's even more in line with this model.
We want to move to a society where land is not treated as wealth, but rather as a resource to use (because that leads to less land speculation, and more productive use of land), so this model is fine.
The estimate of a 15% loss of market cap due to the 2.5% cap tax is laughable. That is effectively a reverse buyback of 2.5% every year. It would leave many companies with zero or negative profit. Any company with a P/E above 40 immediately loses money; at 20, profit is chopped in half. The values of these companies would be reduced by at least 50%.
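For what it's worth, the arithmetic behind that claim is easy to lay out. A small illustrative sketch (the P/E values are my own examples, not figures from the essay):

```python
# Rough arithmetic behind the parent comment: a tax of 2.5% of market cap,
# re-expressed as a share of annual earnings. Numbers are purely illustrative.

TAX_RATE = 0.025  # annual tax as a fraction of market cap

for pe in (10, 20, 40, 80):
    # market cap = P/E * earnings, so the tax equals (TAX_RATE * P/E) of earnings
    tax_as_share_of_earnings = TAX_RATE * pe
    print(f"P/E {pe:>2}: tax = {tax_as_share_of_earnings:.0%} of earnings")

# P/E 10: 25% of earnings
# P/E 20: 50% (profit roughly halved)
# P/E 40: 100% (the break-even point mentioned above)
# P/E 80: 200%
```

In other words, the higher a company's valuation relative to its current earnings, the larger the tax looms as a fraction of those earnings, which is exactly the parent's point.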
That's just an argument for why wealth taxes should never go above 2%. If you can't double or triple your wealth in 60 years what are you doing with it?
This essay doesn't address any of the key problems often raised when talking about AI and UBI.
Here in Australia we are already seeing one of the effects of very cheap stuff. Skilled labor is significantly more expensive than buying things, so anything that has to have an Australian involved is ridiculously expensive. Calling the plumber to fix a drain? That's a week of groceries or perhaps a new TV.
At some point soon there is going to be a fairly significant upheaval, not just when AI takes some people's jobs, but when it's just not worth it to pay somebody $200 an hour to do something for you. I think, to some extent, people's time is linked to the price of products you can buy.
Eventually, plumbers won't need $200 an hour to buy their own things, so they can drop their rates. Wages everywhere might fall.
Taxing land sounds fair enough, but I was under the impression we are already subsidizing a lot of farming anyhow, and since we are racing to the bottom on food pricing, there is not much profit to be made on food.
And aren't we supposed to be reclaiming land to turn back into national parks, planting trees, and capturing carbon?
> Taxing land sounds fair enough, but I was under the impression we are already subsidizing a lot of farming anyhow
I'm not sure what you're referring to here. Taxing land doesn't really have much connection to farming specifically. In fact, most high-value land is in urban areas and cities (NYC, SF, Silicon Valley, etc.), so farm land is mostly irrelevant when talking about a land value tax.
> And aren't we supposed to be reclaiming land to turn back into national parks, planting trees, and capturing carbon?
To do so, we need to use land more efficiently: less sprawl, and more building up/dense. And a land value tax incentivizes precisely that, since in high-value areas the tax would be high enough that dense buildings would be the only financially viable option. This increase in density in the core means we don't need to develop other land (and can even reclaim some of it for parks, as you suggest).
The GDP per capita of the world is only about $12,000. That's technically enough for "what we need" (food, clothes, and shelter for a family of 4), but on a per-person basis it's below the federal poverty line in the US.
I don't think it's correct to say that we had enough wealth a long time ago. There are a lot of places in the world that are still desperately poor by any measure, not just by the standards of the wealthy. And although it's undoubtedly true that the wealthiest few deciles could give up many luxuries to provide more for the poor, it's much more arguable if there is enough for everyone to have enough without generating much more.
This isn't a realistic way to project consumption, though. A single adult living alone has more expenses per person than a family of four under one roof. An elderly couple will not get by on $24k. (And half of people are not under 18, 21, or 25 as you may prefer to define childhood.)
Healthcare is around 10% of GDP (bar some countries that have organized their healthcare system inefficiently like my native Germany).
That still leaves you with $10,800. Liability insurance here costs around €50-100 per year for a family. What more insurance do you need? GDP per capita also includes pensioners, so you do not need to count pensions into this.
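To make the arithmetic in this sub-thread explicit, here is a tiny sketch; the GDP figure and healthcare share are the rough numbers used above, not official statistics:

```python
# Back-of-the-envelope version of the per-capita arithmetic discussed above.

gdp_per_capita = 12_000      # rough world GDP per person, USD
healthcare_share = 0.10      # ~10% of GDP going to healthcare, per the comment

per_person = gdp_per_capita * (1 - healthcare_share)
household_of_four = 4 * per_person

print(per_person)          # 10800.0 left per person
print(household_of_four)   # 43200.0 for a family of four under one roof
```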
My understanding is that reducing a single person's income from 12,000 to 10,800 is a big deal. And in any case, my original point was that this is barely "enough," and this is assuming a perfectly efficient and equitable distribution. It may actually, technically be enough, but it's poverty. It's only not poverty if you also receive the income for people you take care of who don't actually need all of the income (like kids). I don't think arguing over whether $11,000 is as good as $12,000 is particularly meaningful because I don't think $12,000 is enough either, and my main disagreement with the original poster was with the statement that there has been enough for a long time. That seems unequivocally untrue to me.
These arguments always ignore the fact that the only reason $X GDP is generated is because people are incentivized. Children don’t produce, so someone else is producing for them. If you tell people they will get $12k no matter how hard they work, they won’t work. We’ve tried socialism.
> If you tell people they will get $12k no matter how hard they work, they won’t work.
This is obviously not true, since in every capitalist society the hardest-working people already make the least money, and the laziest people in employment are already paid the most.
Capitalism has already proven that financial incentive has no correlation to how hard someone works.
What?? It’s the exact opposite. Those people HAVE to work that hard precisely for the reason I mentioned: they won’t be paid otherwise. This is precisely the issue. If you paid people irrespective of how much they work, they won’t work.
Your example is evidence of my statement, not refutation.
Do I really work harder as a software engineer than the garbagemen who bust their asses doing a dangerous smelly job? Or teachers wrangling a room full of kids?
I think you're basically agreeing with the comment you replied to. They're saying people who are financially secure (possibly like a software engineer) are less incentivized to work hard. Someone who really needs the money is more incentivized.
If you got the same pay whether or not you showed up to work each day, you'd be less incentivized to show up.
I don't remember from school the bit where Mr Socialism said "let the workers seize the means of production so they can shut down the means of production, because they are lazy and stupid". "define:Socialism"[1] - "Any of various theories or systems of social organization in which the means of producing and distributing goods is owned collectively". NB. that it involves production, and is /not/ "Systems of social organization in which lazy people get paid for doing nothing".
> "If you tell people they will get $12k no matter how hard they work, they won’t work"
Have you never seen or heard of volunteers? There are countries where the unemployed get money, e.g. the UK, and yet most people still work. How does this fit into your claim?
Yes, they are obviously doing the work for other reasons.
> There are countries where the unemployed get money, e.g. the UK, and yet most people still work. How does this fit into your claim?
You aren’t getting it. Those people still work because they can earn more than what the unemployment is. If all of a sudden you said even if you work, you’ll only earn the unemployment benefit, then nobody would work.
Guys please stop deciding you want to be upset about a comment and then trying to backsolve your rationale.
> "Yes, they are obviously doing the work for other reasons."
Yes, and that disproves your claim that people only work for money, and without more money people won't do more work.
> "You aren’t getting it. Those people still work because they can earn more than what the unemployment is." If all of a sudden you said even if you work, you’ll only earn the unemployment benefit, then nobody would work.*"
Volunteers earn no extra money for their work, and yet still work. Many people work unpaid overtime out of loyalty for their employer / coworkers / customers / patients, many people work out of passion and interest and hobbying ("starving artist" trope).
And, again, you're propagandising "Socialism" changing from "people get money without working" to "people can't get more than a fixed limited return on their work", which again it isn't. But let's go with that definition - if everything collectively owned, the more stuff there is the more stuff is collectively owned, so the more benefit everyone gets. When the Federal Government builds more roads, you personally get more roads you can drive on as a benefit. So even if Socialism was "you can't earn more than the minimum wage {because the collective takes it from you!}", you are part of the collective, so it still wouldn't be the case that people got no more benefit for producing more, as you claim.
GC claimed that food is all that is needed, and that their wants were satisfied by the addition of the other two things.
The fact is that the average human is not actually content to be one notch above animal with "food, sex, and purpose". That is why we have progressed much further than just accepting those basics as all we need. But I think our improvements on those things do provide enough value for the average person to be happy.
- Tasty food
- Safe sex (and relative ease of reproduction)
- Multi-variate / chosen purpose.
Plus other methods to remove annoying friction from your life:
- Optimized shelter
- Optimized travel
- Consumption of various raw goods (not for food and not for shelter). e.g. 3D Printers!
Human wants for things are endless. Human desire for extra time, even more so.
Human needs are subjective to each and every human. Why does Bezos get up in the morning? If you'd like something aphoristic, you may answer that in many creative ways.
There is a lot of hubris in saying we're pretty much maxed out now, thanks, and it's time to stop.
I’d suggest instead we need to make smart choices, and that usually smart answers are not found at the far extremities.
This “we have all we need” bit reminds me of scientists saying physics was over in the 19th century, combined with a bit of Thomas Malthus in such a way that we all die unless we halt innovation. It reminds me of that, but I’d be overstepping to put those words in your mouth. After all human intention has endless range to match the rest.
>Human needs are subjective to each and every human
After a certain point that's true, but I think you miss the point of the parent. There are many, many hungry people in the world, many people without shelter, and further millions who have no access to healthcare, education, or even clean water. These are not subjective needs.
Those people are in that position in spite of the fact that we could, with the wealth we have, feed, house, and provide health care and education for each and every one of them.
> Those people are in that position in spite of the fact that we could, with the wealth we have, feed, house, and provide health care and education for each and every one of them.
I don't think this is really true. Part of the problem is that the wealth you think could be allocated to this problem was only generated by incentivizing people to build it with compensation. If you turned around and took it away from the people who built it and gave it to people who did not build it, you would disincentivize the creation of more wealth, and the wealth you reappropriated would deteriorate, because you neglected to create an incentive and maintenance infrastructure to keep it working. This is exactly what happened when we tried this, starting with poor people in our own communities.
Public housing projects and welfare programs are money sinks that are notoriously counterproductive at meeting their stated needs; and we can’t get people to consider the structural imbalances that result because the need for these programs is an article of faith and among the believers the only acceptable reasons for their failure are “people who don’t agree with the program” and “lack of funding.”
Ah, thanks. I felt that was a separate point to the idea of stopping progress because we have enough. I didn’t realize that was the main point?
Yes indeed, we have enough to ease those burdens and it’s a terrible thing that they continue!
I've spent what is likely way too much mental energy wondering about this, and I'm no closer to an answer. Is there any limit to lifestyle inflation? Is it possible to have growth that simply outpaces what a human could possibly consume? Intuitively it seems obvious that there should be something like that, but in the 1800s our growth today would seem like it should be enough.
Why should there be a limit? If you can command robots to build anything you can ever imagine, who doesn't want his own Versailles - with impressive towers like the Burj Khalifa? Who doesn't want to fly their jet or space rocket just for fun to the moon and back?
And humans will be humans. There will be new games, like drone wars on distant planets, where any production capacity and energy will be used. And since everything is very efficient, there will be no food left for birds or even poor humans.
I don’t. There is the consideration that more money comes with more problems. You can say that more money would fix those problems, but at the end of the day, you still had to spend energy thinking about it.
You can quickly approach a situation where time is the limiting factor. In this case I think that the private jet or extremely fast transportation allows you to get some time back. Beyond that you might have one or two projects that you really enjoy, like a palace, but you don’t really have enough time to handle much more. Elon is a good example: he’s got a few projects that he really cares about and does them at an extreme scale. He effectively has unlimited resources but he would not make any progress on his three major initiatives if he was much more fragmented than he is.
And if you run this to the extreme, the true cost of overconsumption creates the problem of environmental damage and negative externalities on others, which can leave you winding up like Marie-Antoinette.
Plenty of other people are happy with minimalism. And that can be hard for some folks to understand if they aren’t minimalists.
I agree. All I want is peace. Time with my family and friends, a garden, time to read, that kind of thing. Why anyone bothers with loud cars or big houses with huge lawns is beyond me.
I don't; one can only drive one car at a time, one can only be in one room at a time, one can only eat one stomach full of food in a given period, one can only read/watch/experience at most 24 hours of media in a day.
Once your Versailles is big enough, you won't be able to walk it in a day. Once it's bigger than that, you won't be able to drive its length in a day. Once it's bigger than that, you won't be able to travel its length in a lifetime at light speed. There's a limit for you. But you likely won't want to spend your entire life travelling at lightspeed to the far wing of your house, then die. So that drops the limit enormously.
What does it mean for it to be "your Versailles" - could you draw or depict Versailles in detail from memory? How will you verify that your clone is exactly like the original? Do you care? Do you really mean that you get to design your own mega-palace? So now you spend your life choosing furnishings and layouts and architectural details - hope you like that kind of pastime, because there's a lot of it. But if you don't like that, why bother having "your Versailles" instead of going to look at someone else's for a few hours? Or look at a picture, for that matter? What are you going to do with your Versailles? Are you a king or queen with courtiers and subjects so that you can have extravagant parties? Are you going to organise the food and cleaning and heating that the robots do?
How old are you? Were you around when computers ran at kHz speeds? And now you have effectively "infinite computing power", you spend your time commenting about Bitcoin on HN - why aren't you simulating your own Virtual Versailles and flights to the moon and stuff? Because it's not that interesting now that you can do it? Endless hedonism is boring.
> "Who doesn't want to fly their jet or space rocket just for fun to the moon and back?"
OK, that's taken a week of sitting in a tiny box waiting and doing nothing. What about the rest of your entire life?
Listing fancy-sounding things is what religions do to entrap people with dreams of heavenly afterlives. All you have to do is look around you at all the things you once wanted, and suddenly don't once you attain them - the drawer of abandoned Raspberry Pis is a common one for HN people to notice. Then start to internalise that you can have any film ever made delivered to you from Amazon for a few bucks, and you don't; you can't think of a film you'd rather watch than comment "Make your own exchange." on a Robinhood thread on HN. Got a wardrobe of too many clothes? Got boxes of unused stuff? A garage of tools and spares?
Endless hedonism is boring until there is competition.
DenisM mentions status in his comment. Status will demand a Versailles bigger than can be crossed at light speed in a lifetime, just to impress. There will be galaxies full of combat drones, just to keep the balance in fighting power.
I already have everything in the Universe outside your lightcone as my personal Versailles most remote wings, and you can't prove otherwise. My robots are on their way back and information about them will arrive with you approximately a second after you die, whenever that is.
See what a pointless status grab it is? If it's outside all possible knowledge, it may as well be lies (it's not though). You can play Elite: Dangerous if you want a galaxy full of combat, and it's happening right now and, better than the rest of the Milky Way, there are actual players and ships and things and not silent void. The main lesson I took away from Elite: Dangerous is that the Galactic PowerPlay between all the major factions can never end. If it ends, if one side can dominate and win, there is no way for another faction to recover from that without a reset and restart, like all games - play, end, restart.
> "Endless hedonism is boring until there is competition"
Competition doesn't need ever increasing resource use and hedonism, it's not the resource use which captivates people (but it can make a spectacle); competition is fine with animals running, with kickball, with Chess - 32 pieces on 64 squares creates world champions, millionaires, tournaments, audiences, lifelong obsessed people, gambling opportunities, it doesn't need galaxy spanning resource use. The thing about competition is that you can't be Usain Bolt or Magnus Carlsen or John Carmack just by throwing more resources at it. At the point where you can say "I have a Versailles on every planet in the Milky Way" and someone else says "so what, everyone has", there's no competition there. If you claim you can win the Tour de France on a bike in a small region of Earth, people will sit up and take notice.
What if it's not about keeping individual humans comfortable with nice experiences but about growing the amount of awareness? We think of humans as a resource problem but they are also the source of innovation and creativity. Will resources be limited if there is the chance to grow the number of aware beings to new heights with the potential to reach new levels of civilization?
Modern, first-world society does seem to be reaching some sort of inflection point that might point to a "top" (of physical consumption at least) as we get more efficient and more stuff is moving digital. That's not to say there is really anything conclusive, but it is interesting to think about.
Status is a big deal. A wealth differential allows one to order other people around, building up status. Those others then feel the need to get out from under the yoke, or at least to be in the position to order around other-other people. All of this requires continuous wealth accumulation to which there is no limit. You would have to redefine status to end this game.
Also note that a situation where humanity's productivity is expanding is way better from a social standpoint than one where it is stagnant. The first allows positive sum games to exist. The second is a zero sum game. Of course there is a limit to growth as the reachable universe is finite.
If the goal of wealth accumulation is not actually to be better off but to be better than your neighbour (as it is), then positive sum games become zero sum games functionally.
So no, that's not really a solution either.
But even then, the goal isn't to limit human productivity, is it? It's to limit how much we work and lifestyle inflation, which doesn't require growth to go to zero.
"A house may be large or small; as long as the neighboring houses are likewise small, it satisfies all social requirement for a residence. But let there arise next to the little house a palace, and the little house shrinks to a hut. The little house now makes it clear that its inmate has no social position at all to maintain."
We’ve shifted the goalposts from “providing every human the dignity of a home” to “providing every human a home that doesn’t look too shabby when compared to the highest bar in society”
Certainly, the first is enough, I'm just trying to answer why it is that people increase their consumption without end while it doesn't really make them happy.
See that's exactly the thing. We can point to excess today and say "How could that be sustainable" but it seems like the novelty would wear off, no? Like in some hypothetical future where resources are 1000x more available, would people launch 1000 cars into space? It seems unlikely. Somehow I feel like there is some inelasticity to consumption that we just haven't reached yet. I'm not quite sure why I feel like that though.
I don't think launching sports cars into orbit is inherently any more wasteful than say the development of the Deep Blue chess computer. It may have been a vanity project, but the ultimate goal was to test a proof of concept.
I didn't say it was wasteful or make any judgments about it. It's just a fact that a level of lifestyle that allows a person to launch his personal sports car into space has been achieved.
It's a level of lifestyle that allows a person to donate his personal sports car to replace an inert mass, because he'll buy another car.
While he and many others could buy personal space launches, that launch is not a demonstration of such. He wasn't paying for it, and it wasn't for him.
I could afford to donate a $100k sports car to be launched into space if I really wanted to. Lots of people here could. How many billions of dollars short of being able to set off a chain of events that actually make that happen do you think I am?
That's the thing, being at an influential and part-marketing position at a space company isn't inherently a lifestyle thing. It's a rare opportunity but I could easily see a world where some engineer's car went up instead.
I mean I don't think that quite qualifies as lifestyle inflation. A Roadster in Space just costs Elon $100,000 since his business planned to launch the rocket anyways. That's nothing compared to the price of a megayacht.
Except getting a $100k car shot into space also probably requires personally building the company that is "launching the rocket anyways." The SpaceX waiting list to launch junk into space for laughs is a very exclusive club.
I would say the Dear Moon project is more of an example than launching the car. The car was just for an initial test flight. It took the place of a block of aluminum like in one of the Falcon 1 launches.
The question isn't lifestyle in the abstract. Everything that people buy today is specifically intended to impel more buying, whether that's cars, houses, or sugary foods. The situation is incredibly different from a simple "everyone gets what they need" society.
But again, it seems like there are physical limits to how much people can consume. Like, we can agree that if everyone had a machine that could magically summon up to 10 thousand cubic meters of material every day, we realistically would have universal material abundance. Even if we exceeded what the boxes could make, one or two dedicated to making more would result in runaway exponential growth that would speed up much faster than human consumption could.
Obviously that's the extreme case, and the question is how close to universal replicators do you need to come before people can't want more things fast enough.
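As a toy illustration of why even a couple of replicators dedicated to self-replication would swamp consumption, here is a small simulation. Every number is made up for illustration; it only shows the doubling-versus-merely-growing dynamic:

```python
# Toy model: replicators that copy themselves daily vs. demand that grows
# a few percent a day. All figures are invented for illustration only.

replicators = 2                 # machines assigned to building more machines
output_per_machine = 10_000     # cubic meters of material per machine per day
demand = 1_000_000              # day-zero consumption, cubic meters per day
demand_growth = 0.05            # demand grows 5% per day (deliberately generous)

for day in range(1, 31):
    replicators *= 2                          # each machine copies itself
    supply = replicators * output_per_machine
    demand *= (1 + demand_growth)
    if supply >= demand:
        print(f"Supply overtakes demand on day {day}")  # day 7 with these numbers
        break
```

Even with demand itself growing, anything that doubles daily overtakes it almost immediately; the practical constraints would be energy, raw materials, and pollution, as the reply below notes.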
> how close to universal replicators do you need to come
We'll probably reach technological self-replication singularity before AGI. I envision a small self replicating/repairing/transforming factory that could function based on local resources. Mostly 3d-printers, robots and tools for making tools.
But I think in reality there will be limited resources, energy and pollution we can all use, so we can't have our exponential utopia. Technology will be more like biology, and it will get good at recycling anything.
There is a manga series which is set in this concept of exponential self-replication technology gone wrong somewhere in the past (thus a futuristic dystopia):
>> The "Netsphere", a sort of computerized control network for The City. The City is an immense volume of artificial structure, separated into massive "floors" by nearly-impenetrable barriers known as "Megastructure". The City is inhabited by scattered human and transhuman tribes as well as hostile cyborgs known as Silicon Creatures. The Net Terminal Genes appear to be the key to halting the unhindered, chaotic expansion of the Megastructure, as well as a way of stopping the murderous robot horde known as the Safeguard from destroying all of humanity.
I kind of am? Besides, people don't need to never be stupid again, they just need to become stupider at a slower rate than productivity increases. If hour by hour production significantly increased, how could people possibly waste enough?
More importantly, would they? We waste resources to signal wealth. Wasting an ever greater proportion of your allocated geyser of materials doesn't signal anything.
There ought to be a point at which continuing increases in income fail to generate increases in either life expectancy or the proportion of adults who are able to work (not necessarily working). It would be necessary to distinguish this from the effects of anti-aging technology, but it should be possible in principle; improving diet/sleep/exercise and reducing pollution exposure isn't "anti-aging technology", nor are childcare/education.
I've also spent mental energy on this, about 4-5 years, and recently I've been reaching a conclusion (in a great walkabout about AI, ethics, and the meaning of everything).
My conclusion: individual satisfaction is bounded, as long as we have bounded brains. First I should mention that the best principle I've found to underlie life is that we should maximize or optimize some kind of experience of consciousness, for every conscious entity. It's hard to define precisely what that entails, but we have quite good intuition: your life should be rich in activity, in interaction with others, in learning, in thought, in seeing, hearing, thinking; of course, not so rich as to be overwhelming and collapse the whole thing or leave us unable to digest or grasp or understand (at least a part of) what we're experiencing. I don't claim to be completely original: Wilhelm von Humboldt is for me one of the great thinkers of conscious motivation (he lived in the late 18th and early 19th centuries).
"I am more and more convinced that our happiness or unhappiness depends far more on the way we meet the events of life, than on the nature of those events themselves." -- WvH
Being clear: what matters is not the experiences themselves, i.e. the input/output, but what the various consciousnesses apprehend. What goes on in your brain. It doesn't matter that you're at the most beautiful beach in the most beautiful sunset, behaving joyfully and peacefully, if internally you're depressed or in despair.
"The true end of Man, or that which is prescribed by the eternal and immutable dictates of reason, and not suggested by vague and transient desires, is the highest and most harmonious development of his powers to a complete and consistent whole." -- WvH
You can only make an individual so complete, so harmonious with itself. Our brains have about 100 billion neurons, i.e. a finite number, and there's only so much you can activate those connections. Really the goal is not with any single individual - our goal should be with every conscious being. That's why we should not plan individually; we should plan as a society. A billionaire can only get so happy - he can keep linearly stacking jet skis and race cars and yachts, but his happiness won't follow (linearly). We should realize we are all part of a society, as a whole, and ideally be completely indifferent among individuals (i.e. everyone deserves as much happiness as we can collectively get them).
In other words, we should take the Golden Rule literally. (of course, in practice, not everyone can be responsible for every other individual, but it should be our ultimate guiding principle, really, as individuals and society, unmistakably): every conscious being has the same value to yourself as yourself.
Because individual satisfaction is bounded, this allows maximizing the practically unbounded (because of almost unbounded entity numbers) satisfaction of society as a whole, currently about 8 billion individuals. We need to move past egoism. I don't think an egoistical civilization, as was Western Society for much of the 20th century, can reliably go much further than we've come (see: climate change, rising political instability, fluctuating inequality, stagnating quality of life).
I'm not arguing for any political system, I'm arguing for a cultural-social-technological outlook of the entire society. I'd label it 'Universalism' (but that's taken), so perhaps 'Conscious Universalism', or 'Concious value universalism'.
That's how we move our entire civilization forward, achieve better political stability, how we're able to tackle mega projects like engineering the climate and rethinking our global supply chain, how we can allocate massive resources to space exploration, space colonization, prevention of extinction events (like asteroid impacts, etc.), and how we move definitively past the threat of nuclear annihilation (a nuclear conflict, still not completely out of imagination, could perhaps still collapse society).
How to do it? I think part of it is simply enlightenment, discussion, writing and reading; the other is indeed recreating our institutions, including our economic and political systems (focused around this goal).
This century is when we decide whether we become the Borg or the Federation. Cylon or Human. Dalek or Doctor. (or just slowly collapse... hopefully not; we have potential)
This way of thinking, as you draw the lines into the future, corresponds to rational analysis. It could be called quality of goal-sets, as opposed to "crowing on a pile of dung", which is now the ongoing mindset of our elites. Fascinating, and not depressing like the mere observance of reality; your way would be far beyond what now passes for science, politics, societal engineering, and technology layered without a grand design. It is not going to happen - it goes against what the history of mankind points to. You, Sir, must be one of the few; your status thus would reside in something other than wealth and ego. You possess the suicidal gene!
To be clear, by conscious experience I don't mean just pleasure (or even just "happiness", just joy). Experience is much more complicated than pleasure alone, or any single feeling - although of course they are generally good proxies in most situations (if you're happy it's usually, though not as a law, because you're having good experiences).
I'll leave it as a reflection to the reader exactly what it entails - with Humboldt's observation in mind (of "harmonious development [...] to a complete and consistent whole").
Another important observation is that we should have the freedom to choose, in a way, what gives us pleasure, what's engrossing to us - guided by reason. I call this concept 'freedom of utility'. Sure, we (generally) physically enjoy food, sex, and various other things (sometimes drugs that destroy our bodies and our minds); but what should we like? Imagine if our intuitive tastes, which developed millions of years ago in totally different environments, could be realigned under the new light of reasoning and a new understanding of the universe.
Overpopulation is a problem. People will claim that the planet can support 20 billion+ people, but they conveniently forget that these people will have an incredibly low standard of living.
Even if we were to assume that an arbitrarily low standard of living is acceptable, at some point that standard of living will include mass starvation and death so there is a real capacity limit. Being well below that limit is a virtue.
That I'm definitely not convinced of. With current levels of technology, poverty in a global setting has more to do with infrastructure and unrest than it does with actual shortages. I suspect that we could pack in even more people with the industrialization of the third world, vastly increasing productivity.
More importantly, there's absolutely no reason to think that technology will somehow stop pushing the carrying capacity further. More people will yield more innovation, yielding more growth, yielding more people. If there's an endpoint to that, I doubt we'll see it any time soon. Yes, I too see how insane it is to think that sometime relatively soon global GDP will double twice in a year. But doubling twice in a decade would have seemed just as crazy 500 years ago.
There's certainly people who eschew technology and live in a historical fashion. They might even be happier for it.
If there is a natural limit (as opposed to physically running out of resources) where humans feel satiated, I doubt we're anywhere near it. If we do hit it, wait a bunch of generations and there'll be more humans.
With exponential growth, we might be closer than you'd think. Clearly people always want more, but the rate at which we want more seems like it has to have a limit. At the very least, it can grow faster than the population can (almost automatically, since more people increase growth as well).
People might still be more unhappy even though society at large delivers them things that could be unimaginable today. The creators and owners who can deliver that future will be richer than everyone else (rightfully so, imo) and that divide is what I think could make people more unhappy although they will be much better off than what we are today.
That's why the article goes on to explain how to fix that. I believe they are two independent points, the taxation proposition is independent from the AI revolution and could be applied today. What the article argues is that the AI revolution would make wealth accumulation so massive that we will need laws and taxes, and new ways of looking at the world.
I doubt that. People who are ultra rich use money as a proxy for power, influence and status. In a post scarcity world money likely won't be a great way to attain status so the hope is that status will be obtained through other means like creative expression or charisma.
What AI will change, though, is the ability of large categories of labour to earn what they need on an open marketplace. I think this initiative is trying to anticipate that.
Yep... expectations scale with wealth. If we set a standard of living around that of ~100 years ago, there would be "enough" for all. The more likely outcome is that wealth disparity remains about the same (or gets worse) but everyone is a bit better off.
> We had enough wealth for everyone to have what we need a long time ago.
That's obviously not true today: There isn't enough coronavirus vaccine to go around.
There almost certainly will be enough eventually, but human beings live in the now. There will be another pandemic someday. Or some other natural disaster that creates localized or temporal scarcity. We can't just spin up a new lifesaving drug or a million new homes overnight. Maybe someday we will?
> There isn't enough coronavirus vaccine to go around.
That is mostly a question of regulations. The part that takes so much time is getting the vaccines approved; many researchers don't even try because they know they wouldn't have enough money to get their vaccine approved. Also, most governments negotiate hard to reduce the prices, despite the fact that economic damage from lockdowns is much greater.
"This revolution will erode enough biome for every human to be dependent on the industrial complex to survive, while animals are basically left to starve."
His workers are poorer because his company's enormous valuation comes from the surplus value they produce but do not receive as compensation. You might be poorer if you own a small business those workers would frequent if they had more money.
Edit: You might also be poorer if you tried to compete with Amazon and were crushed like a bug by their anti-competitive practices.
Wrong mode of thinking. The problem is that there aren't enough alternatives. If you want an economy that is fair for workers then you need more jobs per worker so the worker can choose the best offer. That also means you want more employers, including the Jeff Bezos types.
Labor theory of value lmao. Amazon workers are richer because they've been given an opportunity they otherwise wouldn't have. If they could have a better job they'd take it.
How about his workers are richer because they receive a portion of the value they produce, because by combining their labor with Amazon’s capital they can produce vastly more value than without it; and if people didn’t get their share of value from producing and renting out capital they wouldn’t do it and there wouldn’t be any capital to use.
Sorta. Wealth like that is control and power, and while governments theoretically have absolute power over business, actually using those powers can break things. If the government owned the same shares, you would have slightly more direct control over what Amazon does in practice. Probably. Perhaps.
I respect Sam and believe that his thinking is usually a few years ahead of mine - with passage of time, I generally tend to agree with his statements more and more. He seems really bullish on AI, and while he might be a bit biased based on his current role, I think that his public statements are truly authentic because he's putting his money where his mouth is (he left a coveted role at YC for his current role at OpenAI).
That just makes me worry that we're likely to enter an AI cycle in entrepreneurship that will be characterized by high barriers to entry for new entrepreneurs. I have so many ideas for future companies, and none of them will be feasible if in each vertical I have to compete against companies with mature AI capabilities. I suppose there is a possibility of someone offering AI as a service that will be good enough to go up against the FAANG companies, but I imagine that anyone with such capabilities might be more likely to just own the whole opportunity (eg: Red Antler switching from helping startups with brand building to Red Antler building their own brands). I think there are still opportunities out there for a VC-backed company that starts out in a garage, but that window seems to be closing - for both, entrepreneurs and VCs as well (FAANG companies are not going to take any venture capital).
I really hope I am wrong and would love to hear counterpoints (apart from the usual "they have been saying this for the last 50 years and it always turned out different"). It's important to realize that 50 years ago the world was very different for entrepreneurs, and I have no reason to believe that the current cycle is just that - a cycle that will pass and that the world will return to its previous state of making it exceptionally difficult to go up against big companies. The best counterargument I can offer is PG's essay on the history of corporations and how startups will become a permanent part of our future (don't have the time to look up the link, but hope that someone else can find it).
I envision the complete opposite. In 10 years I think there will be many companies who offer high quality AI services.
I'm not worried that these companies will just have their cake and eat it too because there's bound to be someone who would unbundle the AI from the product.
I even think it's more valuable to do that than to build your own AI-powered product, because then you're a service provider, which generally seems like a cushy place to be.
"This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly."
I worry that this is already the case, and we are already failing miserably. Globally we seem to have enough resources to feed, clothe, and shelter the entire population, and yet in a number of cases (see the USA) we fail to do so.
Are we failing "miserably"? I mean, global poverty is down, down, down.[0] Famine mortality is down, down, down (in spite of population going up, up, up). [1] Not everyone gets an Escalade and a 5k square-foot home, but arguably they shouldn't be using those anyway. But it seems like in terms of what people "need" (food, shelter, clothing), globally humans are enjoying unprecedented prosperity, despite the enormous gaps that can and will exist - the mean seems higher. I'd call that improvement, not failure in the immediate sense, though of course this is all coming at a price to the environment whose balance due is only starting to be realized.
We're improving rapidly, but I think we need to set our expectations higher. According to givewell.org, it only costs between $3000 and $5000 to save a life. There's a lot of people who could give that amount and don't, so there's a lot of lives that could be saved that aren't. And that's a pretty miserable failure to me.
We can always do better, that's for sure. Charitable giving is massively high in the US as a percentage of GDP though[1]. Individual giving is the highest source of that money[2]. That's a testament to something good, I think. That more people could give more and don't is a failure at an individual level, but systemically the globe is reducing poverty on its current track.
That's fascinating... so human lives are worth more, or there's more friction to intervention these days? Hoping it's the former. But curious what you think the explanation is for this. Is it just a reflection of rising standards of living, so the cost to save a life rises with it?
Food insecurity is still a thing, but the only mass starvation is driven by conflict in hard-to-reach places like Yemen, where you can't just easily ship food and save a million lives.
Now, the most effective aid interventions are campaigns like de-worming and malaria prevention; but those are more of a QALY calculation, where you de-worm 100 kids to prevent serious disease in some subset of them. Which overall drives the cost up, but is actually a good trend.
I think it's more that the lowest hanging fruit have already been picked. In other words, all the lives that could be saved for $200 have already been saved. If I'm right about that it would seem to be an unambiguously good thing.
> “We could do something called the American Equity Fund. The American Equity Fund would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund..”
I could be in favor of something like this.
However, I'd be curious to hear Sam's thoughts on what kind of vehicle we would use to ensure that this equity actually reaches end users.
I can make a very strong historical case that the government is not the right vehicle for this to work. You could also just look at the most recent $1.9T stimulus bill — where only a fraction of it went out as checks to Americans in need.
I feel like unless the words "as checks" are doing a lot of work, the implication of your last sentence is not true. A sibling comment posted a link to a wikipedia article which says:
> The bill's economic-relief provisions are overwhelmingly geared toward low-income and middle-class Americans, who will benefit from (among other provisions) the direct payments, the bill's expansion of low-income tax credits, child-care subsidies, expanded health-insurance access, extension of expanded unemployment benefits, food stamps, and rental assistance programs.
Here's [0] a more direct breakdown of where the $1.9T went. A large chunk of the money was spent on the $1400 checks, extending unemployment insurance (which come as checks/direct deposit), and the child tax credit (which really will just be realized as another check). The majority of the rest goes to state governments (who will probably redistribute some of it), K-12 schools, and "energy and commerce" which supposedly includes contact tracing efforts and vaccines. Doesn't seem like a big misallocation of resources to me.
> AI will lower the cost of goods and services, because labor is the driving cost at many levels of the supply chain. If robots can build a house on land you already own from natural resources mined and refined onsite, using solar power, the cost of building that house is close to the cost to rent the robots. And if those robots are made by other robots, the cost to rent them will be much less than it was when humans made them.
The issue with rising costs of housing is not (completely) linked to labor costs; it's land value, regulatory capture, bad infrastructure, and the heavily marketed house-in-the-suburbs-as-the-only-way-to-live.
Construction is a big part in some locations but it doesn't make housing unaffordable, merely expensive.
From memory:
Tearing down a $2 million single-family house and putting 6 apartments there would allow you to charge $3k rent. Build taller and rent drops even lower. This is assuming construction costs of $500 per sqft.
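A rough sketch of that back-of-envelope math (the 1,000 sqft unit size and the ~4.5% gross yield below are my own illustrative assumptions; only the $2 million lot, the 6 units, and the $500/sqft construction cost come from the comment):

    # Back-of-envelope: cost per apartment and the rent needed to break even.
    # Unit size and required yield are assumed (hypothetical), not from the comment.
    land_cost = 2_000_000          # single-family lot being torn down
    units = 6                      # apartments built in its place
    unit_sqft = 1_000              # assumed size of each apartment
    construction_per_sqft = 500    # construction cost cited in the comment
    required_yield = 0.045         # assumed gross yield the builder needs

    cost_per_unit = land_cost / units + unit_sqft * construction_per_sqft
    monthly_rent = cost_per_unit * required_yield / 12
    print(f"all-in cost per unit: ${cost_per_unit:,.0f}")      # ~$833,000
    print(f"break-even monthly rent: ${monthly_rent:,.0f}")    # ~$3,100

Building taller spreads the fixed land cost across more units, which is why the break-even rent keeps dropping as you add floors.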
Rising costs of housing are, almost entirely, due to the fact that individuals are allowed to own more houses than they can use, while most people don't own any. It seems fairly obvious that if you control the housing market, a human need, you can set prices as high as people will possibly pay.
I am more and more amused by Sam's ambition of laying out so-called plans for humanity. He is one year younger than me, co-founded Loopt, a failed startup based on sharing location information, joined YC largely because he and Paul Graham somewhat liked each other without much deeper connection, eventually became chairman of YC, and has now co-founded OpenAI as the one working the business side of the org.
I don't think Sam is using his time and energy wisely. But I could be wrong. Who knows.
Edit: to answer the question in the reply.
I don't know what he should be doing, exactly.
But Sam would probably add more value by focusing on his work at OpenAI and figuring out a way to make their work more "open" and "accessible" to puny humans who do not possess a billion-dollar computing cluster.
I think the foundation of the argument about the application and utility of AI needs to go deeper. A good starting point might be to address the arguments that Arvind Narayanan brings up in "How to recognize AI snake oil"[1], i.e.:
- "Much of what’s being sold as 'AI' today is snake oil — it does not and cannot work."
- "AI excels at some tasks, but can’t predict social outcomes."
The way that AI "joins the workplace" matters a lot when discussing the corresponding policy. AI progress in certain domains has been amazing, while other domains may require a philosophically different approach to leverage computation and intelligence for true productivity gains.
As with so much Silicon Valley stuff, I think the latest evangelizing for AI is probably just laundering military and counterinsurgency tech as some kind of utopian consumer godsend. Obviously AGI will remain a pipe dream, but what we will get is autonomous robotic soldiers with no conscience (that can be counted on to put down unrest without questioning their orders), or listening software that can monitor everyone's communications to identify targets in real time.
There has never in history been a quest for the benefit of what is now a surplus population whose only assets are to pollute, contaminate, and be parasitic and cannibalistic. Whatever stands in for AI (no real definition in the lead text, just suggestive blabla) will be to the benefit and power of the established few. Our "elites" are, and were, cockroaches. Between them and this surplus population there is a margin of societal hangers-on with some wacko agenda not surpassing their primary drives. The lower on the food chain, the cruder the desires for basic comfort.
I think the end goal of human society is full automation and immortality. In the process of coming to that there will be all kind of problems, challenges, dramas and chaos. But once we are there it's kinda all gone and done.
The ends justify the means? Forgive me if I don't agree that some small sliver of sociopathic elites getting to live forever as nanite clouds or whatever excuses genocide euphemistically referred to as "dramas and chaos."
For example wars, colonization, and inquisition were dramas and chaos but after them and sometimes with the help of them came progress and innovation.
The medieval Church had a monopoly on education and knowledge but still believed the earth was flat and brutally persecuted anyone who opposed it.
The French Revolution was bloody and messy, but it brought democracy and decentralized education, which afterwards led to tremendous progress and innovation.
My main thought after seeing Elysium was "if the robots are so advanced, why not use them to help people?" Healthcare is just as big as the military, so why would the robot makers turn down that opportunity?
I know you are being sarcastic, but I don't know if everyone understands Sam Altman's motivation for running OpenAI the way it's being run. Sam is trying to create AI which causes the greatest benefit for humanity. That's OpenAI's mission. Open sourcing everything now would not achieve the mission.
In case the "why" is not obvious: AI progress is limited by a) great research talent; b) money -- specifically being able to invest in compute. If OpenAI were to open source everything, they would not be able to raise the money they need to invest in compute, which would cause a death spiral in their ability to attract and retain their researchers. They need to have a story for why they will make money in the short term to continue being a top tier AI research org. And since AI is "winner take all", it is likely worse for the world if a less altruistic company takes all the talent and source code.
If your point is just that OpenAI is a misnomer now, I agree :). It's not open. But I do think they have settled on a surprisingly good point in solution space (the capped-profit company, the charter, etc); I don't see ways to validly criticize the company from an altruism perspective.
AI progress is not limited at all; it's the fastest-moving research field in the world (roughly a 5x improvement per year in training efficiency, i.e. reaching the same precision on a task for a fraction of the cost, if I recall correctly - far better than Moore's law).
OpenAI is opening the world to AI and helping people, just like Google is doing "no evil" and Facebook is connecting people. At the point when an organization gets big enough to stop keeping its original values (being open, for OpenAI), it's no better (no more altruistic) at "making the world a better place" than any other organization. Competition and having the power of AI distributed across more companies is good, though (until they acquire each other).
Here is my perspective from working in the industrial automation field (think of these 4 points as a matrix):
1) Things that are easy to automate are already automated.
2) Many things that are difficult to automate but increase productivity a lot are already automated.
3) Many things that are easy to automate and increase productivity only a little bit are already automated.
4) Things that are difficult to automate and increase productivity only a little bit are NOT automated.
We constantly make advances in automating things, but the low-hanging fruit and the high-productivity-gain items are already gone. It's not clear to me that our advances are keeping pace with the things that are left to automate. I feel the point about exponential growth is completely non-obvious. What if the tasks we are automating also get exponentially more difficult, and/or the productivity gains decrease exponentially?
OP’s thesis is that AI sets us on a new exponential curve that dramatically increases the ease of automation and the range of things worth automating. At the same time, your point about other exponential effects that offset the gains is thought-provoking. As other comments point out, the Moore’s Law in computing hardware has not translated to a Moore’s Law for software usability or productivity. It’s not guaranteed that a Moore’s Law in AI will translate into a Moore’s Law for everything.
Sold on the premise that when AI gets here it will change everything. What I don't have a good grasp on is how fast it will come. Recent feats of AI are very impressive, but it's hard for me to put them on a trendline that would line up with massive changes coming within 10 years. Predictions around AI have made similar claims for the last 50 years. Why is it different this time?
I'd recommend the book Life 3.0. The author surveys a large number of AI researchers to answer this timing question (I think 95% said AGI is guaranteed in the next 50 years, iirc), and also discusses why this time is different from past episodes, like in the '60s when a group of researchers thought they would make significant progress towards AGI over the course of a summer.
It's a bummer that while he starts out talking about a global revolution that will profoundly affect all human beings, he smoothly transitions into tax policy opinions and suggestions that, in a best case scenario, will affect about 4% of human beings (Americans).
Most of the world is not American, and for every American, there are roughly 23 people who are not.
> If everyone owns a slice of American value creation, everyone will want America to do better:
Americans say "everyone" when they mean "Americans". "Everyone" is actually roughly 24x larger. (In a section on inclusivity, no less!)
"Even more power will shift from labor to capital."
You can say the same thing for machines replacing workers at farms, but hardly anyone would rather ban tractors for taking people's jobs
You can say the exact same thing for bank tellers replaced by ATMs, but no one wants to wait in long lines to withdraw money and pay expensive service fees
The list goes on and on
Google Maps (how often do people need physical maps anymore?)
Gmail (goodbye to a lot of physical mail)
Excel (one accountant can do the work of dozens of accountants of the past)
Forklifts take away many body-breaking jobs
Jobs do disappear, but very few people would rather go back
True, but you have outfits like LabGenius using exponential tech and ideas to get around that, which we'll start to see the fruits of in the 2030s: https://labgeni.us/
Sam, you have a lot of great ideas and a lot of assumptions in this essay. Just like the parameters in deep learning models, we can't know what will work in different scenarios until we can model it reliably. Time to build a worldwide game, a version of the Sims if you will, to test different assumptions and global activity based on them. Happy to help on this too.
Some big assumptions:
- Childcare can or cannot be handled by robots (very likely not if you need to raise healthy humans).
- Healthcare can or cannot be handled by robots.
- Humans will or will not tolerate a lack of employment without mental damage (and how to retrain or provide for those needs).
In the game each player should be assigned a random individual with a different role in society - so they can see all angles (from different speeds of income accrual, to health, to time demands of their job, to responsibilities that need to be handled, etc.). You will see all kinds of bugs that way - from individual liquidity crunches, to mental breakdowns, to industries that need more innovation/AI.
You can run any assumption in different epochs and get the answer by humans who play along worldwide.
That way when the rules are about to change due to an innovation, we can run a fun game simulation instead of running the risk of anarchy which kills a lot of people.
It is critically important to think about how we structure the future based on the technology we unlock. My pull request on this essay would be to propose we build a test suite for that first. Maybe we can use AI to simulate outcomes too after a good number of human runs.
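For what it's worth, here is a minimal sketch of what one "epoch" of such a simulation could look like; the roles, incomes, automation risks, and cost of living below are entirely made up for illustration:

    # Toy agent-based epoch of the proposed simulation game.
    # Every role, income, automation risk, and cost figure here is hypothetical.
    import random

    ROLES = {                     # role: (monthly income, chance of no work in a given month)
        "nurse":      (4000, 0.05),
        "driver":     (3000, 0.40),
        "programmer": (7000, 0.15),
        "caretaker":  (2500, 0.02),
    }

    def run_epoch(population, months=12, ubi=0, cost_of_living=2500):
        """Advance every simulated person one epoch; return how many end up broke."""
        broke = 0
        for person in population:
            income, risk = ROLES[person["role"]]
            for _ in range(months):
                employed = random.random() > risk   # crude automation/unemployment shock
                person["savings"] += (income if employed else 0) + ubi - cost_of_living
            if person["savings"] < 0:
                broke += 1
        return broke

    population = [{"role": random.choice(list(ROLES)), "savings": 5000}
                  for _ in range(10_000)]
    print("people broke after one epoch, no UBI:", run_epoch(population, ubi=0))

Rule changes (a new tax, a UBI level, a fresh automation wave) then become different parameters run against the same population, which is roughly the "test suite" idea above.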
I like this idea, but is the technical implementation of a progressive social policy (whether it be a tax on equity versus something else) actually the hard part?
Alternative take: the hard part is that the US is heterogeneous and people just don't trust each other to not abuse benefits. (You could also say it's racist). How could we give equity to every person when we can't even seem to agree that they deserve basic healthcare?
This is definitely good thinking, though it seems unrealistic.
The stuff about AI is just science fiction. We'll see major advances in the next 15 years, but it's unlikely that there will be a compounding intelligence situation where robots are able to program and build other robots without human intervention. There's no scientific evidence that this will happen, but it's certainly possible on a long enough time frame.
A 2.5% property tax is a political impossibility. That's like $500/mo for the average homeowner. And homeowners vote. This would only happen in the most dire of economic circumstances. The remote possibility of super ai in the coming decades is not one of those circumstances.
I'm also not so sure about the idea of taxing corporations based on their market cap. Market cap isn't "real" taxable money like that. It looks like there's been research published on the idea [0], but it's kind of just one person.
It's not a property tax (which taxes the building and the land), it's only a land value tax. So if you have a $300,000 house, it's not going to tax 2.5% of the $300,000 house, just the $50,000 worth of land it sits on.
This is an absolutely horrible idea. As Raoul Pal always says: "Capital will find a way. It always does." Let's take a look at some examples with real estate:
George owns a flat, and he lives in it. He owns no other properties and he works a low-wage job. What will happen? He'll pay the 2.5% for a while, then he'll be forced to sell.
Jane owns 5 houses and lives in one. She doesn't have a job; she lives off of the rent payments. What will happen? Jane will raise the prices by 2.5% every year.
Plot twist: George is now renting from Jane. Since George works a low-wage job he'll have to move out after a few years to an even smaller apartment, and at some point he'll be practically homeless.
The same is true for companies. They'll just raise the prices by 2.5% creating a price inflation that will hit __everybody__ who doesn't have a company or real estate.
Real life example: where I live there is a government program that lets certain kinds of people (newlywed couples) get their VAT back when buying a house. What happened to the market? Real estate prices went up by 5%, and everybody who is not in this bracket now suffers.
This is predicated on taxing capital and giving to labor, once labor is no longer necessary for capital.
This isn't going to happen, absent an _enormous_ crisis to precipitate this kind of change. No way are the establishment going to allow this kind of thing to happen ahead of time. The kind of political engineering required to do any of this is far beyond anything that the US government is currently capable of.
> there will be dramatically more wealth to go around
There is already a dramatic amount of wealth to go around and yet it is held by fewer and fewer individuals and organizations every year.
Just like there is enough food in the world to feed everybody.
Our world is not optimized for distributing resources evenly. Can AI change this?
A little off-topic but I’ve been thinking about technological progress and its impact on inflation recently. In 2018 Jay Powell partly blamed Amazon (and others) for pushing prices down so much that the fed wasn’t able to hit its inflation targets [0]. Isn’t Sam’s vision also inherently deflationary? If consumer prices keep dropping due to technological progress, shouldn’t we keep printing money?
I’m still a little sceptical of Sam’s vision coming to pass, but if it does, it’ll have some weird consequences on monetary policy.
I was thinking the same thing: isn't this kind of deflation good for society? Reducing production costs, increasing efficiency, and having decent market competition all bring prices down, and technological progress enables all of that.
Prices won't be so low anytime soon that all people can afford everything they want. Luxury goods and premium pricing are here to stay.
Maybe he was referring to Trekonomics: The Economics of Star Trek [1]. Post-scarcity moneyless society.
Here is an excerpt from Wikipedia: "The third chapter talks about the replicator, the machine that makes Star Trek's post-scarcity possible. Post-scarcity here means effectively infinite social wealth. The replicator is a metaphor for automation, and an endpoint of the industrial revolution. Crucially, in Star Trek's society it and its products are a public good."
I'm all for driving down the costs of things low on Maslow's hierarchy, but it's arguably more reasonable in the short term to put effective public policy in place than hope the singularity gets here (an exaggeration, but not exceptionally so, considering "Moore's Law for Everything").
Tech people keep trying to fix people problems with tech. ~47k people die each year in the US from a lack of healthcare. Other countries don't need Moore's Law to fix this, for example [1] [2]. Conversely, it's fine that Elon runs around as Technoking as long as the batteries are pumped out of Gigafactories at full speed. Technology fixes for technology problems, people fixes for people problems. We don't need more wealth ("The future is already here — it's just not very evenly distributed" -- Gibson). America is one of the wealthiest countries in the world. We need quality of life floors and more equitable distributions of what passes for and enables wealth.
With all of that rant said, I really love Sam's idea about the American Equity Fund [3]. It's long overdue, and something that the Federal Reserve could administer today with FedAccounts as the target of distributions from taxes on productive concerns. Sam's a smart person, and I hope he can sell the idea with a pitch deck to those who need to be sold on it. The issue of equity (social and economic) has reached a crescendo, and it would do a disservice to country and citizens alike to let the opportunity go to waste.
Could it be that tech people try to fix problems with tech because tech people are familiar with tech?
Said another way, where are the non-tech public policy people solving these problems?
If they don't step up, maybe it's time the tech people do?
Absolutely. This is not condemning technologists, but encouraging a reassessment of effective strategies for implementation to lead to the desired outcome.
Policy is written by the elected. Speak to or assume those roles. Provide covering fire for effective contributors who can execute on your mission and vision, just like a startup.
I think that this essay assumes the possibility of infinite growth. That runs into the reality that we are actively running into the limits of our interactions with the natural world. What else is climate change but an indication that we've reached the limit of what we can produce using our current technology? Now, it is possible that we can find a way to reduce our impact on the earth while continuing to grow, but I'm not sure that we can do that AND generate the astronomical growth this essay requires.
This is basically the generic singularity blog post that comes out every few months or so. At this point it could probably be written by GPT-3; it would be a nice experiment to see if anyone could tell the difference.
Either way, two major things are wrong with this. First off, there isn't actually a whole lot of evidence that we're living in the most innovative time in history and that the robots are coming for us. Productivity growth is low, employment is high. If technology were eliminating labour, the opposite would be happening. We'd be growing at 7% per year while bazillions of unemployed people roamed the streets.
Secondly, and this is very typical of SV liberalism/centrism, there is absolutely no understanding of power in this article; we just solve things by doing "the right policies", which is "simple", and then everything is fine. Of course, if it were that simple we'd already be doing it to begin with.
We don't need some futuristic 2200 utopia to solve child poverty. We actually could do all the things mentioned in the article literally right now. You could have been taxing the shit out of land in 1800. The question Sam Altman needs to answer is why the technolords of the future don't just simply hire some Terminator Pinkertons to mow down everyone who wants to get their hands on some of their riches.
Society is currently not ideally structured to realise much of human potential (setting aside how some individuals with immense initiative will nevertheless succeed). Piketty has described how wealth concentration arises and is perpetuated and has drawn a link to reduced overall productivity. He has also proposed a wealth tax to circulate cash and resources to combat stagnation and inequality - your tax proposals on capital seem similar. A change in the distribution of income across society might lead to an uplift in the origination and development of new concepts and ideas, but to tackle some ideas will still demand hubs of resource concentration in some fields, e.g. CERN, pharma labs, space exploration, quantum computing, etc. Perhaps enabling many more nuclei of independent development will result in a snowballing effect which will ultimately create more of these large scale resource hubs, but even so I think that there will have to be an adjustment in the collaborative protocols we use to really significantly increase the growth, development and employment of talent and potential that is latent and currently wasted. Still, a great and relevant article.
One of my takeaways is that growth cannot exist forever; there is a thermal bound to how much energy (economy = energy consumption, if you reduce it enough) we can produce and consume. Another commenter posted that if you zoom out enough, economic growth is exponential. I tend to agree, at least backwards-looking, so I think of intervals of economic progress as "doubling" (ie, logarithmic instead of linear).
We only have a few more doublings before we hit some serious thermal discomfort. The "AI Revolution" as dreamed in the OP I think is largely impossible: if the AI/Robots/Whatever get sufficiently advanced they will require orders of magnitude more energy than we already consume, which would run the risk of cooking us all.
I would rather see someone or someones trying to break the economy = energy paradigm. At some point, we will be unable to generate more useful energy; I'd like to see us do more with less.
I read this article a couple months ago and found it incredibly bizarre (to the point of wondering if the economist was actually real or just an invention of the author). Firstly, it spends a lot of time analyzing the consequences of exponential growth in energy usage over the next few centuries. But Google [0] will tell you that energy usage has barely moved between the beginning of their data of 1960 and the end of 2015 (and has actually been on the decline since its peak in the 70s).
Thankfully they then move on to discussing what will happen if energy usage continues not growing exponentially, when the physicist says this:
> If the flow of energy is fixed, but we posit continued economic growth, then GDP continues to grow while energy remains at a fixed scale. This means that energy—a physically-constrained resource, mind—must become arbitrarily cheap as GDP continues to grow and leave energy in the dust.
Then, to clarify, he says:
> Energy today is roughly 10% of GDP. Let’s say we cap the physical amount available each year at some level, but allow GDP to keep growing. We need to ignore inflation as a nuisance in this case: if my 10 units of energy this year costs $10,000 out of my $100,000 income; then next year that same amount of energy costs $11,000 and I make $110,000—I want to ignore such an effect as “meaningless” inflation: the GDP “growth” in this sense is not real growth, but just a re-scaling of the value of money.
No! This is not what economic growth means! No economist in the world should agree with that. It's possible that electricity will be too cheap to meter, or it's possible it won't, but some commodity increasing in price at the same rate as GDP doesn't somehow neutralize the possibility of there having been economic growth. Here's an example. Let's say widgets currently cost $2 to make and there's demand for 1000 widgets/year. If I find some innovation to save $1 on the per-widget production cost, that's $1000 of economic growth. If that innovation uses slightly more electricity (and let's assume the production of electricity is fixed), we'll bid up the price of electricity some amount and displace some less-productive use. That already happens for other finite resources like land (which in some cases leads to exorbitantly high land prices in areas like silicon valley, but that wouldn't mean there's been no growth).
This is a story about a billionaire who imagines a rosy future full of unicorns and rainbows. What is the point of more wealth created by AI or magic genie from the lamp when it will end up in the pockets of the same people as always?
Even now, there is plenty of wealth, but it is concentrated in 0.01% of the population.
So there is a straightforward question for Sam: why don't you give up your wealth now, the same as all billionaires worldwide?
So spare me the long rich-guy "moral" story; the answer is simple: neither Sam nor any billionaire around the world will. As usual, when they do give, they do it through tax-loophole relief schemes; their money will never get to those who might actually contribute to this world if they could just free their time from the rat race.
So, when stronger AI comes online, there will be no fairness. All the immense riches will go to the same minority - and that same minority of wealthy people will not give it away, as they will, as always, think that they are entitled to it. Same as Elon Musk believes that Tesla currently has a "fair" valuation.
... Fairness is a tricky word; it depends on the head that is pronouncing it ...
Many basic resources are now almost free in the sense that the market price barely covers cost of production/collecting. Vegetables and certain farm products are examples of this. Slightly out-of-date electronics. Many more things are like this. It reduces living costs.
This has had no effect on politics with respect to wealth distribution. Rather, decreasing living costs have resulted in increasing inequality, because they reduce the perceived need for change.
AI in production brings a similar change to the economy as when Chinese labour started producing goods for the world. It will make shiny toys available to everyone, but it will have no effect on wealth or income inequality.
So, AI will result in even more "homeless" people with not only iPhones, but an abundance of all sorts of toys. Maybe even houses.
"We could do something called the American Equity Fund. The American Equity Fund would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund, and by taxing 2.5% of the value of all privately-held land, payable in dollars."
He is talking about taxing unrealized gains, which is quite unfair. If the share price of a company doubles, it doesn't mean the company has extra cash to buy back shares for redistribution. Same with land. I feel property taxes today, as a percentage of market value, are insidious precisely for that reason. Increased demand for housing in my area doesn't mean I have more cash to pay taxes. Not until I sell the house and realize that gain.
> Increased demand for housing in my area doesn't mean I have more cash to pay taxes. Not until I sell the house and realize that gain.
Right. As I understand it, that's part of the point (for economists, anyway): It incentivizes the allocation of scarce resources to those who value it more highly (and thus, presumably, to those who can make more productive use of it).
Now, I don't want to give the impression that I am on board with this argument, since it implicitly incentivizes the concentration of property ownership into the hands of those who will rent it out, rather than those who will use it. Landlords who are faced with a growing tax bill don't have to sell the property to realize their gains, they can just raise rents (not to mention using it as collateral for a loan to purchase more property).
In order to make the incentives align correctly for more participants, you would also have to establish a pretty robust system of rent control to keep spiraling property values from pricing existing homeowners out of the market, and incentivizing redevelopment: if you can't raise rents arbitrarily to cover your property taxes, you have to build more units to rent (but not so many that a glut of available units depresses rental income).
The article mentions the solution as regards companies: they pay in stock and can do so via a fresh issue. The situation regarding property is less easy to resolve. Most likely some sort of reverse mortgage product would work.
That doesn't seem true. Piketty's "Capital in the Twenty-First Century" has a compelling argument for exactly this kind of tax. And my realization that I can't see it ever being implemented is about as chilling to me as climate change: a real existential threat that we as humans see coming but will be unable to stop, because it requires cooperation that seems impossible.
The basic argument is, I'm hoping I'm getting this right:
The return on capital far outstrips the return on labour; that difference (if above a certain percentage) is a transfer of wealth from labourers to capitalists. Return on labour is basically GDP growth; return on capital is income from rent, investment, interest, and all of that.
This increase in inequality over time leads to the destabilization of society; I'd argue you can see that quite well in the USA. It is also starting to happen in the rest of Europe, more slowly (not quite as capitalistic as the USA, but heading there nevertheless). Wealth building for the bottom 50% is extremely difficult (house prices, for example).
2.5% is a significant percentage, but the rate of return on capital is around 5% or thereabouts, so it's basically a 50% tax if you earn the average return and keep investing (i.e. your capital is presumably still useful to society). Once you "sit on your money", the tax would reduce your wealth slowly (no longer useful to society). In other words, if you keep investing, the government will still end up with less wealth than the capitalists no matter how long this runs. It won't be as significant an increase in national budgets as one might assume.
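A tiny sketch of that arithmetic; the 5% return and 2.5% tax are the figures from the comment, while the starting wealth and 30-year horizon are arbitrary:

    # How a 2.5% annual wealth tax interacts with a ~5% return on capital.
    def project(wealth, annual_return, wealth_tax=0.025, years=30):
        """Compound wealth for `years`, applying the return and then the tax."""
        for _ in range(years):
            wealth *= 1 + annual_return   # return on capital
            wealth *= 1 - wealth_tax      # tax paid in shares/dollars
        return wealth

    start = 1_000_000
    print(f"invested at 5%, taxed:   {project(start, 0.05):>12,.0f}")     # ~2.0M, still grows
    print(f"sitting on cash, taxed:  {project(start, 0.00):>12,.0f}")     # ~0.47M, slowly eroded
    print(f"invested at 5%, untaxed: {project(start, 0.05, 0):>12,.0f}")  # ~4.3M

The tax takes half of each year's 5% return (hence "basically a 50% tax" on returns), while idle wealth shrinks by about 2.5% per year.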
Also, "government owning things", the idea is that that money is redistributed to the people via benefits (unemployment, education, health care, ubi etc).
I very much like the idea of Moore's Law for a lot of things (maybe not for everything; there could be some nasty surprises in games of catch-up between offense and defense).
Setting aside social implications, it might also not quite come to this, as not everything might be solvable by data and/or thinking. Our understanding of nature could be correct enough in some areas to block progress. For example, it might not be possible to predict which atom will decay next, or maybe there are no gravity shields possible in this universe as it is a fundamental property of mass (very very handwaving here).
We already have a federal target of 2% inflation and a 20% capital gains tax, which add up to about a 2.5% wealth tax.
Also, if the government starts owning shares in equities, they may as well pump and dump the share market directly like they do with bonds.
Also, if you're going to design a utopia per country, why not just make it a global utopia and model it for the 7 billion humans instead? Same for equality: model it for the global population instead of the 5% living in the USA.
Will the stock certificates that are inflationarily printed (or virtually printed, issued) to “pay” regular people in ownership have any value though? Are we being real here?
Technological innovation is a balance between diminishing returns (ever-smaller niches getting filled, ever-increasing complexity) and the occasional breakthrough which spawns further development. The visual metaphor of bacteria developing antibiotic resistance at ever-greater concentrations of the antibiotic springs to mind.
I don’t see the world this piece presents panning out. Either we make general intelligence (which then scales almost instantly to be way, way smarter than any human ever and then accidentally kills us [1]) or we don’t and humans still have a role to play in the workforce.
> Economic growth matters because most people want their lives to improve every year.
Improve how? Should I need economic growth to get better health care? This whole techno-utopian argument seems to hinge on extractive growth because it fails to actually tackle the problems of inequality by providing true redistribution of wealth in any meaningful sense. Trickle-down AI is a sham.
This essay cites no sources and provides no data to back up its big claims and projections. It's interesting to hear what someone close to emerging technologies is thinking, but big claims require big evidence. It reads more like Marx, or Kurzweil, or other utopian "futurists" than anything practical and realistic.
As an engineer you want to solve problems with systems. The world is too complex for that. It is doomed to fail, which creates an endless cycle of patching things up, which is basically what politics is.
Can it be improved? Sure, go into politics.
Can it be solved? Oh god, no!
We still have collective nightmares of the attempts.
We're arguably already in the AI revolution, and I haven't seen things get much cheaper. Most everyday prices go up with inflation, like they always did. UBI is a wonderful idea, but I'm not sure there is currently enough wealth for a scheme like that, even on a small scale.
> A stable economic system requires two components: growth and inclusivity.
Stability and continuous growth would seem to be incompatible, surely?
> Economic growth matters because most people want their lives to improve every year.
It may be true that most people want that but so what?
I'm a big proponent of everyone having a decent level of quality of life, that's not the problem. The problem is taking a child's view of reality and trying to make it stick when it comes to deciding policy. We can't "improve every year" forever. What does that mean? A larger house each year?
I can understand saying that as a political move to get people on your side, but it's irrational.
We must design forms of economic growth that respect basic physical reality.
> In a zero-sum world, one with no or very little growth,
But we don't live in a zero-sum world. We live in a bubble sandwiched between hard vacuum and magma that hosts a fantastic chemical tautology powered (mostly) by the energy gradient between the Sun and the rest of the dark sky. This process is not zero-sum.
In order to have a stable system with continuous growth there must also be death and decay.
- - - -
I think we could set up a "split-level" economy that is physically steady-state and virtually growth-and-death dynamic.
The physical economy would be much simpler and much much more efficient than it is today, and everyone would have a decent standard of living within their energy/pollution budget. Most of the wins and losses and creativity and innovation and all that good economic drama would happen in the online virtual economy where it can't cause environmental issues with our Spaceship Earth.
But I don't think you have to make national policy to do that. Buy some land in the desert, build "Village Homes"[1] with some integrated regenerative farms, rent/lease/sell it and repeat. Build hyper-efficient ecologically harmonious neighborhoods. We could do this today, it's self-funding and self-correcting. The major issues there would be putting together the expertise and getting past the red tape. All the technology you would need is off-the-shelf.
> In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work
Wait, medical advice is easier than assembly line work??
Stick to peeling the layer on top of what you see in your daily life, AI; for now the only output is some analysis and synthesis of data that has meaning, and it is in the hands of a few, for the few. SQL for human data mongers.
The data in the public domain are one of many botched, out-of-focus, wrong datasets: lacking context, mixing the wrong context, too limited in scope... and as your own admitted supposition shows, what you see is not what you suggest it means. Garbage in, garbage out; is a DDoS on AI the big problem to solve for now?
Add some inevitable layers - individual psychology, collective societal psychology, the surplus population and their going rate of psychological settings - and the list of "known" variables is endless; even more so, some very well known "unknowns" are hiding in there.
Some serious contenders to raw AI are bluntly omitted: the size of the global population versus the index of resources of the iron-ore ball that is the planet. Relying on "money", a sublimated layer, to account for anything but a tool for social engineering; the outright omission to define AI at all; its reliance on the most infinitesimal part of the few (humans); the outright wrong definition of wealth in its relativity and dynamics; the USA as a definite part of the planet; derivatives of all and everything - I really do not know where to stop the rant.
As a remark on your artisanal, ready-for-consumption product page: it is not very search-friendly, it has a very limited scope, it is suggestive of several proven fallacies, and has no clearly declared vocabulary.
Are you to blame? Of course not; as you suggest yourself, it is AI and not a "universal" human genie - that is, the disproportion of memory and processing capacity - that is at issue. As long as energy is effectively infinite at the level of AI, the processing-versus-energy economy of the human brain, as with the even more energy-efficient processing brain of, say, a raven, is largely overpowered in absolute terms (for now, not necessarily tomorrow), as is the inferior scalability of human minds compared to nano-technologies and the biology of genetics.
Crudely put, nano-technology and the biology of genetics (Corona, probably) are serious contenders ready to cooperate. Again, a case of the lack of scope and context in the tease of your blog page.
Publish or perish, well assumed: you, Sir, are desperately clinging to a flimsy single rope, trying not to drown. Journalists, media, and politics build a living on this; it is called a narrative. I am quite convinced that you could come up with such a thing every week or two.
We find new, more specialized jobs, but fewer than we automate away. We also create more "jobs" to replace the jobs that are gone - some completely redundant human activity that we pretend is a job.
Great, after the great VC pandemic expert reinvention of last year, they've now found a new victim: capitalism. Really wish these VC types would just stick to what they do best: pump money into overpriced startups.
TLDR - Implement a wealth tax and use it to pay a UBI.
Fine, but he should address why he thinks a wealth tax will work this time when it didn't work when it was tried in Europe (due to international competitiveness reasons).
Moreover, what's the rationale for excluding small business ownership from the tax? Why should a middle-class pensioner invested in public equities with $500k have to sacrifice 2.5 percent of their wealth each year, but a rich person that owns a small business worth $50m has no such burden?
IMO AGI is a pipe dream; it will require a breakthrough (Darwin/Einstein level) rather than incremental improvements. This means it could happen tomorrow or not happen for thousands of years.
What's more likely is (post)capitalism for all, but I think we need new money, value-distribution, and funding models for that, and projects like Ethereum can possibly deliver this. +1 to taxing land.
While I don’t think it’s a pipe dream, I agree we don’t know how far we are from achieving it, and that it may be tomorrow or never.
I think it's worth planning for as if it's imminent, because the cost of it happening without planning is unbounded, while the cost of planning now is perhaps a few thousand PhDs.
I really fail to see why parts of the tech community continue to be beguiled by utopian notions wrt AI and automation, unless what's really happening is that the already successful techno-capitalists are trying to lay a disingenuous groundwork for them to innovate ever closer towards AGI, unimpeded.
We know what will happen in a future world with more AI and automation that renders more people obsolete: the world at large will increasingly approximate what we already see in the poorest parts of it. Specifically, those with capital and who can access AI and automation will be in charge and have most of the wealth and the rest will look not very different from villagers in Africa or slum dwellers in Mumbai. People who think capitalism is the problem fail to see that there is no alternative to it that is both compatible with human nature and that would also yield a substantially better outcome.
The future is a world in which the masses are pacified by the provision of a minimally-viable lifestyle that keeps them just satisfied enough to nullify the threat of violence. And even that bar will be lowered when policing and military force become fully automated.
There is one curve following Moore's Law alarmingly accurately: the global temperature record. Many say it is a direct result of the other curves - industrialism, perhaps even capitalism in general. What about trying to fix this first?
It’s very good to see someone as influential as Sam talking about this. The idea that automation can be used (intentionally!) to lower the cost of living to zero has been a key component of my personal writing and work for some years now. [1]
There is a lot that I agree with here and some things I disagree with. For example capitalism can encourage value creation but it can also encourage rent seeking. Since I have been focused on the idea of lowering the cost of living I’ve come to see rent seeking as a direct antagonist to that goal. Taking something that could be distributed for free and adding a cost to it makes it much harder to bring the cost of life to near zero. The most common way this is done is with intellectual property, and I’ve found several helpful critiques of IP restrictions [2][3][4] that have led me to believe we’d be better off phasing them out and intentionally collaborating with one another.
But my biggest disagreement with Altman here is about the means of producing equity in society. I think the proposed tax and UBI would be good. But I don’t think it is the best solution.
The elephant in the room here is that Socialists have been working very hard for two centuries to try to understand how to organize an industrial society in such a way as to provide a decent and fair life to all.
We've completely vilified the notion of socialism in the USA to the point that we never even learn about it. Richard Wolff talks about getting an Ivy League economics education in the USA and never being required to read a single word of Marx.
Unlike many socialists I think I broadly agree with what most libertarians believe, and we simply use very different terminology and frameworks for understanding the solution. So I support for example left market anarchists aka left libertarian capitalists like the folks at the Center for a Stateless Society.
But I think the most significant point is one made by David Harvey, a teacher and scholar of Marxism. Harvey says that wealth redistribution is the lowest form of socialism.
The actual point of socialism is not to capture the wealth from capitalists after they have it in their accounts. Instead you change the structure of the organization so that at the point of wealth creation all of the workers and perhaps all of society has some stake in it. The simplest example comes from Richard Wolff, who draws on Marxism to essentially advocate for an increase in worker owned cooperatives. Such an organization could still have whatever leadership structure they wanted, but they would get a vote on who was in charge and they’d make sure pay was fair for all. Socialism is a complex and well studied subject so I cannot relate it all here.
But I will say, Altman’s proposed fund is one way of ensuring all workers and all of society have a stake. I strongly prefer using means of organization that automatically benefit everyone without the means of a state to intervene. For example the state protects intellectual property restrictions for certain qualifying works. If we were to stop providing those protections, then the moment a work was created everyone on Earth would receive more benefit than if it was restricted. We could provide every book ever written for free. Car companies could start sharing part designs and standardize on parts to reduce costs to all. When the patents on 3D printers expired the price dropped from $25k to $300 in ten years.
I am glad we can ask ourselves how to make this world more equitable for all. Please consider that this has been the goal of socialism for as long as it has existed, and if you reading this are from the USA like I am, you may deeply misunderstand what socialism really means. Regardless of your view I think we can all learn a lot from people like David Harvey. Check out his podcast for a view in to his thinking. [5] See also Economics For People by Ha-Joon Chang [6]
> Richard Wolff, who draws on Marxism to essentially advocate for an increase in worker owned cooperatives. Such an organization could still have whatever leadership structure they wanted, but they would get a vote on who was in charge and they’d make sure pay was fair for all
What's stopping such organizations from being formed?
Technically? Nothing. But a culture that forgot the value of labor organizing and embraced the fantasy of rugged individualism needs a fresh education on labor rights. That's what Wolff does with his organization Democracy at Work.
EDIT: There is more to it than this. There is a whole spectrum of laws common in other countries that could help. For example both unions and cooperatives could help. In Germany it is law that large corporations must have board seats for union representatives. A move like that would help grow labor power and ultimately benefit cooperatives. But there is a pretty significant effort ongoing to quash labor organizing efforts in the USA. Amazon, Walmart, and even Google fight against labor organizers and fire them the first chance they get. They have decided that their profit margins are more important than labor rights, and they’re investing a lot in anti organizing efforts.
I live in a city with high rents. What would it look like if everybody had a shot at fulfilling their economic dreams? Let's say their dream is to live in the center of the city, in one of those flats that now cost 2 million dollars.
So society is obliged to give everybody a shot at that 2 million dollar flat, no matter what their line of work or their qualifications are. How is that supposed to work?
Some things still are limited and will probably always be limited, unless everybody can live in virtual reality in their ideal world.
A land value tax doesn't really change the market dynamics of housing. That is to say, living in desirable locations will still cost more, and will still be allocated via a market to those who can pay the most.
The key thing is that the rent you pay for locations will go to the commons, rather than private landowners, which makes sense, since locations are valuable because of all of society.
So some people will still be unable to afford to live in desirable locations, just as they can't afford to live there right now. The key difference is that even if they can't afford to live there, they will still share in some of the wealth created there, through the land value tax. Effectively, those who can afford to live in the desirable locations will pay some of their rent to those who can't (indirectly).
"living in desirable locations will still cost more, and will still be allocated via a market to those who can pay the most."
But then the demand of "giving everybody a shot at what they want" is not fulfilled. The people with not a lot of money don't have a shot.
"since locations are valuable because of all of society."
I saw that argument in the article, but I don't really think it holds up. I would say the people making a place high value already benefit from that place, they don't need extra taxes to benefit (for example if a place is great because of nice neighbors, those neighbors already live there and therefore already benefit from the place. If shops make a place great, those shops already benefit from the people shopping there. And so on). And sometimes people make a place worse, should the tax then punish them somehow?
"Effectively, those who can afford to live in the desirable locations will pay some of their rent to those who can't (indirectly)."
But then it's not because those "others" make the place so great, so in the end, what is the rationale? Also, isn't there a ground tax already? There is in my country.
I can agree that since property is a product of guns and more specifically police, and the government provides the police, some money can be asked for the service.
Build more 2 million dollar flats. How high can we build these days? 2 million dollar flats are nice flats. Have robots and AI build as many nice flats as people want.
Let's build a system that's pretty much indistinguishable from socialism, but call it capitalism. That way, when it inevitably fails, we'll blame capitalism.
CGP grey put this well
"Humans need not apply" - https://www.youtube.com/watch?v=7Pq-S557XQU