
> The faster we build the future, the better.

Why? Getting to "the future" isn't a goal in and of itself. It's just a different state with a different set of problems, some of which we've proven that we're not prepared to anticipate or respond to before they cause serious harm.




When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition, especially when the costs of doing it are so low that anyone with sufficient GPU power and knowledge of the latest research can get pretty close to the cutting edge. So the best we can hope for is that someone ethical is the first to advance that technological progress.

I hope you wouldn't advocate for requiring a license to buy more than one GPU, or to publish or read papers about mathematical concepts. Do you want the equivalent of nuclear arms control for AI? Some other words to describe that are overclassification, export control and censorship.

We've been down this road with crypto, encryption, clipper chips, etc. There is only one non-authoritarian answer to the debate: Software wants to be free.


We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

In general the liberal position of progress = good is wrong in many cases, and I'll be thankful to see AI get neutered. If anything treat it like nuclear arms and have the world come up with heavy regulation.

Not even touching the fact it is quite literal copyright laundering and a massive wealth transfer to the top (two things we pass laws protecting against often), but the danger it poses to society is worth a blanket ban. The upsides aren't there.


That's right. It is not hard to imagine similarly disastrous GPT/AI "plug-ins" with access to purchasing, manufacturing, robotics, bioengineering, genetic manipulation resources, etc. The only way forward for humanity is self-restraint through regulation. Which of course gives no guarantee that the cat won't be let out of the bag anyway (edit: or that earlier events such as nuclear war or climate catastrophe won't kill us off sooner).


Why not regulate the genetic manipulation and bioengineering? It seems almost irrelevant whether it's an AI who's doing the work, since the physical risks would generally exist regardless of whether a human or AI is conducting the research. And in fact, in some contexts, you could even make the argument that it's safer in the hands of an AI (e.g., I'd rather Gain of Function research be performed by robotic AI on an asteroid rather than in a lab in Wuhan run by employees who are vulnerable to human error).


We can't regulate specific things fast enough. It takes years of political infighting (this is intentional! government and democracy are supposed to move slowly so as to break things slowly) to get even partial regulation. Meanwhile every day brings another AI feature that could irreversibly bring about the end of humanity or society or democracy or ...


The cat is already out of the bag; regulation will do nothing to even slow down the inevitable pan-genocidal AI, _if_ such a thing can be created.


It's obviously false. Nuclear weapon proliferation has been largely prevented, for example. Many dangerous pathogens and lots of other things are not available to the public.

Asserting inevitability is an old rhetorical technique; its purposes are obvious. What I wonder is: why are you using it? It serves the people who want this power and have something to gain, the people who control it. Why are you fighting their battle for them?


Nuclear materials have fundamental material chokepoints that make them far easier to control.

- Most countries have little to no uranium deposits and so have to be able to find a uranium-producing ally willing to play ball.

- Production of enriched fuel and R&D are both outrageously expensive, generally limiting them to state actors.

- Enrichment has massive energy requirements and demands huge facilities, tipping off observers to what you're doing.

Despite all this, and despite decades of strong international anti-proliferation agreements, India, Pakistan, South Africa, Israel, and North Korea have all developed nuclear weapons in defiance of the UN and international law.

In comparison, the only real bottleneck to the proliferation of AI is computing power - but the cost of running an LLM is a pittance compared to a nuclear weapons program. OpenAI has raised something like $11 billion in funding. A single proposed new US Department of Energy uranium enrichment plant is estimated to cost $10 billion just to build.

I don't believe proliferation is inevitable, but it's very possible that the genie is out of the bottle. You would have to convince the entire world that the risks are large enough to warrant putting on the brakes, and the dangers of AI are much harder to explain than the dangers of nuclear weapons. And if rival countries cannot agree on regulation, then we're just going to see a new arms race.


You can’t make a nuclear weapon with an internet connection and a GPU. Rather than imply some secondary motive on my part, put a modicum of critical thinking into what makes a nuke different than an ML model.


I'd rather try and fail than give up without a fight. I'm many things but I'm not a coward.


Best of luck!


We already do; China jailed somebody for unethically gene-editing babies for HIV resistance.

We can walk and chew gum at the same time, and regulate two things.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

Only because we know the risks and issues with them.

OP is talking about furthering technology, which is quite literally "discovering new things"; regulations on furthering technology (outside of literal nuclear weapons) would have to be along the lines of "you must submit your idea for approval to the US government before using it in a non-academic context if it could be interpreted as industry-changing or inventive", which means anyone with ideas will just move to a country that doesn't hinder its own technological progress.


Human review boards and restrictions on various dangerous biological research exist explicitly to limit damage from furthering lines of research which might be dangerous.


Those seem to be explicitly for actual research papers and whatnot, and are largely voluntary; it’s not mandated by the government.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

ha, the big difference is that this whole list can actually affect the ultra wealthy. AI has the power to make them entirely untouchable one day, so good luck seeing any kind of regulation happen here.


I do not think the reason for nuclear weapons treaties is that they can blow up "the ultra wealthy". Is that why the USSR signed them?


you can replace ultra wealthy with powerful. same point stands. the only things that become regulated heavily are things that can affect the people who live at the top, whether it's the obscenely rich or the despots in various countries.


So everyone should have a hydrogen bomb at the lowest price the market can provide, that's your actual opinion?


i dont know what the hell you're talking about


"We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better."

As technology advances, such prohibitions are going to become less and less effective.

Tech is constantly getting smaller, cheaper and easier for a random person or group of people to acquire, no matter what the laws say.

Add in the nearly infinite profit and power motive to get hold of strong AI and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

Make laws against it in one place, your competitor in another part of the world without such laws or their effective enforcement will dominate you before long.


> Add in the nearly infinite profit and power motive to get hold of strong AI and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

I wouldn't say that this is an additional reason.

I would say that this is the primary reason that overrides the reasonable concerns that people have for AI. We are human after all.


It's a baseless assertion, often repeated. Repetition isn't evidence. Is there any evidence?

There's lots of evidence of our ability to control the development, use and proliferation of technology.


Have laws stopped music piracy? Have laws stopped copyright infringement?

Both have happened at a rampant pace once the technology to easily copy music and copyrighted content became easily available and virtually free.

The same is likely to happen with every technology that becomes cheap enough to make and easy enough to use -- which is the direction technology as a whole is trending.

Laws against technology manufacture/use are only effective while the barrier to entry remains high.


> Have laws stopped music piracy? Have laws stopped copyright infringement?

They have a large effect. But regardless, I don't see the point. Evidence that X doesn't always do Y isn't evidence that X is ineffective at doing Y. Seatbelts don't always save your life, but they are not ineffective.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

All those examples put us in physical danger to the point of death.


Other siblings have good replies, but also, we regulate without physical danger all the damn time.

See airlines, traffic control, medical equipment, government services - but we also regulate ads, TV, financial services, crypto. I mean, we regulate so many "tech" things for the benefit of society that this is a losing argument to take. There's plenty of room to argue the particulars elsewhere, but the idea that we don't regulate tech unless it's immediately a physical danger is crazy. Even global warming is a huge one, down to housing codes and cars etc. It's a potential physical danger hundreds of years out, and we're freaking out about it. Yet AI has the chance to do much more damage within a much shorter time frame.

We also just regulate soft social stability things all over, be it nudity, noise, etc.


Let me recalibrate. I'm not arguing that technology or AI or things that don't cause death should not be regulated, but I can see how that might be the inference.

I just think that comparing AI to nuclear weapons seems like hyperbole.


Why is it hyperbole? Nuclear weapons and AI both have the capacity to end the world.


Private citizens and companies do not have access to nuclear weapon technology and even the countries who do are being watched like hawks.

If equally or similarly dangerous, are you then saying AI technology should be taken out of the hands of companies and private citizens?


For the sake of argument, let's say yes, AI should be taken out of the hands of the private sector entirely.


AI is now poised to make bureaucratic decisions. Bureaucracy puts people in physical danger every day. I've had medical treatments a doctor said I need denied by my insurance, for example.


For somebody from another country this sounds insane...


Risks of physical danger evolve all the time. It's not a big leap from "AI generated this script" to "a fatal bug is nefariously hidden in the AI-generated library in use by mission-critical services" (e.g. cars, medical devices, missiles, fertilizers).


how do you regulate something that many people can already run on their home gpu? how much software has ever been successfully banned from distribution after release?


They do like to try :(


> massive wealth transfer to the top (thing we pass laws protecting against often)

If only.


The Roman Empire did that for hundreds of years! They had an economic standard that wasn't surpassed until ~1650s Europe, so why didn't they have an industrial revolution? Because the elites were very much against technological developments that reduced labor costs or ruined professions; they thought such developments would destabilize their power.

There's a story told by Pliny in the 1st century. An inventor came up with shatter-proof glass; he was very proud, and the emperor called him up to see it. They hit it with a hammer and it didn't break! The inventor expected huge rewards - and then the emperor had him beheaded, because the invention would disrupt the Roman glass industry and possibly devalue metals. This story is probably apocryphal, but it shows Roman values very well - it was told as an example of what a wise emperor Tiberius was! See https://en.wikipedia.org/wiki/Flexible_glass


> When in human history have we ever intentionally not furthered technological progress?

chemical and biological weapons / human cloning / export restriction / trade embargoes / nuclear rockets / phage therapy / personal nuclear power

I mean.. the list goes on forever, but my point is that humanity pretty routinely reduces research efforts in specific areas.


I don’t think any of your examples are applicable here. Work has never stopped in chemical/bio warfare. CRISPR. Restrictions and embargoes are not technologies. Nuclear rockets are an engineering constraint and a lack of market if anything. Not sure why you mention phage therapy, it’s accelerating. Personal nuclear power is a safety hazard.


Sometimes restrictions are the best way to accelerate tech progress. How much would we learn if we gave everyone nukes to tinker with? Probably something. Is that worth the odds that we might destroy the world in the process and set back all our progress? No. We do the same with bioweapons, we do the same with patents and trademarks, and with laws preventing theft and murder.

If unfettered access to AI has good odds to just kill us all, we'd want to restrict it. You'd agree I'm sure, except your position is implicitly that AI isn't as dangerous as some others make it out to be. That's where you are disagreeing.


I wonder how these CEOs view the world. They are pushing a product which is gonna kill every single tech derivative in its own industry. Microsoft, Google, AWS, Vercel, Replit - they all feed back from selling the products their devs design to other devs or companies. They will be popping the bubble.

Now, if 80-90% of devs and startups are gonna be wiped in this context, the same applies to those in the middle: accountants, data analysts, business analysts, lawyers. Now they can eat the entire cake without sharing it with the human beings who contributed over the years.

I can see the regulations coming if the layoffs start happening fast enough and household incomes start to deteriorate. Why? Probably because this time it's gonna impact every single human being you know, and it is better to keep people employed and with a purpose in life than to have to tax the shit out of these companies in order to give back a margin of profit that had some mechanism of incentives and effort behind it in the first place.


> If 80-90% of devs and startups are gonna be wiped in this context

This is not a very charitable assessment of the adaptability of devs and startups, nevermind that of humans in general. We've been adapting to technological change for centuries. What reason do you have to believe this time will be any different?


Humans can adapt just fine. Capitalism, however, cannot. What do you think happens if AI keeps improving at this speed and within a few years millions to tens of millions of people are out of a job?


> When in human history have we ever intentionally not furthered technological progress?

Oh, a number of times. Medicine is the biggest field - human trials have to follow ethics these days:

- the times of Mengele-style "experiments" on inmates or the infamous Tuskegee syphilis study are long past

- we've been able to clone sheep for, what, two decades now, but IIRC we haven't even begun with chimpanzees, much less humans

- same for gene editing (especially in germlines), which is barely beginning in humans despite being common standard for lab rats and mice. Anything impacting the germ line... I'm not sure this will become anywhere close to acceptable in my lifetime.

- pre-implantation, genetics-based discarding of embryos is still widely (and for good reason...) seen as unethical

Another big area is, ironically given that militaries usually want ever deadlier toys, the military:

- a lot of European armies, and from the Cold War era on mostly Russia and America, developed a shit-ton of biological and chemical weapons of war. Development of these has slowed to a crawl and so has usage, at least until Assad dropped that shit on his own population in Syria; Russia also occasionally likes to murder dissidents.

- nuclear weapons have rarely been tested for decades now, with the exception of North Korea's, despite there being obvious potential for improvement or civilian use (e.g. in putting out oil well fires)

Humanity, at least sometimes, seems to be able to keep itself in check, but only when the potential for suffering is just too extreme.


> Software wants to be free.

I feel like I'm in a time warp and we're back in 1993 or so on /. Software doesn't want anything, and the people who claim that technological progress is always good dream themselves to be the beneficiaries of that progress regardless of the effects on others, even if those are negative.

As for the intentional limits on technological progress: there are so many examples of this that I wonder why you would claim that we haven't done that in the past.


I was one year old in 1993, so I'll defer to you on the meaning of this expression [0], but it sounds like you were on the opposite side of its ideological argument. How did that work out for you? Thirty years later, I'm not sure it's a position I'd want to brag about taking, considering the tremendous success and net positive impact of the Internet (despite its many flaws). Although, based on this Wikipedia article, I can see how it's a sort of Rorschach test that naive libertarians and optimistic statists could each interpret favorably according to their own bias.

[0] https://en.wikipedia.org/wiki/Information_wants_to_be_free


You're making a lot of assumptions.

You're also kind of insulting without having any grounds whatsoever to do so.

I suggest you read the guidelines for a bit.


Eh? I wasn't trying to be, and I was genuinely curious to read your reply to this. Oh well, sorry about that I guess.


Your comment is a complete strawman and you then attach all kinds of attributes to me that do not apply.


It sounded like you were arguing against "software wants to be free," or at least that you were exasperated with the argument, so I was wondering how you reconciled that with the fact that the Internet appears to have been a resounding success, and those advocating "software wants to be free" turned out to be mostly correct.


> When in human history have we ever intentionally not furthered technological progress?

Every time an IRB, ERB, IEC, or REB says no. Do you want an exact date and time? I'm sure it happens multiple times a day even.


> Do you want an exact date and time? I'm sure it happens multiple times a day even.

You should read "when in human history" on larger time scales than minutes, hours, and days. Furthermore, you should read it not as a binary (no progress or all progress), but as the general arc of technological progression.


What are you talking about? IRBs have been around for 50 years. So for 50 years of history we have been consciously not pursuing certain knowledge because of ethics.

It would really help for you to just say what timescale you're setting as your standard. I'm getting real, "My cutoff is actually 51 years"-energy.

Just accept that we have, as a society, decided not to pursue some knowledge because of the ethics. It's pretty simple.


Some cultures, like the Amish, said "we're stopping here."


The Amish are dependent on a technological powerhouse that is the US to survive.

They are pacifists themselves, but they are grateful that the US allows them their way of life; they'd be extinct a long time ago if they arrived in China/Middle East/Russia etc.

That's why the Amish are not interested in advertising their techno-primitivism. It works incredibly well for them: they raise giant happy families isolated from drugs, family breakdown, and every other modern ill, while benefiting from modern medicine and the purchasing power of their non-Amish customers. However, they know that making the entire US live like them would be quite a disaster.

Note the Amish are not immune from economically forced changes either. Young Amish don't farm anymore; if every family quadruples in population, there isn't 4x the land to go around. So they go into construction (employers love a bunch of strong, non-drugged, non-criminal workers), which is again intensely dependent on the outside economy, but pays way better.

As a general society, the US is not allowed to slow down technological development. If not for the US, Ukraine would have already been overrun, and European peace shattered. If not for the US, the war in Taiwan would have already ended, and Japan/Australia/South Korea would all be under Chinese thrall. There are also other, more certain civilization-ending events on the horizon, like resource exhaustion and climate change. AI's threats are way easier to manage than coordinating 7 billion people to selflessly sacrifice.


>they'd be extinct a long time ago if they arrived in China/Middle East/Russia etc.

There is actually a group similar to the Amish in Russia, it's called the Old Believers. They formed after a schism within the Orthodox church and fled persecution to Siberia. Unlike the Amish, many of the Old Believers aren't really integrated with the modern world as they still live where their ancestors settled in. So groups that refuse to technologically progress do exist, and can do so even under persecution and changing economic regimes.


That's a good point and an interesting example, but it's also irrelevant to the question of human history, unless you want to somehow impose a monoculture on the entire population of planet Earth, which seems difficult to achieve without some sort of unitary authoritarian world government.


> unless you want to somehow impose a monoculture on the entire population of planet Earth

Impose? No. Monoculture? No. Encourage greater consideration, yes. And we do that by being open about why we might choose to not do something, and also by being ready for other people that we cannot control who make a different choice.


Does human history apply to true Scotsmen as well?


Apparently the Amish aren't human.


While the Amish are most certainly human, their existence rests on the fact that they happen to be surrounded by the mean old United States. Any moderate historical predator would otherwise make short work of them; they're a fundamentally uncompetitive civilization.

This goes for all utopian model communities, Kibbutzim, etc, they exist by virtue of their host society's protection. And as such the OP is right that they have no impact on the course of history, because they have no autonomy.


I have been saying that we will all be Amish eventually, as we are forced to decide what technologies to allow into our communities. Communities which don't will go away (e.g., VR porn and sex dolls will further decrease birth rates; religions/communities that forbid them will be more fertile).


That's not required. The Amish have about a 10% defection rate. Their community deliberately allows young people to experience the outside world when they reach adulthood, and choose to return or to leave permanently.

This has two effects. 1. People who stay, actually want to stay - massively improving the stability of the community. 2. The outside communities receive a fresh infusion of population that's already well integrated into the society, rather than refugees coming from 10,000 miles away.

Essentially, rural America will eventually be different shades of Amish (in about 100 years). The Amish population will overflow from the farms and flow into the cities, replenishing the population of the more productive cities (which are not population-self-sustaining).

This is a sustainable arrangement, and it eliminates the need for mass immigration and demographic destabilisation. This is also in line with historical patterns; cities have always had negative natural population growth (disease/higher real estate costs). Cities basically grind population into money, so they need rural areas to replenish the population.


"People who stay, actually want to stay."

That depends on how you define "want".

Amish who leave are ostracized by their family and community. That's some massive coercion right there: either stay, or lose your connection to the people you're closest to and everything you've ever known and been raised to believe your whole life.

Not much of a choice, though some exceptionally independent people do manage to make that sacrifice.


> This is also in line with historical patterns; cities have always had negative natural population growth (disease/higher real estate costs).

I had not heard this before. Do you have citations for this?

(I realize cities have lower birth rate than rural areas in many cases. I am interested in the assertion that they are negative. Has it always been so? Or have cities and rural areas declined at same rate?)


I think synthetic wombs/cloning would counter the fertility decline among more advanced civilizations.


Birth is not the limiter, childrearing is. Synthetic wombs are more expensive than just having surrogate mothers. For the same reason that synthetic food is more expensive than bread and cabbage.

The actual counter to fertility decline may be AI teachers. AI will radically close the education gap between rich and poor, and lower the costs. All you need is a physical human to supervise the kid; the AI will do the rest, from entertainment, to education, to identifying when the child is hungry, sleepy, or needs the potty, and relaying that info for the human to act on.


This is what ought to happen. The question is what will happen?


Sine qua non ad astra


Everybody decides what technologies to use all the time. Condoms exist already, but not everybody uses them always.


It does not take perfect compliance to result in drastically different birth rates in different cultures/communities.


> When in human history have we ever intentionally not furthered technological progress?

Nuclear weapons?


You get diminishing returns as they get larger though. And there has certainly been plenty of work done on delivery systems, which could be considered progress in the field.


Japan banned guns until about 1800; they'd had them since 16xx something. The truth is we cannot even ban technology. It does not work. Humanity as a whole does not exist. Political coherence as a whole does not exist. Wave aside the fig leaf that is the UN and you can see the anarchic tribal squabble of the species' tribes.

And even those tribes are not crisis-stable. Bad times, and it all becomes an anarchic mess. And that is where we are headed: a future where a chaotic humanity falls apart with a multi-crisis around it, while still wielding the tools of a pre-crisis era. Nuclear power plants and nukes. AI drones wielded by ISIS.

What happens when an unstoppable force (exponential progress) hits an unmovable object (humanity's retardations)? Stay along for the ride.

<Choir of engineers appears to sing dangerous technologies' praises>


I look around me and see a wealthy society that has said no to a lot of technological progress - but not all. These are people who work together as a community to build and develop their society. They look at technology and ask if it will be beneficial to the community and help preserve it - not fragment it.

I am currently on the outskirts of Amish country.

BTW when they come together to raise a barn it is called a frolic. I think we can learn a thing or two from them. And they certainly illustrate that alternatives are possible.


I get that, and I agree there is a lot to admire in such a culture, but how is it mutually exclusive with allowing progress in the rest of society? If you want to drop out and join the Amish, that's your prerogative. And in fact, the optimistic viewpoint of AGI is that it will make it even easier for you to do that, because there will be less work required from humans to sustain the minimum viable society, so in this (admittedly, possibly naive utopia) you'll only need to work insofar as you want to. I generally subscribe to this optimistic take, and I think instead of pushing for erecting barriers to progress in AI research, we should be pushing for increased safety nets in the form of systems like Basic Income for the people who might lose their jobs (which, if they had a choice, they probably wouldn't want to work anyway!)


Technological progress and societal progress are two different things. Developing lethal chemical weapons is not societal progress. Developing polarizing social media algorithms is not societal progress. If we poured $500B and years of the brightest minds into studying theoretical physics and developed a simple formula that anyone can follow for mixing ketchup and whiskey in such a way that it causes the atoms of all organic life in the solar system to disintegrate into subatomic particles, it would be a tremendous and unmatched technological achievement, but it would very much not be societal progress.

The pessimistic view of AGI deems spontaneous disintegration into beta particles a less dramatic event than the advent of AGI. When you're climbing through a dark, uncharted cave, you take the pessimistic attitude when pondering whether the next step will hold your weight, because if you take the optimistic attitude you will surely die.

This is much more dangerous than caves. We have mapped many caves. We have never mapped an AGI.


>Software wants to be free.

And here I always thought, people want to be free.



How about when Sidewalk Labs tried to buy several acres of downtown Toronto to "build a city from the internet up", and local resident groups said "fuck you, find another guinea pig"?


This is the reality:

> When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition ...


>> When in human history have we ever intentionally not furthered technological progress?

We almost did with genetically engineering humans. Almost.


automation mostly and directly benefits owners/investors, not workers or common folk. you can look at productivity vs wage growth to see it plainly. productivity has risen sharply since the industrial revolution with only comparatively meagre gains in wages. and the gap between the two is widening.


That's weird, I didn't have to lug buckets of water from the well today, nor did I need to feed my horses or stock up on whale oil and parchment so I could write a letter after the sun went down.


some things got better. did you notice i talked about a gap, not an absolute? so you are just saying you are satisfied with what you got out of the deal. well, ok - some call that being a sucker. or you think that owner-investors are the only way workers can organize to get things done for society, rather than the work itself.


Among other things that's because we measure productivity by counting modern computers as 10000000000 1970s computers. Automation increases employment and is almost universally good for workers.


No it’s not


The Luddites during the Industrial Revolution in England.

They gave us the term "the Luddite fallacy": the thinking that innovation would have lasting harmful effects on employment.

https://en.wikipedia.org/wiki/Luddite


But the Luddites didn't… care about that? Like, at all? It wasn't employment they wanted, but wealth: the Industrial Revolution took people with a comfortable and sustainable lifestyle and place in society, and, through the power of smog and metal, turned them into disposable arms of the Machine, extracting the wealth generated thereby and giving it only to a scant few, who became rich enough to practically upend the existing class system.

The Luddites opposed injustice, not machines. They were “totally fine with machines”.

You might like Writings of the Luddites, edited and co-authored by Kevin Binfield.


Well, it clearly had harmful effects on the jobs of the Luddites, but yeah, I guess everyone will just get jobs as prompt engineers and AI specialists, problem solved. Funny though: the point of automation should be to reduce work, but when pressed, positivists respond that the work will never end. So what's the point?


Automation does reduce the workload. But the quiet part is that reducing work means jobless people. It has happened before and it will happen again soon. Only this time it will affect white-collar jobs.

"My idea of a perfect company is one guy who sits in a small room at a desk, and the only thing he's allowed to decide is what product to launch"

CEOs and board members salivate at the idea of them being the only people that get the profits from their company.

What will be of the rest of us who don't have access to capital? They only know that it's not their problem.


I don't think that will be the future. Maybe in the first year(s), but then it is a race to the bottom:

If it is that simple to create products, more people can do it => cheaper products.

A market driven by cheaper products, where anyone can also produce them easily, goes into a price-reduction loop until it reaches zero.

Thus I think something else will happen with AI, because what I described and what you describe would destroy the flow of capital, which is the base of the economy.

Not sure what will happen. My bet (unfortunately) is on a really big mega corp that produces an AI that we all use.


It IS a race-to-the-bottom.

Products will be cheaper because they will be cheaper to produce thanks to automation. But fewer jobs would mean fewer people to buy stuff, if it weren't for our credit-based society.

But I'm talking out of my ass. I don't even know if there are fewer jobs than before. Everything seems to point to there being more jobs now than 50 years ago.

I'm just saying I feel like the telephone operators. They got replaced by a machine and who knows if they found other jobs.


It has not happened before and it will not happen again soon. Automation increases employment. Bad monetary policy and recessions decrease it.

Shareholders get the profits from corporations, not "CEOs and the board". Workers get wages. Nevertheless, US unemployment is very low right now and relatively low-paid workers are making more than they did in 2019.


That works until it don't.


Maybe not. Although I think "the future" here implies progress and productivity gains. Increasing GDP has a very well-established cause-and-effect relationship with making life on Earth better: less poverty, less crime, more happiness, longer life expectancy - the list goes on. Now sure, not all externalities are always accounted for (especially climate and environmental factors), but I think even accounting for these, the future of humanity is a better one where technology progresses faster.


That is exactly the goal, if you're an accelerationist


I was unfamiliar with that term until you shared it. Thanks.

https://en.wikipedia.org/wiki/Accelerationism


The nice thing about setting the future as a goal is you achieve it regardless of anything you do.


The faster we build the future, the sooner we hit our KPIs, receive bonuses, go public on NASDAQ and cash our options.


The faster you build the future, the higher your KPI targets will be next quarter.


Because a conservative notion in an unstable, moving situation kills you? There's no sitting out the whole affair in a hut when the situation is a mountain slide.

Which also makes a hostile AI a futile scenario. The worst an AI has to do to take out the species is lean back and do nothing. We are well on our way out by ourselves already...


Thank you. Well said.


Definitionally, if we're in the future, we have more tools to solve the problems that exist.


This is not true. Financial, social, physical and legal barriers can be put up while knowledge and experience fade and get lost.

We gain new tools, but at the same time we lose old ones.


> Why? Getting to "the future" isn't a goal in and of itself.

Having an edge or being ahead is, so anticipating and building the future is an advantage amongst humans, but it also moves civilization forward.


> Why?

Because it's the natural evolution. It has to be. It is written.


"We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings." -- Ursula K Le Guin


> Any human power can be resisted and changed by human beings

Competition, ambition?

(I love Le Guin's work, FWIW)


Now where did I put that eraser...



