
In my experience as a tech guy who got into fashion and then, after several years, went back to not caring: sneakers are the product category with the least differentiation between the high end (especially designer, but also not-designer-but-still-expensive brands like Common Projects) and the low end, both in terms of aesthetics and quality/durability. You're paying $300 more for a 10% better product. With jeans, outerwear, knits, and boots, you can more easily justify that cost.

As a tech guy who recently found an interest in design and ancillary fields, I am curious to know more. I assume leather, merino wool, and cashmere do provide extra value, but beyond that I have no knowledge. E.g., why would $500 pants be better?

Material and cut/design.

Material is not just about quality, but rarity or uniqueness. For example, Japanese denim can get very expensive in part because it's produced in very low volume. For dress pants, it might be a particularly interesting fabric.

A lot of more expensive pants also have interesting designs or proportions that are very unique or hard to find elsewhere. There is a lot of cool stuff you can get for under $500 USD though, that is still pretty expensive.

Some examples around that price range:

- https://stoffa.co/collections/trousers/products/lavender-woo...

- https://www.lemaire.fr/products/twisted-belted-pants-bl760-d...

- https://www.blueowl.us/collections/pure-blue-japan/products/...


I have 2 pairs of pants that cost over $500. Both of them use technical fabrics (Schoeller Dryskin and Stotz EtaProof), have complex patterns (asymmetrical, articulated, etc.), lots of hardware (Riri zippers, magnetic pocket closures, Cobrax snaps), and can be ordered in custom sizing. They also have no text / logos anywhere on the pants. One pair is garment dyed as a complete unit after sewing to give a unique effect that's more interesting and has more "depth" compared to a flat, consistent color.

I don’t really disagree, but is it meaningfully different from having a conversation while driving?

Do you look the person you're talking to in the eyes and press your fingers on their face while talking to them?

This is who you're sharing the road with, you probably pass by dozens of them every day:

https://youtube.com/watch?v=_w8Fll6a0hM

Thousands of people get maimed or killed every year by fucktards who can't wait 10 minutes for their fucking dopamine rush.


It's useful that the person in the passenger seat also has a stake in not crashing

And is observing the same environment as the driver, likely knowing when to shut up and let the driver concentrate.

I kind of get it, but at the same time... isn't "we made a machine to do something that people used to do" basically the entire history of technology? It feels like somehow we should have figured out how to cope with the "but what about the old jobs" problem.


That is the point of Luddism! The original Luddite movement was not ipso facto opposed to progress, but rather to the societal harm caused by society-scale economic obsolescence. The entire history of technology is also one of powerful business interests smearing this movement as intrinsically anti-progress, rather than directly addressing these concerns…


I think we should be careful attributing too much idealism to it. The Luddites were not a unified movement and people had much more urgent concerns than thinking about technological progress from a sociocentric perspective. Considering the time period with the Napoleonic Wars as backdrop I don't think anyone can blame them for simply being angry and wanting to smash the machines that made them lose their job.


And an important note: history is written by the victors. Additionally, just like how today some people have a caricatured understanding of the “other” side (whatever that might be), understanding what Luddites thoughts and motivations were through the lens of their victor opponents will inevitably create a biased, harsh picture of them.


>wanting to smash the machines that made them lose their job.

Wondering how long before people start setting datacenters on fire.


Hey maybe the problem isn’t the means of production (the data centers), but the mode of production.. capitalism.


And how well those attempts fare. Data centers aren't exactly fortified, but they have a lot of focus on access control and redundancy, and usually have good fire suppression (except OVH apparently).

Maybe ChatGPT has some ideas on how to best attack data centers /s


Yesterday AWS had a little oopsie that brought down half of western businesses. I don't have confidence in that redundancy.


I find it hard to locate my sympathy button for people who smash and burn things built up by other people.


The act of destruction is not inherently evil, it is a matter of what it targets. You can burn down the Library of Alexandria or you can bust open a concentration camp. (These are just some extreme examples, some datacenter isn’t morally equal to either).


Datacenters aren't built by people, they're built by corporations.


Corporations which are entirely made up of people. Not to mention the people that physically built and maintain the data center.

Or did the actual legal fiction of a corporation do it? Maybe the articles of incorporation documents got up and did the work themselves?


It means that no one cares about the creations except in terms of money. If an Oracle building burns down and no one is hurt, I wouldn't shed a single tear. If an artistic graffiti mural adorned its wall, I would be more upset.


I get what you mean, but my point is even that Oracle building was designed, built, and maintained by the work of real people. Many of which I assume take pride in their work and may in fact care if it’s burned down.


But why should they? An Oracle data centre is built for one purpose, and one purpose only - to increase the wealth and power of Larry Ellison. Is furthering that goal really something to be proud of?

As a wiser man than me once said, do not anthropomorphise the lawnmower.


Exactly, the Luddites weren't especially anti-technology. Smashing stocking frames was, for them, a tactic to drive up their wages.

Just as the fallout of the Napoleonic Wars was used as a means of driving down their wages. The only difference is that that tactic didn't get employers executed.

It's always been in the interest of capital to nudge the pitchforks away from their hides and toward the machines, and to try to recharacterize anti-capitalist movements as anti-technology.

In 2010 I remember a particularly stupid example where Forbes declared anti-Uber protesters were "anti-smartphone".

Sadly, most people don't seem to be smart enough to not fall for this.


I think the concern in this case is that, unlike before where machines were built for other people to use, we’re now building machines that may be able to use themselves.


Not that much of a difference tbh. If one traditional machine allows one worker to do the work of twenty in half the time, that's still a big net loss in those jobs, even if it technically creates one.

The real issue is that AI/robotics are machines that can theoretically replace any job -- at a certain point, there's nowhere for people to reskill to. The fact that it's been most disruptive in fields that have always been seen as immune to automation kind of underscores that point.


The concern is the same, people want to be taken care of by society, even if they don't have a job, for whatever reason.


In the old times, this was a "want" because the only people without work were those unqualified or unable to work. In the new times, it will be a "need" because everyone will be unemployed, and no one will be able to work competitively.


Glad to see the Luddites getting a shout out here.

This is a new/recent book about the Luddite movement and its similarities to the direction we are headed due to LLMs:

https://www.littlebrown.com/titles/brian-merchant/blood-in-t...

Enjoyed the book and learned a lot from it!


There’s a difference between something and everything though


Somehow modern Luddite messaging doesn't communicate that clearly either. Instead of "where's my fair share of AI benefits?" we hear "AI is evil, pls don't replace us".


Yes. The workers don't want to be replaced by machines. This is Luddism.


>pls don't replace us

Yeah, how dare they not want to lose their careers.

Losing a bunch of jobs in a short period is terrible. Losing a bunch of careers in a short period is a catastrophe.

Also, this is dishonest - nobody is confused about why people don't like AI replacing/reducing some jobs and forms of art, no matter what words they use to describe their feelings (or how you choose to paraphrase those words).


That’s false. It’s very easy to become confused about the point, when anti-AI folks in general don’t spend their time attacking companies…

What I typically see is:

- Open source programmers attacking other open source programmers, for any of half a dozen reasons. They rarely sound entirely honest.

- Artists attacking hobbyists who like to generate a couple pictures for memes, because it’s cool, or to illustrate stories. None of the hobbyists would have commissioned an artist for this purpose, even if AI didn’t exist.

- Worries about potential human extinction. That’s the one category I sympathise with.

Speaking for myself, I spent years discussing the potential economic drawbacks for once AI became useful. People generally ignored me.

The moment it started happening, they instead started attacking me for having the temerity to use it myself.

Meanwhile I’ve been instructed I need to start using AI at work. Unspoken: Or be fired. And, fair play: Our workload is only increasing, and I happen to know how to get value from the tools… because I spent years playing with them, since well before they had any.

My colleagues who are anti-AI, I suspect, won’t do so well.


They'll replace you too, you know.


Human extinction is not a potential; it's just a matter of time. The conditions for human life on this planet have already been eroded enough that there is no turning back. The human race is sleepwalking into nothingness. It's fine; we had a good run and some great times in between.


I've seen enough anecdotes about business productivity lately to conclude that LLMs are not the solution to workload struggles. You can't lay off people and expect the remainder + LLMs to replace them.


>Losing a bunch of jobs in a short period is terrible. Losing a bunch of careers in a short period is a catastrophe.

'careers' is so ambiguous as to be useless as a metric.

what kind of careers? scamming call centers? heavy petrochem production? drug smuggling? cigarette marketing?

There are plenty of career paths that the world would be better off without, let's be clear about that.


>what kind of careers? scamming call centers? heavy petrochem production? drug smuggling? cigarette marketing?

All careers. All information work, and all physical work.

Yes. It is better for someone to be a criminal than to be unemployed. They will at least have some minimal amount of leverage and power to destroy the system which creates them.

A human soldier or drug dealer or something at least has the ability to consider whether what they are doing is wrong. A robot will be totally obedient and efficient at doing whatever job it's supposed to.

I disagree totally. There are no career paths which would be better off automated. Even if you disagree with what the jobs do, automation would just make them more efficient.


I would love to lose my job if I got 50% of the value it brings the corp that replaced me.


Would we be better off today if the Luddites had prevailed?

No?

Well, what's different this time?

Oh, wait, maybe they did prevail after all. I own my means of production, even though I'm by no means a powerful, filthy-rich capitalist or industrialist. So thanks, Ned -- I guess it all worked out for the best!


The Amish seem to be doing fine — and I don’t know if their way of life is under as much existential risk of upheaval and change as everyone else’s


The Amish approach to technology is completely different from the Luddites, and it doesn't teach us anything about whether we, as a society, should accept or reject a certain technology.

To be more exact, there is no evidence that historical Luddites were ideologically opposed to machine use in the textile industry. The Luddites seem to have been primarily concerned with wages and labor conditions, but used machine-breaking as an effective tactic. But to the extent that Luddites did oppose machines, and in the way we later came to understand the term "Luddite", this opposition was markedly different from the way the Amish oppose technology.

The Luddites who did oppose the use of industrial textile production machines were opposed to other people using these machines as it hurt their own livelihood. If it was up to them, nobody would have been allowed to use these machines. Alternatively, they would be perfectly happy if their livelihood could have been protected in some other manner, because that was their primary goal, but failing that they took action depriving other people from being able to use machines to affect their livelihood.

The Amish, on the other hand, oppose a much wider breadth of technology for purely ideological reasons, but they only oppose their own use of this technology. The key point here is that the Amish live in a world where everybody around them is using the very technologies they shun, and they make no attempt to isolate themselves from this world. The Amish have no qualms about using modern medicines, and although they largely avoid electricity and mechanized transportation, they still make significant use of diesel-engine-based machinery, especially for business purposes, and they generally don't avoid chemical fertilizers or pesticides either.

So if we want to say the Amish are commercially successful and their life is pretty good, we have to keep in mind that they aren't a representation of how our society would look if we had collectively banned all the technologies they've personally avoided. Without mass industrialization, there would be no modern healthcare to eliminate child mortality, and there would be no diesel engines, chemical fertilizers, or pesticides to boost crop yields and push family farm output way past subsistence level.

In the end, the only lesson the Amish teach us is that you can selectively avoid certain kinds of technologies and carve yourself a successful niche in a wider, technologically advanced community.


I was somewhat referencing the technicalities of Luddism vs. the selective rejection of technology that the Amish represent (although arguably they are the closest we have to neo-Luddites; obviously the Luddites' anti-progress-for-all position was too radical a stand, not on ideological grounds, but in its anti-capital stance).

I think the broader point I am trying to push is every critique of these technologies is not necessarily demanding their complete destruction and non-proliferation.

And the lesson of the Amish is that, in capitalist democracy, certain technologies are inevitable once the capital class demands them, and the only alternative to their proliferation and societal impact is complete isolation from the greater culture. That is a tough reality.


I'm sorry, but who do you think, precisely, seems to be doing 'fine' among the Amish?

White cishet men?

I cannot imagine what a hell my life might have been like if I were born into an Amish community, the abuse I would have suffered, the escape I would had to make just to get to a point in my life where I could be me without fear of reprisal.

God, just think about realizing that your choices are: die, conform, or make a complete exodus from your family and friends and everything you’ve ever known?

“The Amish seem to be doing just fine” come on


I was not super precise in my remark, so I think it suffered from being misconstrued as written. My remark was strictly in the context of the parent post's remark on whether the Luddites prevailed.

In the context of Luddite societies or communities of faith, the Amish have been able to persist for roughly three centuries with a Luddite-like way of life as their foundation. In fact, they are not strictly Luddite in the technical sense, but intentional about which technologies are adopted, with a community-focused mindset driving all decisions. This is what I meant by "fine": culture is not always a winner-take-all market. The Amish have persisted, and I don't doubt they will continue to persist; I envision a great eye will be turned to their ways as they remain protected from some of the anti-human technologies we are wrestling with in greater society.

All of this is to say, we have concrete anthropological examples we can study. I do not doubt that in the coming years and decades we will see a surge of neo-Luddite religious movements (and their techno-accelerationist counterparts) that, perhaps three centuries from now, will be looked back upon in the same context as we do the Amish today.

As an aside, if we place pro-technological-development philosophy under the religious umbrella of capitalism, I think your same critiques apply to many of the prior centuries as well, specifically with regard to the primary beneficiaries being cis white men. Additionally, I do not think the racial angle is a fair critique of the Amish, which is a religious ethno-racial group in a similar vein to the Jewish community.


We invent machines to free ourselves from labour, yet we’ve built an economy where freedom from labour means losing your livelihood.


> We invent machines to free ourselves from labour

That's a very romantic view.

The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.


I would hope that people realize that money in itself is merely digits on a computer and that the real power of this stuff belongs to the people since the AI inherited and learned from us.

I know that's a simplification, but we uphold this contract that controls us. The people get to decide how this plays out, and as much as I'm hopeful we arrive at a world that is more like Star Trek, that hope skips over the ugly transition that could succeed or fail to get us there.

But we aren't that far off from a replicator if our AI models become so advanced in an atomic-compute world that they can rearrange atoms into new forms. It seemed like fiction before, but it's within reach of humanity, should we not destroy ourselves.


Our moral and political development severely lags our technological development. I have very little confidence that it will ever catch up. Looking back over the post-WW2 era, we have seen improvements (civil rights, recognition of past injustices, expansion of medical care in many countries) but also serious systemic regressions (failure to take climate change seriously, retreat to parochial revenge-based politics, failure to adequately fund society's needs, capture of politics and law by elites).

My main concern about AI is not any kind of extinction scenario but just the basic fact that we are not prepared to address the likely externalities that result from it because we're just historically terrible at addressing externalities.


Average hours worked is more or less monotonically decreasing since the start of the industrial revolution, so in the long run we are slowly freeing ourselves. But in the short run, people keep working because a) machines usually are complementary to labour (there are still coal miners today, they are just way more productive) and b) even if some jobs are completely eliminated by machines (ice making, for example), that only "solves" that narrow field. The ice farmers can (and did) reenter the labour market and find something else to do.


> Average hours worked is more or less monotonically decreasing since the start of the industrial revolution

Although that is true when comparing the start of the Industrial Revolution and now, people worked fewer hours before the Industrial Revolution [1]. Comparing hours of work per year in England between the 17th century and the 19th century, there was an increase of 80%. Most interestingly, real average weekly wages over the same period slightly decreased, while GDP increased by 50%.

1. https://www.youtube.com/watch?v=hvk_XylEmLo


No, on average people in 1600s England (who were overwhelmingly peasants) worked almost all daylight hours, 6 days a week - perhaps 3000 hours a year. It's simply not possible for the hours worked to have increased a further 80% from that baseline.

Also, most labour was not wage labour in the 17th century, so you need to be careful looking at wages, especially when comparing with the 19th century, since there was a vast expansion of wage labour.
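A quick arithmetic sanity check (treating the ~3000 hours/year figure above as an assumed baseline, not an established fact) shows why a further 80% increase is implausible:

```python
# What would an 80% rise from a ~3000 hours/year baseline imply?
baseline_hours = 3000                  # assumed annual hours for a 1600s English peasant
implied_hours = baseline_hours * 1.8   # the cited 80% increase
hours_per_day = implied_hours / 365    # spread over every single day, no days off

print(implied_hours)             # 5400.0
print(round(hours_per_day, 1))   # 14.8
```

Nearly 15 working hours every calendar day, with no rest days at all, is beyond what a human can physically sustain, which is the core of the objection above.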


Are average hours worked decreasing because we have more abundance and less need to work, or are they decreasing because the distribution of work is changing?

I find it hard to accept your claim because at the start of the industrial revolution there were far fewer women in the formal labor market than there are today.


Well there were also barely any men in the formal labour market. Most people were peasants working their family farm + sharecropping on estates of the landed gentry. But that doesn't mean they weren't working hard - both sexes worked well over 3000 hours per year, to barely scrape by.


No other such economy has ever existed. "He who does not work, neither shall he eat"


Because we invent machines not to free ourselves from labor (inventing machines is a huge amount of labor by itself), but to overcome the greed of the workers.


"We"? A few billionaires do. They won't free themselves from labour, they will "free" you from it. Involuntarily.


If ML is limited to replacing some tasks that humans do, yes it will be much like any past technological innovation.

If we build AGI, we don't have a past comparison for that. Technologies so far have always replaced a subset of what humans currently do, not everything at once.


I love SF, but somehow I don't find it a very good foundation for predicting the future. Especially when people focus on one very narrow theme of SF and claim with certainty that's what's going to happen.


> I love SF, but somehow I don't find it very good foundation for predicting the future.

Nineteen Eighty-Four would like to have a word with you!

Out of all SF, I would probably most want to live in The Culture (Iain M. Banks). In those books, people basically focus on their interests, as all their needs are met by the Minds. The Minds (basically AIs) find humans infinitely fascinating; I assume because they were designed that way.


Nineteen Eighty-Four captures the political zeitgeist of when it was written (~1949) and the following years better than it captures that of 1984. This was the era of the cold war and McCarthyism. Describing the present often seems surprisingly prescient to later generations.


Props on the very appropriate username, drone.


I mean, yes. The invention of AI that replaces virtually all workers would certainly pose a serious challenge to society. But that's nothing compared to what would happen if Jesus descended from the sky and turned off gravity for the entire planet.


Heh, I read SF as San Francisco; the point remains true. Except the Valley wants to force a future, not describe it.


AGI does not replace "everything". It might replace most of the work that someone can do behind a desk, but there are a lot of jobs that involve going out there and working with reality outside of the computer.


AGI as defined these days is typically “can perform at competent human level on all knowledge work tasks” so somewhat tautologically it does threaten to substitute for all these jobs.

It’s a good thing to keep in mind that plumbers are a thing, my personal take is if you automated all the knowledge work then physical/robot automation would swiftly follow for the blue-collar jobs: robots are software-limited right now, and as Baumol’s Cost Disease sets in, physical labor would become more expensive so there would be increased incentive to solve the remaining hardware limitations.
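The Baumol's-cost-disease mechanism mentioned above can be sketched with a toy model (the 1.5x growth rates are illustrative numbers of mine, not anything from the thread): one sector automates and gets more productive, economy-wide wages track that sector, and the unit cost of the labor-bound sector rises even though nothing about it changed.

```python
# Toy Baumol cost-disease sketch: wages follow the automating sector's
# productivity, so the stagnant manual sector's unit cost rises.
wage = 1.0
productivity_auto = 1.0    # output per hour in the automating sector
productivity_manual = 1.0  # flat output per hour in the manual sector

for year in range(5):
    cost_auto = wage / productivity_auto      # stays flat
    cost_manual = wage / productivity_manual  # rises with wages
    print(f"year {year}: automated {cost_auto:.2f}, manual {cost_manual:.2f}")
    productivity_auto *= 1.5  # automation gains
    wage *= 1.5               # wages track the leading sector
```

Under these assumptions the automated sector's unit cost stays at 1.00 while the manual sector's cost grows 1.5x per year; that rising cost is the increased incentive to solve the remaining hardware limitations.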


I don't think that robots are software-limited in all domains... And even if AI became a superhuman software dev, it wouldn't drive the cost of software development to zero.


Humanoid robotics (ie replacing the majority of workers) is highly software-limited right now.

Here’s a napkin-sketch proof: for many decades we have had hardware that is capable of dextrously automating specific tasks (eg car manufacture) but the limitation is the control loop; you have to hire a specialist to write g-code or whatever, it’s difficult to adapt to hardware variance (slop, wear, etc) let alone adjust the task to new requirements.

If you look at the current “robot butler” hardware startups they are working on: 1) making hardware affordable, 2) inventing the required software.

Nothing in my post suggested costs go to zero. In the AGI scenario you assume software costs halve every N years, which means more software is written, and timelines for valuable projects get dramatically compressed.
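The "costs halve, so more software gets written" point is a standard demand-elasticity argument; a toy sketch (the halving period and the 3x demand response are made-up illustrative parameters, not numbers from the thread):

```python
# Toy model: if unit cost halves each period and demand more than doubles in
# response, total spending on software grows even as each project gets cheaper.
unit_cost = 100.0  # cost of one software project, arbitrary units
projects = 10      # projects demanded at that cost

for period in range(5):
    spend = unit_cost * projects
    print(f"period {period}: cost/project {unit_cost:7.2f}, "
          f"projects {projects:5d}, total spend {spend:9.0f}")
    unit_cost /= 2   # "software costs halve every N years"
    projects *= 3    # assumed demand response to cheaper software
```

Whether real demand is that elastic is exactly the open question, but it shows why "cheaper per project" and "more software is written" are compatible claims rather than a contradiction.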


Also, presumably if you have AGI you can have it address a physical problem at a higher level of abstraction. "Design a device to make any water heater installable by a single person in 20 minutes" would result in a complex system that would make a lot of blue collar labor redundant (the last water heater I had installed took 3 guys over an hour to complete).

It would not even necessarily result in a human-like robot - just some device that can move the water heater around and assist with the process of disconnecting the old one and installing the new one.


There will most likely be period where robotics lags AGI, but how long will that really last?

Especially with essentially unlimited AGI robotics engineers to work on the problem?


There are already plenty of supply chain problems in the AI industry, but the supply chain limitations to robotics are even higher. You can't snap your fingers and increase robotics production tenfold or a hundredfold without a lot of supply chain improvements that will take a long time. I'd say anywhere between twenty and fifty years.


"Everything" might be hyperbole, but a huge percentage of the workforce in my country is office/desk based. Included in that percentage is a lot of the middle class and of the stepping-stone jobs people use to get out of manual work.

If AI kills the middle and transitional roles, I anticipate anarchy.


I haven’t heard a good argument why this isn’t the most likely path.


It isn't really general intelligence if it can't compete with, or outcompete, us at a large enough portion of knowledge-work jobs to force us out of the labor market.

Also don't forget that plenty of knowledge work is focused on automating manual labor. If AGI is a thing, it will eventually be used to also outcompete us on physical work too.

People like to point to plumbing as an example of a safe(r) job, and it is. But automating plumbing tasks is difficult mostly because the entire industry is designed around installation by humans. Without that constraint, it would likely be much easier to design plumbing systems, and robots to install and maintain them, more efficiently than today's human-optimized plumbing.


You don't think AGI will be able to figure out a way to give itself a physical form capable of doing all those jobs?


I think about all the actual physical work that goes into building a functioning supply chain, and that it will take a lot more than "figuring out" to manifest such a physical form.


>we should have figured out

You would think! But it's not the type of problem Americans seem to care about. If we could address it collectively, then we wouldn't have these talking-past-each-other clashes where the harmed masses get told they're somehow idiots for caring more about keeping the life and relative happiness they worked to earn for their families than achieving the maximum adoption rate of some new thing that's good for society long term, but only really helps the executives short term. There's a line where disruption becomes misery, and most people in the clear don't appreciate how near the line is to the status quo.


I always compare it to the age of the Industrial Revolution. I have no doubt there were stubborn old people saying: "Why would I need a machine to do what I can do just fine by hand??" Those people quickly found themselves at a disadvantage to those who chose not to fight change, but to embrace it and harness technological leaps to improve their productivity and output.


Most people are not in a position to choose whether to embrace or reject. An individual is generally in a position to be harmed by or helped by the new thing, based on their role and the time they are alive.

Analogies are almost always an excuse to oversimplify. Just defend the thing on its own properties - not the properties of a conceptually similar thing that happened in the past.


Thank you for saying this.


The difference is that in the industrial revolution there was a migration from hard physical labor to cushy information work.

Now that information work is being automated, there will be nothing left!

This "embrace or die" strategy obviously doesn't work on a societal scale, it is an individual strategy.


> in the industrial revolution there was a migration from hard physical labor to cushy information work.

The Industrial Revolution started in the early 1800s. It was a migration from hard physical labor outdoors, around the home, and in small workshops to hard physical labor in factories.


Most people are not doing "information" work. They provide interpersonal services, such as health/aged/childcare or retail/hospitality/leisure.

Techies are angsty because they are the small minority who will be disrupted. But let's not pretend most of the economy is even amenable to this technology.


> Techies are angsty because they are the small minority who will be disrupted. But let's not pretend most of the economy is even amenable to this technology.

Think of all the jobs that do not involve putting your hands on something that is crucial to the delivery of a service (a keyboard, phone, money, etc. don't count). All of those jobs are amenable to this technology. That is probably at least 30% of the economy on a first pass, if not more.


Yes, and thanks to this we're working more and more, because most of the profit goes to the top as inequality rises. At some point it will not be possible to put up with this.


> AI can do anything a human can do - but better, faster and much, much cheaper.

Should be pretty clear that this is a different proposition to the historical trend of 2% GDP growth.

Mass unemployment is pretty hard for society to cope with, and understandably causes a lot of angst.


And that comes down to the moral and social contract we have and the power we give to digital money and who owns it.

We either let the people's creativity and knowledge be controlled and owned by a select few, OR we ensure all people benefit from humanity's creativity and own it, and that the fruits it bears advance all of humanity, with safety nets in place to ensure we are not enslaved by it but elevated to advance it.


History is full of technology doing things that go beyond human possibility as well. Think of microscopes, guns, space shuttles. There has been technology that explicitly replaces human labor but that is not at all the whole story.


You eventually run out of jobs.

Every time we progress with new tech and eliminate jobs, the new jobs are more complicated. Eventually people can't do them because they're not smart enough or precise enough or unique enough.

Each little step, we leave people behind. Usually we don't care much. Sure some people are destined to a life of poverty, but at least most people aren't.

Eventually though even the best of the humans can't keep up, and there's just nothing left.


Every time it happens it's a bit different, and it was a different generation. We will figure it out. It will be fine in the end, even if things aren't fine along the way.

I'm starting to come around to the idea that electricity was the most fundamental force that drove WW1 and WW2. We point to many other more political, social and economic reasonings, but whenever I do a kind of 5-whys on those reasons I keep coming back to electricity.

AI is kind of like electricity.

We're also at the end of a big economic/money cycle (petrodollar, gold standard, off gold standard, maxing out leverage).

The other side will probably involve a new foundation for money. It might involve blockchain, but maybe not, I have no idea.

We don't need post-scarcity so much as we just need to rebalance everything and an upgraded system that maintains that balance for another cycle. I don't know what that system is or needs, but I suspect it will become more clear over the next 10-20 years. While many things will reach abundance (many already have) some won't, and we will need some way to deal with that. Ignoring it won't help.


Replacing dirty, dangerous jobs, and allowing people to upskill and work better jobs is one thing.

Firing educated workers en masse for software that isn’t as good but is cheaper doesn’t have the same benefits to society at large.

What is the goal of replacing humans with robots? More money for the ownership class, or freeing workers from terrible jobs so they can contribute to society in a greater way?


> doesn’t have the same benefits to society at large.

The benefits to society will be larger. Just think about it: when you replace dirty, dangerous jobs, the workers simply have nowhere to go, and they begin to generate losses for society in one form or another. Because initially, they took those dirty, dangerous jobs because they had no choice.

But when you fire educated workers en masse, society not only receives from software all the benefits that it received from those workers; all other fields also start to develop because these educated workers take on other jobs, jobs that have never been filled by educated workers before. Jobs that are understaffed because they are too dirty or too dangerous.

This will be a huge boost even for areas not directly affected by AI.


> these educated workers are taking on other jobs [...] that are understaffed because they are too dirty or too dangerous.

Just so we're clear here, are you personally going to be happy when you're forced to leave your desk to eke out a living doing something dirty and/or dangerous?


Of course not. But I'm also pretty unhappy that Supernaut doesn't send me a third of their salary. But what does this have to do with the question?


I don’t think you are considering the negative consequences.

When you fire massive amounts of educated works to replace them with AI you make a mess of the economy and all those workers are in a worse situation.

Farming got more productive and farmers became factory workers, and then factory workers became office workers.

The people replaced by AI don’t have a similar path.


> make a mess of the economy

You're not taking something into account. The economy is becoming stronger, more productive, and more efficient because of this. The brain drain from all other fields to the few highest-paying ones is decreasing.

> The people replaced by AI don’t have a similar path.

They have a better path: get a real job that will bring real benefit to society. Stop being parasites and start doing productive work. Yes, goods and services don't fall from the sky, and to create them, you have to get your hands dirty.


> to create them, you have to get your hands dirty.

But we're talking about a world where they're building robots to do this kind of work. When AI takes over the white collar office jobs, and robotic automation takes the manual "creating" labor, what'll be left for humans to do?


> what'll be left for humans to do?

There is an infinite amount of labor.


> isn't "we made a machine to do something that people used to do" basically the entire history of technology?

kinda, I guess. but what has everyone on edge these days is humans always used technology to build things. to build civilization and infrastructure so that life was progressing in some way. at least in the US, people stopped building and advancing civilization decades ago. most sewage and transportation infrastructure is from 70+ years ago. decades ago, telecom infrastructure boomed for a bit then abruptly halted. so the "joke" is that technology these days is in no way "for the benefit of all" like it typically was for all human history (with obvious exceptions)


"we made a machine to do everything so nobody does anything" is a lot different though


isn't "we made a machine to do something that people used to do" basically the entire history of technology?

yes, until we reached the art and thinking part. A big part of the problem might be that AI reached that part before it reached the chores.


Hasn't every such technological development been accompanied by opponents of its implementation?

At least now, things aren't so bad, and today's Luddites aren't trashing the offices of AI companies and hanging their employees and executives from nearby poles and trees.


The vast majority of the movement was peaceful. There is one verified instance where a mill owner was killed and it was condemned by leaders of the movement. It was not a violent movement at its core.

Second, the movement was certainly attacked first. It was the mill owners who petitioned the government to use “all force necessary” against the Luddites, and the government, acting on their behalf, killed and maimed people engaged in peaceful demonstrations before anyone associated with the Luddite movement reacted violently. And again, even in the face of that violence, the Luddite movement was at its core nonviolent.


they haven't started... yet

billions of unemployed people aren't going to just sit in poverty and watch as Sam Altman and Elon become multi-trillionaires

(why do you think they are building the bunkers?)


Can you imagine? Ha ha. Wow that would be crazy. Damn. I’m imagining it right now! Honestly it’s hard to stop imagining.


> isn't "we made a machine to do something that people used to do" basically the entire history of technology?

I know, right? Machines have been gradually replacing humans for centuries. Will we actually get to the point where there are not enough jobs left? It doesn't seem like we're currently anywhere close to the point of not having any jobs available.

Has anyone thought about how the Federal Reserve plays a role with this? Automation puts downward pressure on inflation, because it doesn't cost as much to make stuff. The Federal Reserve will heavily incentivize job creation if inflation is low enough and there aren't enough jobs available, right?


We're already here. Most jobs are fake.


It seems like the _quality_ of the jobs (or median job) may have gone down, but the _quantity_ of jobs relative to population has remained roughly steady, right?


I don't mean the quality is bad, it's just that most jobs in the first world seem to be redundant or abstracted from the keys of power.

I think David Graeber wrote a book about it. Here is a guy talking about it:

https://www.youtube.com/watch?v=9lDTdLQnSQo


I feel like technology should exist to enhance the human experience, not eliminate the human experience?


Yes.


Not really, because this time it's not machine to do something that people used to do, but a machine to do anything and everything that people used to do.


Enjoy eating a bowl of pasta ?


> we should have figured out how to cope with the "but what about the old jobs" problem

We did figure that out. The ingenious cope we came up with is to entirely ignore said problem.


Manual labor was replaced with factory labor, factory labor replaced with knowledge work. If knowledge work is replaced with AI, what do we go to then? Not to mention that the efficiency gains of the modern tech industry are not even remotely distributed fairly. The logical extreme of an AI company would be one where the CEO, founder, 100% owner, and sole employee coordinates some underling AIs to run the entire company for him while he collects the entire profit and shares it with no one, because the American government is an oligarchy.


> what do we go to then?

You’ll waste away for a little while in some sort of slum and then eventually you’ll head to the Soylent green factory, but not for a job. After that problem solved!


We’re working on all-purpose human replacements.

Imagine if the tractor made most farm workers unnecessary but when they flocked to the cities to do factory work, the tractor was already sitting there on the assembly line doing that job too.

I don’t doubt we can come up with new jobs, but the list of jobs AGI and robotics will never be able to do is really limited to ones where the value intrinsically comes from the person doing it being a human. It’s a short list tbh.


> I kind of get it, but at the same time...isn't "we made a machine to do something that people used to do" basically the entire history of technology?

this is not about machines. machines are built for a purpose. who is "building" them for what "purpose" ?

if you look at every actual real world human referenced in this website, they all have something in common. which is that they're billionaires.

this is a website about billionaires and their personal agendas.


The idea is that there will be newer jobs that come up.

The issue is that there will be no one earning money except the owners of OpenAI.

Take outsourcing - the issue in developed nations was underemployment and the hollowing out of industrial centers. You went from factory foreman to burger flipper. However, it did uplift millions out of poverty in other nations. So net-net, we employed far more and distributed wealth.

With Automation, we simply employ fewer people, and the benefits accrue to smaller groups.

And above all - these tools were built, essentially by mass plagiarism. They train even now, on the random stuff we write on HN and Reddit.

TLDR: it's not the automation, it's the wealth concentration.


The problem isn't the failure of the mathematicians and engineers who succeeded at the task of automating humanity's mundane tasks in life.

It's that the people failed to elect and wield a government that ensures all humanity benefits from it and not a select few who control it all.

And I think it will become clear that the governments investing in it so their people benefit and have ownership, versus the ones investing in it to benefit just a handful of the rich, are the ones who will keep society stable while this happens.

The other path we are going down is mass unrest, then a move into a police state to control the resistance, like America is doing now, becoming exactly what Peter Thiel, Elon Musk, and Larry Ellison want: AI-driven surveillance and an Orwellian dystopian vision forcing people to comply or be cut out of existence by deactivating their digital IDs.


The article is addressed to journalists, who have not only the necessary skills but also a professional obligation to provide truthful information.


>Rather than becoming defensive, Masad and his team owned the problem. In fact, says Masad, within two days, they rolled out an automatic safety system that separates a user’s “practice” database from their “real” one. The way Masad describes it, it’s a little like having two versions of a website’s filing cabinet — the AI agent can experiment freely in a development database, but the production database, which is the real thing that users interact with, is completely walled off.

I gotta wonder who the median techcrunch reader is if the writer/editor felt it necessary to explain the point of having a staging and prod environment, and with such a pointless analogy. We surely cannot understand what a database is unless we're told it's like a filing cabinet, right?


They are merely quoting the analogy from Masad — for whom it makes sense if they are targeting nontechnical users and not professional developers anymore.


To be fair, this was an indirect quote from a founder trying to make programming accessible to "white-collar employees with no technical background".

The bigger question here is why prod/staging wasn't an obvious design choice in the first place!
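For what it's worth, the separation Masad describes is standard practice: select the database by environment so an agent (or any tooling) pointed at the dev environment can never touch production data. A minimal sketch of the idea — the function and variable names here are illustrative assumptions, not Replit's actual configuration:

```python
import os

def database_url() -> str:
    """Return the database URL for the current environment.

    The agent/dev tooling only ever receives the development URL;
    production requires an explicit APP_ENV=production setting.
    """
    env = os.environ.get("APP_ENV", "development")
    if env == "production":
        # Deliberately raises KeyError if the prod URL is unset,
        # rather than silently falling back to a dev database.
        return os.environ["PROD_DATABASE_URL"]
    return os.environ.get("DEV_DATABASE_URL", "sqlite:///dev.db")
```

The key design choice is that "development" is the default: forgetting to configure anything leaves you safely sandboxed, and reaching production takes a deliberate opt-in.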


Maybe it diverted too many resources from the original goal. They fixed the issue afterwards and made a fuss about it.


Ironically, especially when you combine it with the em-dash, it really sounds like exactly the type of completely pointless and unilluminating analogy that LLMs love to generate. These analogies are essentially a bridge between two concepts, much like how a physical bridge connects two pieces of land separated by water, except in this case the 'water' is understanding and the bridge doesn't actually help you cross it.


Well done


It's kind of a beautiful turn of phrase, in that the filing cabinet is entirely superfluous, you can use almost any noun. "it’s a little like having two versions of a website’s sub sandwich — the AI agent can experiment freely with a development sandwich, but the production sandwich, which is the real thing that users interact with, is completely walled off".

"When you click a button on our website, a request is sent across the internet to our servers, it's a little like if a sockeye salmon was sent across the internet to our servers."


> median techcrunch reader

Is probably a consumer tech enthusiast and not a software developer.


> Masad and his team owned the problem.

He fired half of his team, so it really wasn't an option for the rest of the team.


You don't write for your median reader, you write for the vast majority of your readers.

That's a basic concept of writing. Journalism should be accessible, so even if you know what a database is and how to deploy it in different envs, you shouldn't write assuming that. If a large portion of your readers don't know what you're saying, you've failed as a writer. If your readership includes high school students, you write with that as the baseline.

Richard Feynman certainly didn't write as if he assumed the reader knew particle physics. Be like Richard Feynman.


I'm not sure if any of my coworkers has ever properly used a filing cabinet


Spreadsheet would have been the better analogy.


Which wouldn't be an analogy, because spreadsheet programs can be considered, and often are used as, databases.


Richard Feynman didn’t use poor analogies.


Chuck Norris doesn't even NEED analogies. He explains the original problem so hard that you understand it without reference to a similar but more familiar situation.

Chuck Norris would probably have mentioned "dev" and "production" and never needed to discuss furniture used for stacking open-faced envelopes for holding papers.


Chuck Norris doesn’t use AI, AI uses Chuck Norris.


'Poor' is subjective. Some might even use it to describe your comment.


> Be like Richard Feynman

Oh the things he did to filing cabinets, especially "secure" ones...


If the median has half the readers over it and half under it, wouldn’t writing for most of your readers be very close to writing for the median, if we are aiming for 51% (most readers)? “Most readers” is somewhere between 50% and 100%.

I appreciate the idea, but I think there are always assumptions. For instance, you did not explain what the median is, because this is HN. I like the standards of The Economist: always saying what an acronym is on first usage, and what a company is (Google, a search company). What they don't do is say: Google, like a box where you enter what you want to find and it points you to other boxes. That would be condescending to its readers, I believe. It is a matter of taste, not objective, I guess.


I don’t think “vast majority” has a rigid definition, but I’d put it closer to 95% than 51%.

For example, in the senate passing with 51 votes is a “simple majority”.


This is all highly personal, so just banter'ing, but:

I agree there's no clear definition but 95% is even beyond "overwhelming majority" to me (with overwhelming being greater than vast). I'd call that "near totality".

Maybe, at least for US contexts, "vast" should line up with "filibuster-proof"? Eg 60-65%? 75% at most.

Of course, then that doesn't tell me anything about what it should mean in other contexts.


I think you're unaware of how vast vast is!

Personally, I feel vast is used to refer to things that 'appear limitless' e.g. vast desert, or when describing easily bound things - like percentages - to be almost complete.

Looking around it seems there is some debate on this, but it tends to end up suggesting the higher numbers:

https://en.wiktionary.org/wiki/vast_majority - puts vast as 75-99%

https://news.ycombinator.com/item?id=39222264 - puts vast as greater than 75% (I can't tell if the top comment is a joke or there really is some form of ANSI guidance on this).

But to find a more compelling source I've taken a look at the UK's Office for National Statistic's use of the term. While they don't seem to have guidance in their service manual (https://service-manual.ons.gov.uk/) a quick term limited search of actual ONS publications show:

* https://www.ons.gov.uk/peoplepopulationandcommunity/birthsde...

- "The vast majority (99.1%) of married couples were of the opposite sex"

- "In this bulletin, we cover families living in households, which covers the vast majority of families. " - this is high 90's by a quick google elsewhere.

* https://www.ons.gov.uk/peoplepopulationandcommunity/housing/...

- "The vast majority of households across England and Wales reported that they had central heating in 2021 (98.5%, 24.4 million)."

* https://www.ons.gov.uk/peoplepopulationandcommunity/birthsde...

- "The vast majority (93.0%) lived in care homes."

This seems to put vast in the 90%+ category. There is certainly more analysis that can be done here though, as I have only sampled and haven't looked at the vast majority of publications.

(this was fun, I don't mean to come over as pedantic)


I think your username checks out. :D

Apparently I underestimated vastness.


I am not sure the post said "vast majority" originally, to be fair. Is there a way to check?


Write for your median reader, and the bottom half will stop reading you. Problem solved.


That would be writing for most users but barely. I think there’s a fair reason they said “vast majority” instead.


the


Sounds like a you problem


All of OPs posts in that thread are blatantly Chat GPT output


Because... em-dashes? As many others have mentioned, iOS/macOS auto-insert em-dashes, so they're not really a reliable indicator.


It’s so annoying that we’ve lost a legit and useful typographic convention just because some people think that AI overusing it means that all uses indicate AI.

Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.


I find adding some typos and 1 or 2 bad grammer things lets you get away with whatever you want


> 1 or 2 bad grammer things

1 or 2 bed gamer things


Several things:

1) Em-dashes

2) "It's not X, it's Y" sentence structure

3) Comma-separated list that's exactly 3 items long


>1) Em-dashes

>3) Comma-separated list that's exactly 3 items long

Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.

>2) "It's not X, it's Y" sentence structure

This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").


You sound like a generated message from a corporate reputation AI defense bot


The Catholic Church also helped Nazis escape Germany after WWII: https://en.m.wikipedia.org/wiki/Ratlines_(World_War_II)


That says it was some individuals in a giant organization, not the policy of any subset of it.


>Why don't Americans create 16 pound smartwatches?

Because labor is much more expensive in America. This is not a mystery


Labor is a factor but it helps to have the insane manufacturing synergies they have where almost all of the parts are made down the road from you.


But American companies make their smartwatches in China.


I don't think that's enough to explain it. What's the ratio of labor costs?


$15/hour in the US vs $2/hour in China


That would put an upper bound of 120 pounds for the USA manufactured watch with zero materials cost and all labor.
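Spelling out the arithmetic behind that bound, using the wage figures quoted upthread (rough illustrative numbers, not sourced statistics):

```python
# Hourly wage figures from the comments above (illustrative).
us_wage_per_hour = 15  # dollars
cn_wage_per_hour = 2   # dollars

labor_ratio = us_wage_per_hour / cn_wage_per_hour  # 7.5x more expensive labor

# Worst case: assume the entire 16-pound price is labor, zero materials.
watch_price_gbp = 16
upper_bound_gbp = watch_price_gbp * labor_ratio  # 120 pounds
```

In reality materials and overhead make up part of the price, so the US-made equivalent would land somewhere below that 120-pound ceiling.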


Yeah, highly inflated e-mail job economy does that to you.


Don't worry god emperor Trump will fix that.

No more safety and environmental regulations. Children can work full time. Union bosses get sent to the gulag. Forced labor camps for the homeless and criminals.

