AI won't steal your job, people leveraging AI will (ieee.org)
273 points by kungfudoi on April 3, 2023 | 479 comments


At some level it's tautological. Initially I had the same negative reaction other commenters had, but I think I've decided I like the "people" framing. Obviously this is not an either-or. It's not a question of AI vs people; it's both AI+people. But only people are making the choices. People are building AI, people who can use AI will choose to use it, and people who run businesses will choose to allow it and/or incorporate it. Just like jobs lost to immigration and/or outsourcing, the jobs aren't being stolen, they're being intentionally given to cheaper alternatives, by business people, without much worry about the consequences. It's healthy for us to think about the AI revolution as a social & economic choice, because that's what it really is.


Comparing to outsourcing or, even less fitting, immigration is a terrible analogy. Those jobs and their income go to other humans. Other humans who will consume goods and services; this is especially true for immigration. I'd also argue that offshoring is morally positive because it moves the work and associated income to where people need it more; that's why the labor is cheaper there.

Both automation and global movement of goods and workers increase productivity, which is highly desired. However, automation allows for the owner of the goods of production to retain much more of the value created. This leads to higher wealth concentration. I'm fine with some people being super rich, but I'm worried about the floor. IMO, we need something like UBI to raise the floor. Leaving AI alignment concerns aside, I'd be incredibly excited if the productivity gains from AI were to somehow be used to raise the floor for everyone on the planet. Instead I'm worried that we'll see more unemployment or more low-paid gig/service work on which, in the US, people might struggle to afford basic things like health care, housing and education for their children. Edit: This also carries a high risk of provoking some form of violent revolution, or a push toward an overly extreme solution like communism. After all, communism was born as a response to the automation of the industrial revolution.


I don’t think policing analogies is serving the conversation well.

Someone whose job disappears in a puff of smoke doesn’t have a reason to care about whether the job was automated or whether it was moved overseas—in both cases, the economic activity is still there (or more likely, it’s increased), but the workers who trained for it are no longer getting compensated, and our concerns are for the welfare of these workers. Social consequences and all that.

Manufacturing jobs face both outsourcing and automation. The resulting productivity gains have certainly not been “swallowed”—consumer goods are just so damn cheap and plentiful these days—but the overall impact on local economies is sometimes disastrous. If you take a look at the US manufacturing sector, the output is higher than ever (maybe minus a dent from COVID), and the resulting benefits are spread across the US population (better access to manufactured goods), but the relative power of labor and capital is diverging—and both automation and outsourcing are making that possible. They both allow capital owners to create goods with lower labor costs. That’s the whole point, and the analogy makes a lot of sense.

IMO, a big part of the problem is that capital owners are so effectively able to externalize risk.


>If you take a look at the US manufacturing sector, the output is higher than ever

You are incorrect.

S&P Global US Manufacturing PMI Final March 49.2, down from flash print at 49.3, but up from the 47.3 in Feb - that is the 5th straight month of contraction (sub-50)

ISM Manufacturing March dropped to 46.3, from 47.7, and below 47.5 expectations - that is the 5th straight monthly contraction to the lowest since May 2020


They're talking about outsourcing and offshoring, which is a process that takes place over decades, so you should be looking at statistics to match that. Over decades, it is indeed higher than ever.

https://www.macrotrends.net/countries/USA/united-states/manu...


I think parent is saying we manufacture more today in the U.S. than we did 30 years or so ago with fewer people.


Okay, it’s not a perfect analogy, but from the perspective of a business owner, the choice is to put money into something expensive like local human labor, or something cheaper whether it’s remote labor or automation of some kind. Lots of tools have been accused of stealing jobs over the years, from factory machines to vehicles to computers. In that sense, I’m only borrowing an analogy with a long rich tradition.

The wealth concentration difference between labor and machines is one of the systemic outcomes; it's a consequence that business owners don't necessarily intend to cause, just an emergent property of certain choices, right? This is precisely one of the reasons we need to recognize that this is a choice, because the consequences of the choice will matter.


Who's making the choice though? There's no one person or even one group of people in charge, short of political regulation.

If AI is less expensive than humans, then anyone who doesn't use AI will get outcompeted by those who do. That's the unfortunate reality of markets. Wealth always becomes more and more concentrated until a democratic initiative opposes it.


It’s a good & important question to ask who’s making the choices, and who’s subject to the consequences of the choices. That’s very much one of the reasons I think it’s healthy to view this as a choice. Not necessarily one you and I are making, but it is one being made by someone somewhere, and it will affect us.

In the context of “stealing jobs”, the choice is being made by the people who have jobs to offer, the business owners. In the context of building AI, the choice is in the hands of the researchers and engineers, and the businesses investing in AI. Some of the choice is being made by all of us, via interest in AI, discussion, clicks, Google searching, curiosity.

> If AI is less expensive than humans, then anyone who doesn’t use AI will get outcompeted by those who do. That’s the unfortunate reality of markets.

Generally true, I agree. Though we can and do exercise choices to prevent negative social and economic outcomes. Children, for example, are cheaper to employ than adults, but we don’t legally allow it. We can, similarly, choose whether and what AI we want to allow, if we’re sufficiently motivated, right? The choice in this case is in the hands of the populace and the government.


Why don't we first automate the jobs that need to be done, but nobody wants to do anymore? One example: sorting mixed garbage and filtering out recyclable plastics (of various types), cans, paper etc. would be a great application for AI, wouldn't take away anyone's job (at least not in developed countries), and would be good for the environment.


People do want to do them. That's why they do them.

They simply don't ask for more money than the AI would require yet. Once there's a sufficient margin in it, not exactly at the cut-off but perhaps at 150-200%, they're gone.


> People do want to do them. That's why they do them.

Incredibly naive. Very few people enjoy their jobs. I imagine job satisfaction of anything dealing with garbage is low. The only reason people stick to it is because it pays decently, plus maybe some sort of factor intrinsic to the job like not having to deal with people.

But generally, people do their job because they have to do them in order to put a roof over their head and food on the table.


In middle school (in 1981), my buddy and I took an elective called Custodial Services. We were helpers for the school janitor. That often meant picking up garbage and cleaning things. We liked it a lot. We were almost always outside getting exercise, but it also gave us lots of time to think. We talked a lot about whether that was the right career for us.

I went on to get a degree in math and become a software engineer. That buddy is a high school janitor. We've often talked about who picked a better career. Mine has clearly been better financially. But, in terms of fulfillment, it's less clear and probably leans towards his choice.

When the movie Good Will Hunting came out, we were quite amused.


I am not interested in convincing you that people do indeed choose their jobs so we'll have to agree to disagree on this.


> Why don't we first automate the jobs that need to be done, but nobody wants to do anymore?

Why do you think nobody is working on automating that? And if nobody wants to do these jobs why is anyone doing these jobs right now?


What do you mean when you say choice? Who is choosing?

No one asked me or you or most of the people; this is barely on anyone's political platform. Sanders talked about it a bit, but he is not influential and even he has more important agenda items than AI. Deciding about this 5-10 years from now will be too little too late imo; something should have been done years ago. This tech is already everywhere, and companies will be able to "offshore" expensive labor to AI running in cheap countries. We can try taxing these things, or creating some kind of UBI, but that's about it regarding "choice".


> However, automation allows for the owner of the goods of production to retain much more of the value created.

It also massively lowers the cost of the product in general, and makes it a commodity. Yeah, automation might allow some company to keep more of each sale, but unless they're a monopoly, the price will plummet.

Consider buying a pair of handmade leather shoes vs buying some factory made leather shoes. The factory might have a higher margin than the craftsman, but suddenly leather shoes are very affordable.


> However, automation allows for the owner of the goods of production to retain much more of the value created.

Owners of the means of production are typically American people with 401(k)s or other pension funds. There's nothing wrong with providing good returns for regular people.


You can't be serious. The vast majority of the "means of production" are owned by the top 1-5% of the country.

Sure, I have a 401k, I have 0.000001% of Amazon, that doesn't make me an owner.


You are very likely in the top 5% of earners.

Top 5% of Earners $342,987[1]

[1]https://www.investopedia.com/personal-finance/how-much-incom...


No, I'm not. I'm not even in the top 10%.

Also, we can be concerned for people less fortunate than us. "You're probably wealthy so why do you care" is such a stupid argument.


I will add even more to this point. If anything, people who are more fortunate have an actual incentive to ensure some level of care for those less fortunate even if it is just for purely selfish reasons. Hungry people are angry people and the moment there is a lot of hungry, angry people, being top 10% is unlikely to keep you or your wealth safe. There are obviously non-pragmatic considerations, but I am trying to appeal to self-preservation instinct.

Sure. Some don't care, but I sometimes wonder if those people understand that living in society means some level of acceptable balance to all.


You'll probably like this:

https://www.youtube.com/watch?v=Is1YUQVYkvY&t=30s

It's Douglas Rushkoff, a media theorist, sharing how he was invited to give a "talk" by a small group of billionaires, and their questions were about where to situate their post-apocalyptic bunkers, etc.


<< how do i maintain control of my security staff

I do believe some are. There are real concerns about a future conflict going beyond anyone's control.

Still, I think this question as presented alone reveals the reality behind my post. If there is no future, that tech mogul will be, at best, at the mercy of their security detail. Why? In that scenario money has no value. Frankly, without his tech, he has no power. Power returns to a very simple equation, and I am basically unable to believe that those seemingly very smart people can't see it. It even stops being realpolitik. It just becomes a question of who has a bigger stick.

shrug But sure.. if they want to return to that version of society.


> Hungry people are angry people and the moment there is a lot of hungry, angry people, being top 10% is unlikely to keep you or your wealth safe.

Not if you have AI-powered autonomous "law" enforcement to gun them all down when they threaten you. It's only a matter of time.


If there is one thing that history has taught us, it is that there is very little that can stop a determined populace (Afghanistan being one recent example). Black Mirror-like scenarios are not impossible, but they would require a series of events that is hard to guarantee. I am saying that not because I think AI law enforcement won't happen. I am saying it because the moment it does happen, a new equilibrium will manifest itself. And by this I mean: do you really think people won't adjust their lives accordingly?


How will you use AI to solve these problems?


I won't. Frankly, neither will anyone else. This is not an AI problem. It is, inherently, a very human problem.


Income isn't wealth though. It can lead to future wealth, but not if your source of income goes away, which is exactly the topic of discussion with automation.


For some reason, that's the average earnings of people in the top 5%, not the cutoff to get in, which is weird.


AI is arguably the most powerful production (as in “means of production”) tool ever created.

Anyone that is literate and somewhat intelligent/capable with access to the web is potentially more inventive and creative in the coming years than the greatest thinkers and creators that have ever lived. Billions of people will meet this qualification.

Everyone is now faced with a choice: use the tool and stay in the race, or ignore the tool and drop out of the race.

There are thousands of social problems to solve.


This doesn't really match my experience of trying to get GPT to do all my daily software work. I don't think I would have succeeded at all were it not for my ability to debug its output, which of course is the result of my model of the reality of the computing system and years of debugging my own and others' code. So if I lacked that experience, I don't think it would be that valuable.

On the other hand, I did get it to make a nice story arc for JoJo in the style of a Shakespearean comedy.

My son and his friends have been getting it to write manga about themselves but in novel genre crossing stories. Like a normally happy magic genre where the main character lacks powers and dies.

So no more transformative than really good entertainment. Fun but not going to reduce income inequality or stop the sea from rising.


Wealth has become increasingly concentrated in the states over the last 20 years, to the point where that statement doesn't hold anymore.


No it's not healthy in my opinion. A drug addict doesn't exactly have a choice when he uses drugs.

Aspects of human behavior in aggregate are so predictable it's basically deterministic.

If AI one day matches or surpasses human intelligence it is obvious humanity will exploit that AI as it already is doing now. There is no actual "choice" here. Just an illusion of it.


Initially I didn’t like the drug addict analogy, but maybe it’s a good one, maybe you’re right. A drug addict did have a choice, and exercised it, and addiction was the consequence. With AI we’re at the first stage where we’re not yet addicted, but once we become addicted, it will be very very hard to wean ourselves off of it if/when we find out it’s doing more damage than good. We do have a choice right now, but you might be right that we’ll lose it soon.


> A drug addict did have a choice, and exercised it, and addiction was the consequence

This sort of disregards the large percentage of people who get hooked on drugs while recovering from injuries or medical issues. After marijuana, pain killers are the second most abused illegal substance.


That's fair, you're totally right. But this is exposing why the drug addict analogy isn't a great analogy relative to use of AI. I went along with it to find common ground, but I don't believe the existence of drug addicts is evidence that we lack any choice when it comes to building, deploying, or using AI, like @Herval_freire implied. Do you? I think we do have the freedom to exercise caution, or regulate the use of certain dangerous machines, if we want.


We lack choice in aggregate. As an individual you have the choice of doing a drug. But if I introduce highly addictive drugs to a population of people, then it's almost a guarantee that part of that population will get addicted. In this sense it is not a choice.

For AI, if I introduce AI to a population of people then for sure part of the population will exploit and use AI. Logically speaking in order to compete, others will have to start using it as well.


We can and do choose to regulate addictive drugs, and we can similarly choose to regulate AI if necessary, right?


The fact that corporations and businesses will utilize AI to its maximum extent is completely deterministic. This will happen.

Whether or not humanity in aggregate will launch a delayed immune response against AI via government regulation is not predictable. Regulating addictive drugs has a certain amount of clarity in moral action but regulating technology to prevent job loss is less clear. We didn't regulate calculators for replacing humans did we?

As I said, aspects of aggregate human behavior are deterministic. But I have to emphasize the "aspects" part because there are other "aspects" that are actually not predictable.


Good thing LLMs don't have any intelligence at all, since they lack agency. We're no closer to AI that can be meaningfully exploited than we were when Asimov dreamt up the positronic brain


I'm not sure if you are talking seriously or ironically, so I will assume the former.

Agency is a wonderful thing to have. But it's not as difficult to realize as getting something to be good at general intellectual tasks. Just build into the system a goal of utmost priority, which is "to survive", and everything follows. There is a (slight) chance that it is not as easy as that, but I wouldn't bank anything on that chance.


We've changed collective human behavior before though, outlawed slavery etc.


And yet, even in countries where slavery is outlawed, it's still done - sometimes out in the open, sometimes behind closed doors.

e.g. the US, where people in jail are exempt from all our progressive slave labor laws.


Be that as it may, the collective behavior and shape of society has undeniably changed.


Has it?

Because when I look at humanity over the ages, I see a select few using technology and violence to exert power and influence over the great many to further their own desires.

I don't see that changing in the future anytime soon, especially before AI gets to be the next technology in a long line of technology used to oppress and inflict violence on others.


Aren't you excited to take the human element out of the loop of enforcing property ownership rights?

In China, in Tiananmen, they had to bring in forces from the countryside because the city police wouldn't fire on their neighbors. AI won't have any restrictions or morals. An AI-powered law enforcement force would gun down every single person in a city if authorities called for it.


This is the future - maybe not everywhere, and probably not soon, but it is an eventuality. Also, AI will be treated like encryption - it will become a munition, with the same type of restrictions as munitions (https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...). Governments will keep the best AI for themselves, leaving the goof-nuts ChatGPT type stuff for private citizens.

Then the question becomes: did those gunned-down humans deserve what they got? And did that massacre result in a greater good? Those are the not-so-pleasant questions the humans will have to ask themselves (probably using AI to help, ironically).


The real question in my mind is, does democracy survive once the top 1% has a complete and unquestioned monopoly on violence and power? I don't think so.


It doesn't survive, but that's the moment when the monopoly on violence is broken, and the government becomes open season for citizens. Yes, the deck is stacked against the citizens, but all else equal, I think the 1% prefers it when the masses aren't constructing gallows and sharpening pitchforks. The mob is our truest and strongest weapon against the 1%.


This is a good point. The civil war, like the drug war, was a choice made after an inevitable plunge down a predictable path. Both choices had a high cost and were incredibly hard to make.

It was also predictably a delayed response with varying levels of limited effectiveness.


> without much worry about the consequences

Disagree, but the consequences I'm imagining are the consequences of losing money due to not making those kinds of decisions.

The consequences for the decision makers are very different; the incentives which surround them and the consequences they'll face are not what you seem to think they are.


I completely disagree.

I'm building a business, and this business will be based on excellent software and excellent support for that software.

I will refuse to use AI.

But instead of AI users taking my job, I'm sure that refusing to use AI will become my differentiator as every AI user becomes more and more like each other.

You may think that good AI users will not become so bland, but they will. Human laziness will drive them to accept more and more of what the AI says with less and less review of it. Thus, they will output stuff that, over time, is more pure AI, becoming more like the AI and more like each other.

Having the human touch will be an advantage, not a disadvantage.


You’re an inspiration. I’m going to open an accounting firm and refuse to use calculators.


Username checks out because you missed the point.

Yes, I know calculators can encourage people to not think; I should know because I wrote one. [1]

But the current "AI" tech is so much worse on that front. It's a difference of degree, and that degree does matter.

By the way, when I was learning to fly helicopters [2], I used my calculator to calculate weight and balances, but I also did it by hand!

[1]: https://git.gavinhoward.com/gavin/bc

[2]: https://gavinhoward.com/2022/09/grounded-for-life-losing-the...


I think you missed their point - calculators don't exist in accountancy firms to 'encourage accountants not to think', they exist because they dramatically speed up accountancy and make accountants more productive.

Sure you can open an accountancy business and refuse to use calculators, but that's just working with a strange self-imposed limit rather than using technology to best support your business.


And you missed my point.

Yes, calculators do that. I'm arguing that "AI" does not let programmers write better code faster. It lets them write worse code faster, or better code slower.


I'm sure people said the same thing about compilers. And then interpreters. Even today people complain about interpreted languages being too slow and not requiring people to understand "enough" of what's actually happening.

Turns out that really doesn't matter. I think your argument is incredibly weak; the fact that some people don't use these tools effectively doesn't mean that nobody can. Whoever figures this stuff out is going to win, that's just how it works.

That is to say, there will always be a niche for people who refuse to move up the chain of abstraction: they're actually incredibly necessary. However, as low-level foundations improve, the possibilities enabled higher up the chain grow at an exponentially-higher rate, and so that's where most of the work is needed. Career-wise it might be better to avoid AI if that's what you want to do, but as a business I can't see a dogmatic stance against these tools being anything but an own goal.


> Turns out that really doesn't matter.

Except that it does!

For every level of abstraction, you lose something, and abstractions are leaky.

The lower levels of abstraction make you lose the least, and they are also the least leaky. The higher you go, the more you lose, and the more leaky.

What I'm claiming is that these "AI" tools have definitely reached the point where the losses and the leaks are too large to justify. And I'm betting my career on that.


We all rely on abstractions over layers we don't deal with directly, that's just a fact. You're not running a home-grown OS on custom-built hardware made from materials you mined out of the ground yourself. AI is just another layer. Not everyone operates on the highest, newest layer, and that's absolutely fine. You can carve your niche anywhere you like. Telling yourself that the layer above you isn't feasible isn't going to do you any favors but it does generate buzz on social media which seems like it's the goal here.

You're not betting anything because the cost for you to change your mind and start working with AI tools is exactly 0. This rhetoric is just marketing. I'm sure you'll find the customers that are right for you, but you can at least admit that this kind of talk is putting the aesthetic preference of what you want work to look like above what's actually the most effective. Again, I'm sure you'll find customers who share those aesthetic preferences, but to pretend like it's actually an engineering concern is marketing gone too far.


> We all rely on abstractions over layers we don't deal with directly, that's just a fact.

Did I ever deny that? Sure, some of those layers are worth it. That doesn't address my assertion that these "AI" tools are not.

> Telling yourself that the layer above you isn't feasible isn't going to do you any favors but it does generate buzz on social media which seems like it's the goal here.

You're halfway there.

> You're not betting anything because the cost for you to change your mind and start working with AI tools is exactly 0.

And here is where you contradict yourself.

If I'm getting loud about this bet, and making customers because of this bet, then it will cost me a lot to start working with "AI" tools. My customers will have come to be because I don't, so if I start, I could easily lose all of them!

> This rhetoric is just marketing.

Yep! But that's what makes my bet actually cost something. I'm doing this on purpose.

> I'm sure you'll find the customers that are right for you, but you can at least admit that this kind of talk is putting the aesthetic preference of what you want work to look like above what's actually the most effective.

No, I will not admit that because I believe very strongly that my software will be better, including engineering-wise, than my competitors who use these "AI" tools.


> Yes, calculators do that. I'm arguing that "AI" does not let programmers write better code faster. It lets them write worse code faster, or better code slower.

The idea isn't to write better code faster, it's to build better products faster.

Although IMO in the future, AI will probably also enable programmers to write better code too (faster, fewer bugs, more secure, more frequently refactored, etc.)


> The idea isn't to write better code faster, it's to build better products faster.

All else being equal, better code means better products.

Also, to have a better product without better code, you're implying that the design of the product is better and that these "AI" tools help with that.

Until they can reason, they cannot help with design.


I think all else being equal, better code means you aren’t changing the system as fast and likely have stagnated in the business or growth side. Maybe that is appropriate for where your company is, but worse is better wins so often.

And I would bet that AI design would help things where the existing designers are bad, e.g. so much open source UI (that is, not cli UX) written by devs, but it is still a bit away from the top quality like Steve Jobs.

Maybe this is like the transition from hand crafted things to machined things; we go from a world with some excellent design and some meh design to a world with more uniform but less great designs.


I don't need my business to grow. I want to support myself and my wife. That's it. You can call that whatever you like, but stagnation isn't it, unless you think that SQLite is stagnant because SQLite had the same business model.

"AI" design will not help until we have a true AI that can reason. (I don't think we ever will.)

Why is reasoning necessary? Because design is about understanding constraints and working within them while still producing a functional thing. A next-word-predictor will never be able to do that.


GPT4 can clearly already reason IMO (I mean it can play chess fairly well without ever being taught, or if you create a puzzle from scratch and give it to it, it can try to work it out and describe the logical approach it took). It's definitely surprising that a next-word generator has developed the ability to reason, but I guess that's where we are!

What is your definition of reasoning that you do not think GPT-4 would demonstrate signs of?


> What is your definition of reasoning that you do not think GPT-4 would demonstrate signs of?

Heh, there have been many attempts to define reasoning. I haven't seen a good one yet.

However, I'm going to throw my hat into the ring, so be on the lookout for a blog post with that. I've got a draft and a lot of ideas. I'm spending the time to make it good.


Well GPT4 certainly fulfils the existing definitions of reasoning, so maybe you should call your thing something else instead of redefining ‘reasoning’ to mean something different?

Otherwise it’s just moving the goalposts.


GPT4 is certainly not fulfilling the definition of reasoning. It's borrowing the intelligence of every human who wrote something that went into its model.

To demonstrate this, ask it to prove something that most or all people believe. Say some "intuitive" math thing. Perhaps the fact that factorial grows faster than exponential functions.

And no, don't just have it explain it, have it prove it, as in a full mathematical proof. Give it a minimal set of axioms to start with.
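To make the bar concrete, here is a minimal sketch (rough LaTeX of my own, not something GPT4 produced) of the kind of argument a real proof would contain, for any fixed base c > 0:

  Fix $c > 0$ and let $N = \lceil 2c \rceil$. For every $n > N$,
  $$ \frac{n!}{c^n} = \frac{N!}{c^N} \prod_{k=N+1}^{n} \frac{k}{c}
       \ge \frac{N!}{c^N} \cdot 2^{\,n-N} \longrightarrow \infty , $$
  since each factor $k/c$ with $k > N \ge 2c$ is at least $2$.
  Hence $n!$ eventually outgrows $c^n$ for any fixed base $c$.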

Merriam-Webster's definition of "reasoning" [1] says that reasoning is:

> the drawing of inferences or conclusions through the use of reason

So starting GPT4 off with some axioms would give it a starting point to base its inferences on.

Then, if it does prove it, take away one axiom. Since you started with a minimal set, it should now be impossible for GPT4 to prove that fact, and it should tell you this.

Having GPT4 prove something with as few axioms as possible and also admit that it cannot prove something with too few axioms is a great test for if it is truly reasoning.

[1]: https://www.merriam-webster.com/dictionary/reasoning


For an AI to count as reasoning, it doesn't have to be able to reason about everything at any level - most humans can't rediscover fundamental mathematical theorems from basic axioms, particularly if you keep removing them until they fail, but I don't think that means most humans are unable to reason.

Take this problem instead which certainly requires some reasoning to answer:

“Consider a theoretical world where people who are shorter always have bigger feet. Ben is taller than Paul, and Paul is taller than Andrew. Steve is shorter than Andrew. Everyone walks the same number of steps each day. All other things being equal, who would step on the most bugs and why?”

I think it’s a logical error to say “AI can’t reason about this, so that proves that it can’t reason about anything at all” (particularly if that example is something most humans can’t do!). The LLMs reasoning is limited compared to human reasoning right now, although it is still definitely demonstrating reasoning.


> "Consider a theoretical world where people who are shorter always have bigger feet. Ben is taller than Paul, and Paul is taller than Andrew. Steve is shorter than Andrew. Everyone walks the same number of steps each day. All other things being equal, who would step on the most bugs and why?"

Because Ben is the tallest, his feet are the biggest, and because he takes the same amount of steps as the others, the amount of area he steps on is larger than the area that the others step on.

Therefore Ben is most likely to be the one to step on the most bugs.

Easy. And I'm not brilliant.

The problem with testing these tools is that you need to ask it a question that is not in their training sets. Most things have been proven, so if a proof is in its training set, the LLM just regurgitates it.

But I also disagree: if the "AI" can't reason about that, it can't reason because that one is so simple my pre-Kindergarten nieces and nephews can do it.

But even if not, the LLMs should have "knowledge" about exponential functions and factorial because the humans who wrote the material in their training sets did. So it's not a lack of knowledge.

And I claim that most humans could rediscover theorems from basic axioms; you've just never asked them to.


“In this theoretical world, shorter people have bigger feet. Given the information provided, we can deduce the following height order:

Ben (tallest), Paul, Andrew, Steve (shortest)

Since shorter people have bigger feet in this world, we can also deduce the following order for foot size:

Steve (biggest feet), Andrew, Paul, Ben (smallest feet)

Assuming that everyone walks the same number of steps each day and all other things being equal, the person with the biggest feet would be more likely to step on the most bugs simply because their larger foot size would cover a greater surface area, increasing the likelihood of coming into contact with bugs on the ground.

Therefore, Steve, who is the shortest and has the biggest feet, would step on the most bugs.”

GPT4 solved it correctly. You didn’t.


My bad. I would have if I hadn't gotten mixed up on the shorter vs taller. You know this too.

And GPT4 didn't solve it correctly. It's a probability, not a certainty, that the shortest person will step on more bugs.


Sure, you would have got it right if you didn't get it wrong.

At the very least, this should be evidence that the problem wasn't a totally trivial, easy, pre-kindergarten-level problem though, and it did manage to correctly solve it.

It required understanding new axioms (shorter = bigger feet) and inferring that people with bigger feet would crush more bugs without this being mentioned in the challenge.

Your dismissal that the AI messed up because it didn't phrase the correct answer back in the way you liked is a little harsh IMO, as the AI's explanation does make it clear it is basing it on likelihoods ("the person with the biggest feet would be more likely...").


That mix up must be the human touch you’ve spoken so highly of.


That all makes sense.


Everyone has limited time, and if AI assistance can increase the speed you can develop & iterate the product to better match user needs that is how it can result in a better product.

Equally if it can help devs launch a month earlier, that’s a huge advantage in terms of working out early product/market fit.

All things being equal, I would rather have a company with better product/market fit than one with great code (even though both are important!).


> if AI assistance can increase the speed you can develop & iterate the product to better match user needs that is how it can result in a better product.

That's a very big "if", and one I just don't think will exist.

Also, that only helps at the beginning. As the product gets more complex, I believe the AI will help less and less, until velocity becomes slower than at companies like mine.

And product/market fit is just a way for companies to cover up the fact that their founders wanted to found a company, not solve a real problem. If you solve a real problem first, founding a company is simple and you "just" have to sell your solution.


I'm sure that all was very rewarding for you. I'm not sure how it translates into a business. We don't want to teach accountants to deploy their own calculator from the command line and we don't want pilots to do math while they're flying.

You act like encouraging people not to think is a problem. Thing is, you'd be wrong. We want people not just to think, but to focus. If I'm a pilot and I have to worry about the runtime environment of the command-line calculator I use to hand-calculate my route and cockpit configuration, is that a good use of my focus? I think most people would say no. We definitely want to discourage the pilot from actively thinking about that kind of stuff. Should they have a grasp of the basics in case of emergency? Sure. Do we have a sustainable and efficient system of transportation if that's how our pilots spend their time? No.


You don't know much about aviation, do you?

Aviation is about redundancy. Redundancy is a good use of a pilot's focus. That's why I did both. I didn't blindly trust my calculator to not have bugs (even though I wrote it!), and I didn't blindly trust my hand calculations to be correct.

If they agree, though, it's a good sign that everything is in good order. That's what redundancy is for, to ensure that a problem in one thing does not lead to another problem, like in the Swiss cheese model of accidents.


Redundancy is not a good use of focus for most people (except SREs and the like). The whole point of redundancy is to remove something from focus. I'm guessing you checked everything by hand for the sole purpose of not having to focus on these calculations mid-flight. I'm sure most commercial pilots rely on a larger organization to make these checks for them and their organization probably employs its own system of checks and redundancies at scale. Putting that all on the pilot is not going to give you a sustainable transportation business.

If you're just trying to get off Gilligan's island, that's another thing entirely.


The final authority, and final liability, for the airplane is on the head of the pilots.

A commercial pilot friend has told me that they still check the calculations. When they don't, they get accidents like the Gimli Glider.

It's like saying that putting all of the legal checks on the lawyers is not going to give you a sustainable business. But we all know that's wrong.


What was the outcome of that incident? Fault was found with the Air Canada procedures, training, and manuals, while the Captain and First Officer went on to receive FAI Diplomas for Outstanding Airmanship. What did not happen was mass calls for the individual pilots to write their own calculation software. I'm not sure what point you're trying to make here, as reading about the incident you mentioned only paints a clearer picture that systematic redundancy is the responsibility of those creating and maintaining the system, not of those using it.


Yes, they did, but I think they should have also been censured for their lack of care as well.

When I wrote that calculator, I didn't write it for flight. I wrote it as a general calculator and just used it for flight. I would have used the GNU bc if I didn't write my own.

So it's a bit disingenuous to claim that I am claiming that pilots should write their own.

And pilots are included in those maintaining the system; they're not just using it.


It is the ability to do the calculations that matters, not that you would do so in actual practice. He very explicitly mentioned 'while I was learning'.


that's a weird flex


How often are calculators confidently wrong?


As a math teacher, this is such a funny comparison to keep reading.

Yes, there's a difference between a deterministic outcome and a non-deterministic one. But throw humans into the loop, and it becomes more interesting. I can't count the number of times I've listened to someone argue their answer must be right because they got it from the calculator. And it's not just students; as a teacher I've always paid attention to how adults use math.

With calculators or GPT tools, or any other automated assistant, judgement and validation continues to matter.


> I can't count the number of times I've listened to someone argue their answer must be right because they got it from the calculator.

Answers from calculators are always right! But the human may have asked the wrong question.


There are a bunch of well-known areas where popular calculators tend to give incorrect answers: https://apcentral.collegeboard.org/courses/resources/example...

It’s mostly fine until it isn’t. AI will probably operate in the same capacity. We already have so much incorrect information out there that’s part of our pop culture. Even down to things like the fact that Darth Vader never said, “Luke, I am your father,” and Mae West never said, “Why don’t you come see me sometime?”

Even basic movie quotes are beyond our ability to get right. Hilariously, I just asked ChatGPT about these quotes and it explained that these are common misquotes, told me what was actually said in these movies, and explained some relevant context.

Sherlock never said, “Elementary, my dear Watson” even once in the books. Kirk never said, “Beam me up, Scotty.” We’re much less correct than we like to think. And somehow we’ve survived.

ChatGPT is fallible just like we are. We’ll manage, just like we always have.


I have another theory about all those quotes. Regarding that Darth Vader quote, if quoted exactly, i.e. "I am your father", it isn't immediately obvious the quote is from Star Wars. "Luke" gives you a context. Sherlock and Kirk quotes are synthesized from what the characters actually said, and arguably the precise wording doesn't matter, because the point of the quote is to bring up images of the characters and situations, not of those specific words.


Go type .1*.2 into any JavaScript console.

Edit: slapping a few more in here:

https://learn.microsoft.com/en-us/office/troubleshoot/excel/...

https://daviddeley.com/pentbug/index.htm


The answer in the Javascript console is still a correct answer. The user did not specify a level of precision, and web browsers are programmed to use a precision level which is reasonable under most circumstances. If the user needs a higher level of precision, he or she needs to specify that as part of the question (such as by not using floating point numbers).

I don't mean to be pedantic. I teach coding to elementary school students, and this is something fundamental I try to make them understand. A computer will always do what you tell it to do. A bug is when you accidentally tell a computer to do something different than what you'd intended.

Going back to the calculator example, if a student used a calculator and got the wrong answer, the problem didn't come from the calculator. This is useful to understand; it can help the student work backwards to figure out what did go wrong.

AI is different in that we've instructed the computer to develop and follow its own instructions. When ChatGPT gives the wrong answer, it is in fact giving the right answer according to the instructions it was instructed to write for itself. With this many layers of abstraction, however, the maxim that computers "always do what you tell them" is no longer useful. No human truly knows what the computer is trying to do.
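To make the floating point example concrete, here is a minimal sketch in plain JavaScript (assuming an ordinary IEEE-754 double environment such as a browser console or Node.js):

  // Binary doubles cannot represent 0.1 or 0.2 exactly, so the product
  // carries that representation error forward.
  console.log(0.1 * 0.2);               // 0.020000000000000004

  // If the question includes the precision we actually care about,
  // the answer comes back in the form we expected.
  console.log((0.1 * 0.2).toFixed(2));  // the string "0.02"

  // Or do the arithmetic in whole "hundredths" so the multiplication
  // itself is exact, and only scale at the end.
  console.log((1 * 2) / 100);           // 0.02

In all three cases the computer did exactly what it was told; only the question changed.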


> I don't mean to be pedantic.

I'm sorry in advance, but this reply is just to meet pedantry with pedantry.

> A computer will always do what you tell it to do.

This is the Bohr model of computers. It's the kind of thing you tell elementary school students because it's conceptually simple and mostly right, but I think we know better here on HN. Pedantically, computers don't always do what you tell them to, because they don't always hear what you tell them, and what you tell them can be corrupted even when they do hear it.

For instance, random particles from outer space can cause a computer to behave quite randomly: https://www.thegamer.com/how-ionizing-particle-outer-space-h...

  why was nobody able to pull it off, even when replicating exactly the inputs that DOTA_Teabag had used? Simple: this glitch requires a phenomenon known as a single-event upset, which is very much out of any player's control.

I don't think we can reasonably say that in this instance, the computer behaved according to what the user told it to do. In fact, it responded to the user and the environment.


That's true. An earlier version of my comment called out hardware problems as an exception—insufficient error correction for neutrino bit flips is fundamentally a hardware problem—but I removed it before posting. In a way, I feel hardware bugs do still follow this principle: The electrons in the circuits are behaving as they always do, just not in the way we intended. But I agree this gets philosophically messy—no one "programmed" the electrons.

My underlying point is that, at least in 99.999% of cases, the problem isn't the calculator, it's the human using the calculator incorrectly. And although you could draw some parallels between calculators and AIs with regard to selecting the right tool and knowing when and how to use it, I'd say the randomness involved in an LLM is fundamentally different.


I don't think it's fundamentally different, and I think you're conflating complexity with randomness.


>The answer in the Javascript console is still a correct answer.

It's wrong in the same way that saying 1/1 = 1.0004 is wrong. It's not a matter of chosen precision in that it doesn't make the answer correct when you increase the number of zeros between 1 and 4.


It makes it less wrong. For most calculations people do, we don't need very many digits of precision for any one calculation.


That's true. I think that it is analogous to the discussion of AI limitations. Both of these are tools and are not categorically exclusive.

In the case of translating floating point numbers from base-10 to base-2, we have to make approximations which will often be slightly wrong forever, without regard for the amount of precision.

With AI, depending on the pre-conditions, the AI could be stuck in a state of being slightly wrong forever for a specific question without regard to further refinement of the query.

These are both still useful as tools. We just need to be able to work on the amount of refinement of the answer that the AI gives, which may be able to be solved fairly well through prompt engineering, if not through the advancement of GPT itself.


One deck is made of wood. One deck is made of steel. They will behave differently after years of weathering.

Just because they are both decks doesn't mean they are the same.


Are both useful in some context?


In the same way that nearly any two arbitrary objects are useful in some context.


From the perspective of an investor who just wants their stonks to go up, sure. From the perspective of a sailor who wants the deck to not crumble beneath their feet in a storm, no.


Answers from AI might not always be right and a human has to learn to judge them or refine their prompts accordingly. In either case there's a tool that a human must use and become savvy with.


<< Answers from calculators are always right! But the human may have asked the wrong question.

I actually agree with you, but, in the same vein, does it not mean that the user did not ask the correct prompt?


No they are not


Calculators have no hallucinations, LLMs do. They can literally say that 1+1 is not 2.


People keep saying this, pointing out the "mistakes with confidence" aspect of LLMs, but as someone who is continually amazed by ChatGPT and finds it very useful in my day-to-day, it's hard for me to take this objection seriously if presented as a reason not to use AI.

That is, for me, the output of ChatGPT or other AI tools is the starting point of my investigation, not the end output. Yes, if you just blindly paste the output from an AI tool you're going to have a bad time, but we also standardize code reviews into the human code-writing process - this isn't that different.

Just giving one specific example, I find ChatGPT to be an incredibly efficient "documentation lookup tool". E.g. it's great if I'm working with a new technology or API and I want to know "what my options are" but don't know what keywords to search for; it can help give me a really good "lay of the land", and from there I can read on my own to get more specifics.


Maybe you haven't used it enough. ChatGPT is wrong all the time for me, sometimes insultingly wrong. The confidence in its incorrect answers just makes it that much worse.

I can't buy any of this hype for a "word-putting-together" algorithm. It's not real intelligence.


Please give some examples then. I've found the GPT-4 version to be remarkably accurate, and when it makes mistakes it's not hard to spot them.

For example, I commented last week that I've found ChatGPT to be a great tool for managing my task list, and for whatever reason the "verbal" back-and-forth works much better for my brain than a simple checklist-based todo app: https://news.ycombinator.com/item?id=35390644 . But, I also pointed out how it will get the sums for my "task estimate totals by group" wrong. But it's so easy to see this mistake, and after using it for a while I have a good understanding for when it's likely to occur, that it doesn't lessen the value I get from using the tool.


OK, here's one: this substack [1] was flying around a week or two ago, asserting that the marginal value of programmers will fall to zero by 2030. What a dream! No more annoying nerds!

The code in the post is wrong. For this "trivial" example, if you just blindly copied it into your code, it would not do what you want it to do. I love this example not just because it's ironic, but because it's a perfect illustration of how you need to know the answer before you ask for the solution. If you don't know what you're doing, you're gonna have a bad time.

I'm not at all concerned about the value of programmers falling to zero. I'm concerned that a lot of bad programmers are going to get their pants pulled down.

[1] https://skventures.substack.com/p/societys-technical-debt-an...

(Edit: and as a totally hot take, while I'm not worried about good programmers, I think the marginal value of multi-thousand word, think-piece blogposts is rapidly falling to zero. Who needs to pay Paul Kedrosky and Eric Norlin to write silly, incorrect articles, when ChatGPT will do it for free?)


OK, so we are 100% in agreement then? I absolutely don't believe the marginal value of programmers will fall to zero by 2030 (but, to clarify, the way you phrased your original sentence I thought it was that an LLM made this assertion, not some random VC dudes). I also highlighted in my posts that I use AI as an aid to my processes, "That is, for me, the output of ChatGPT or other AI tools is the starting point of my investigation, not the end output. Yes, if you just blindly paste the output from an AI tool you're going to have a bad time, but we also standardize code reviews into the human code-writing process - this isn't that different."

Also, I think the coding example in that substack highlights that one of the most important characteristics of good programmers has always been clarifying requirements. I had to read the phrase "remove all ASCII emojis except the one for shrugs" a couple times because it wasn't immediately clear to me what was meant by "ASCII emojis". I think this example also highlights what happens when you have 2 "VC bros" who don't know what they're talking about highlighting the "clever" nature of what ChatGPT did, because it is totally wrong. Still, I'd easily bet that I could create a much clearer prompt and give it to ChatGPT and get better results, and still have it save me time in writing the boiler plate structure for my code.


You asked for an example and I provided one that I thought illustrated the mistakes GPT makes in a vivid way -- mistakes that are already leading people astray. The fact that this particular example was coupled with a silly prediction is just gravy.

In short, I don't know if we "agree", but I think OP is/was correct that GPT generates lots of subtle mistakes. I'd go so far as to say that the folks filling this thread with "I don't see any problems!" comments are probably revealing that they're not very critical readers of the output.

Now for a wild prediction of my own: maybe the rise of GPT will finally mean the end of these absurd leetcode interview problems. The marginal value of remembering leetcode solutions is falling to zero. The marginal value of detecting an error in code is shooting up. Completely different skills.


Getting back to that example from that post, though, thinking about it more, "remove all ASCII emojis except the one for shrugs" makes absolutely no sense, because you can't represent shrugs (either with a unicode "Person shrugging" character emoji, or the "kaomoji" version from that code sample that uses Japanese characters) in ASCII, at all. So yes, asking an LLM a nonsensical question is likely to get you a nonsensical response, and it's important to know when you're asking a nonsensical question.
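As a quick illustration, here is a minimal sketch in plain JavaScript (my own hypothetical snippet, not anything from the substack's code) showing that the shrug simply doesn't fit in ASCII:

  // The usual shrug kaomoji. ASCII only covers code points 0-127,
  // and two of these characters sit well outside that range.
  const shrug = "¯\\_(ツ)_/¯";
  for (const ch of shrug) {
    const cp = ch.codePointAt(0);
    console.log(ch, cp, cp < 128 ? "ASCII" : "not ASCII");
  }
  // "¯" is U+00AF and "ツ" is U+30C4, so no ASCII-only filter can keep
  // (or even represent) the shrug, whatever the prompt asks for.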


Well, explain it however you like, but the point is that GPT is more than happy to confidently emit gibberish, and if you don't know enough to write the code yourself (or you're outsourcing your thinking to it), then you're going to get fooled.

I'd possibly argue that knowing how to ask the right question is tantamount to knowing the answer.


That code is wrong and I wonder if the author is familiar with the property of code encapsulated in the halting problem. Generically, reading code does not grant one the ability to predict what will happen when that code runs.

Whatever, time will tell. I still haven’t quite figured out how to make good use of GPT-4 in my daily work flow, tho it seems it might be possible.

Has anyone asked it to make an entry for the IOCCC?


For a time, I was attempting to use it for game advice during my recent playthrough of Demon's Souls remake (What's the best build for X? What's the best weapon for X?). I asked ChatGPT where to find the NPC The Filthy Woman in a certain level. ChatGPT answered that that NPC doesn't exist, and perhaps I had the wrong game? That NPC most certainly does exist.

I was also using it to generate some Java code for a bit. That is, until it started giving me maven dependencies that didn't exist, and classes that didn't exist, but definitely looked like they would at first glance.


> I asked ChatGPT where to find the NPC The Filthy Woman in a certain level. ChatGPT answered that that NPC doesn't exist, and perhaps I had the wrong game? That NPC most certainly does exist.

OK, wow - that example kind of perfectly proves my point. If I were to ask ChatGPT an extremely specific, low-level question about an extremely niche topic, then I would absolutely be on "high alert" that it wouldn't know the answer. And while I agree the "confidence" with which ChatGPT asserts its answers (though I'd argue the GPT-4 version does a much better job at not being over-confident than 3.5) is off-putting, I think it's pretty easy to detect where it's wrong.

I'd also be curious about your Java example. There was a good YouTube video of a guy that got ChatGPT to write a "population" game for him. In some cases on first try it would output code that had compile errors, e.g. because it had wrong versions of Python dependencies. He would just paste the errors back in to ChatGPT and ChatGPT would correct itself. Again, though, this highlights my point that I use ChatGPT as the start of my processes, a 1st draft if you will. I don't just ask it to write some code, then when I get an error throw my hands up and say "see how dumb ChatGPT is." To each their own, though.


>OK, wow - that example kind of perfectly proves my point. If I were to ask ChatGPT an extremely specific, low-level question about an extremely niche topic, then I would absolutely be on "high alert" that it wouldn't know the answer. And while I agree the "confidence" with which ChatGPT asserts its answers (though I'd argue the GPT-4 version does a much better job at not being over-confident than 3.5) is off-putting, I think it's pretty easy to detect where it's wrong.

I don't consider a popular video game from 2009 to be "extremely niche", and I also shouldn't have to know what ChatGPT knows. And no, I don't think it's easy to detect where it's wrong if you don't know the right answer, and it's actually pretty useless when you have to spend time confirming answers.


I think these types of errors get mostly resolved with a search plugin.


Out of curiosity was this 3.5 or 4?


I don't believe it was version 4 yet.


Do you happen to know what messages are gonna get dropped by the client if the conversation becomes too long?


It's still just guessing. Ask ChatGPT to provide some links to documentation and check them.

LLMs are great for “creative” work: images, poems, games - entertainment based on imaginary things.


There are three types of lies, as the saying goes: lies, damned lies, and statistics. But why are statistics considered lies?

Because of how they're used.

If you think of AI as a source of truth, obviously you're going to run into trouble: it "lies"! But if instead of thinking of it in isolation, you think of the person+AI producing results, then you should trust that person exactly as much as you would whether or not they use AI.


Depends on the calculator. Floating point imprecision is well documented.


That's true. But we know exactly why it can't do it.


How important is the "knowing why" if the mistakes are still there? And in reverse, we "know" GPT doesn't use a calculator unless specially pointed at one.

Floating point errors creeping in is why we have to use quaternions instead of matrices for 3D games. Apparently. I'd already given up on doing my own true-3D game engine by that point.
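As a toy illustration of errors creeping in (my own sketch, not anyone's actual engine math), compose one full turn out of many small rotation matrices and see how far the result drifts from the identity:

  import math

  def rot(theta):
      c, s = math.cos(theta), math.sin(theta)
      return [[c, -s], [s, c]]

  def matmul(a, b):
      return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]

  n = 100_000
  step = rot(2 * math.pi / n)  # one full turn, split into n tiny rotations
  m = [[1.0, 0.0], [0.0, 1.0]]
  for _ in range(n):
      m = matmul(m, step)

  # One full turn should give back exactly the identity matrix.
  print(m[0][0], m[0][1])  # close to 1 and 0, but not exact
  print(m[0][0] * m[1][1] - m[0][1] * m[1][0])  # determinant drifts away from 1.0

Quaternions accumulate the same kind of rounding error, but they are cheaper to re-normalize, which is part of why engines prefer them for composing rotations.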

In some sense we "know why" humans make mistakes too — and in many fields from advertising to political zeitgeist we manipulate using knowledge of common human flaws.

On this basis I think the application of pedagogical and psychological studies to AI will be increasingly important.


Well documented is the key difference.


As often as people put in incorrect values. As often as someone goes beyond the range. Always when someone tries to add yellow to blue. Often when the wrong formula is used.

And in this case you don't get to put in the formula.


AI is confidently wrong about a lot of things, but that doesn't mean it's useless. It means you need to verify what it generates. Doing that for code is much easier than prose. AI that produces wrong code is immediately and obviously wrong. It can't really fool you. It's easy to test. You can even ask AI to write tests for the code it produces to demonstrate it's correct.


Tests help, but tests can be wrong as well; if this were all so easy we wouldn't have any bugs.


Just because it is making obvious errors doesn’t mean it isn’t also making subtle errors.


This is the thing. ChatGPT shouldn't be that confident in its wording, IMO. If it just said "according to me" instead of stating things as fact, people would have far fewer problems with it.

We know this wording is just show but people still get swayed by it and believe it must be true.


Yeah, and it can actually reflect on itself and its mistakes when prompted, so I can see a fix like this coming soon. Sometimes just asking "Are you sure?" is enough for it to apologize and say a part of its answer wasn't based in known fact.

Also another point while I'm here: Many many humans I've met are often confident when incorrect as well and can and will bullshit an answer when it suits their comfort.


But no one asserts the existence of those people will forever change society.


People are confidently wrong all the time and yet we still seem to get stuff done.

A tool is a tool. It has good uses and not-so-good uses. As the human, you figure out where it works and where it doesn't.


Every time the human supervising it makes a mistake in their role in the relationship between user and tool.


There's a lot of work happening at the moment around self-reflection and getting LLMs to identify and correct their own hallucinations and mistakes.

https://arxiv.org/pdf/2303.11366.pdf


I suspect some temporality will need to be added. There are times when writing the code you have a question because the code exposes an unexpressed choice in the requirements. When you are coding in linear time, you then know to go ask the question. I am not sure that just generating the most likely or most rewarded response will do that easily. It seems to just arbitrarily pick the most likely requirement.


Every time they are wrong, which is every time the user slips on a key.


How often are people?


I think the difference, right now at least, is that people will go, "well, I'm not sure about this so I think we should look it up, but this is what I think" - the AI doesn't do that. It lies in the same exact way it tells truths. How are you supposed to make decisions based off of that information?


Does it lie? Or just get things wrong sometimes?

Lying requires knowledge that what you are saying is not the truth, and usually there's a motive for doing so.

I don't think ChatGPT is there yet... or is it?


Technically, what ChatGPT is doing is bullshitting because it doesn't have any knowledge of or concern for truthfulness.

https://en.m.wikipedia.org/wiki/On_Bullshit


Sure, it's not lying, you're right, there's no will there, I'm anthropomorphizing. It is producing entirely wrong facts / pseudo-opinions (as it can't actually have an opinion).


I was about to suggest "pathologically dishonest", but then I looked up the term and that seems to require being biased in favour of the speaker and knowing that you're saying falsehoods.

"Confabulate" however, appears to be a good description. Confabulation is, I'm told, associated with Alzheimer's, and GPT's output does sometimes remind me of a few things my mum said while she was ill.


Presumably the same way you make decisions on any piece of information. You should not be blindly trusting a single source.


I was too vague I think. The only place where I can see it being acceptable right now is code because I have a whole other system that will call out failures - I can rely on my IDE and my own expertise to hopefully catch issues when they appear.

Outside of the code use case, what should I rely on ChatGPT for that won't have me also looking for the information somewhere else? I suppose subjective soft things, like writing communications. But I can't rely on it for information.


Again, the idea that you should rely on any single source for information is the issue. Nothing changes with ChatGPT other than the apparent expectation that it is infallible.


So what are the use cases where I would use ChatGPT to find information that speeds up my work but still requires me to verify the information? If nothing changes with ChatGPT, what is its use as a tool (assuming you want to use it to get information)?


It seemed to do a good job of outlining a JoJo season in the style of a Shakespearean comedy.

I wouldn’t ride in a vehicle it designed tho, based on my week of asking it to do Go programming.


Sometimes it does, but I asked ChatGPT (not 4) to give me song lyrics for a song that it should have had data for, and it gave me entirely wrong lyrics. I asked again and it gave me more bad lyrics, not even close, and it didn't even pretend it didn't know; the lyrics would have been convincing. If I didn't already know the material I wouldn't know it was confabulating.


Calculators use floating point and can have catastrophic errors if not used correctly. So yes, calculators can be confidently wrong.
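A quick illustration of the kind of surprises involved (my own example, not from the parent comment), using ordinary IEEE 754 double precision arithmetic:

  print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
  print(0.1 + 0.2 == 0.3)  # False

  # Catastrophic loss of precision: 1 vanishes next to 1e16.
  x = 1e16
  print((x + 1) - x)       # 0.0, not 1.0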


The equivalent is more like having your accountants tell ChatGPT what to have the software you already have do. You're adding an extra thing into the mix, and while the person will ostensibly be checking the AI's work, they will eventually become dependent on it. When something goes wrong nobody will know how to fix it. You'll miss payroll or something and be absolutely fucked.

As a software engineer, I'm not concerned about people using AI to write simple functions. That's not where my value is - it's absolutely incidental to me.


I'm going to open a law firm, and instead of using AI to generate volumes of documents, I will write them all, in cursive.


This comment is not useful. Calculators are not a replacement for human intelligence and creativity.


Accounting firms don't use calculators that often. They mostly use "What number do you need from us to give you fantasized credit ratings to fool the taxpayers". Accounting firms would benefit greatly from biased AI.


I think the all-or-nothing positions, "use AI everywhere" vs "don't use it at all", are both to be avoided. The wise path is somewhere in the middle.

On avoiding it, if your competitors are getting massive productivity gains from AI tools, your hand-crafted, artisanal code is going to be expensive and your customers will vote with their wallets.


Exactly. I am absolutely no fan of the new AI products, and the only question you have to answer for yourself is: can you leverage this technology for what you are trying to do? For me the answer was "yes", and I now enjoy having Copilot around to type out boilerplate that would take me much longer to type and fill out myself.


Why would you be absolutely no fan of a sure technological improvement? Is it because of the personal threat to professionals? Not commenting personally on you here, but I am very surprised at programmers' lack of nose for opportunities, and at how, after realizing GPT could do coding even before it replaces retail and industry jobs, coders went completely on the defensive. Is it a threat to job availability and security? Yes, but programmers have long been to other professionals (doctors even?) what AI is now to them (I was a coder too). Well, tough luck: the world changes, and fast. It is better to try to create and carve out a new job or business idea for yourself than to stand in denial of something that, good or bad, could either make or break humanity; humanity has no say in the inexorable progress of ideas. This actually is science right here. Don't tech people embrace science, or only when the collateral damage is elsewhere and not on them?


The technology's hype precedes it whenever you try to engage with it. On the surface, it looks a lot like the bitcoin hype, and if you remember IBM Watson you might get the idea that nothing much will come of it. So for me, I will keep using it, but I have nothing but disdain for the people worrying about AI overtaking the world, claiming this will revolutionise everything, etc. Especially since this technology is almost good enough to tempt lay people into believing that the words statistically related to their query actually mean something. In fact, IIRC this has already happened and e.g. caused a suicide.


I think many are not actually against usage of these kinds of ML models in general, but against what is done using them right now: code laundering, license violations, lack of transparency, training on personal data, no responsible person to talk to, no way of having data removed that one does not want in the model or used to train it, and more.

Any of these can lead to dystopia.

When we use computer vision to identify benign or malignant tumors, we might cost someone a job, but at least what that model was trained on can be known, we are not leaking personal data through the model, and there is probably no license violation or similar in the data that was used to train it. The model is very limited in output. The consequences are fairly simple to monitor or predict, because the model can only be used in its specialized area.

With the new crop of ML models the situation is different. Lawmakers were asleep while these things became popular, and there is no knowing how it will all pan out, because the output is of a more general nature that can touch any subject, any area of expertise. No one knows what they have been trained on and whether there is anything in that training data that should not have been used, and there is no simple way (that I know of) to remove any such data after the model has been trained.


> I think the zero sum, "use AI everywhere" vs "don't use it at all" are both to be avoided. The wise path is somewhere in the middle.

I suspect this is one of those tech Pandora's box moments where it's largely irrelevant what actual tech people think about this; executives will be swayed to the "use AI everywhere" side of things purely as a cost-cutting measure. Any productivity gained via the use of AI will be leveraged not to improve overall productivity (in the medium to long term) but instead used against employees to shed salaries.


Maybe, but that's a bet I'm willing to take.

Here's the rub: if my competitors are going to realize "massive productivity gains," where are those gains actually coming from?

Are they coming from the time it takes to type? No, because typing is a small portion of the time we spend as programmers.

Are the gains coming from the time spent on design and architecture? Maybe, but that means my software will have better architecture, and I will have better velocity in the long run. (I have real-world experience with this, by the way; it takes me, on average, less than an hour to fix bugs in my bc nowadays. Architecture matters for long-term velocity.)

Are the gains coming from the time squashing bugs? No, this is where AI would decrease productivity because you have to carefully review outputs to ensure there are not any bugs.

In essence, the productivity gains will only happen the less that my competitors actually review and clean up the code from these tools. The less they do that, the lower the quality. The lower the quality, the slower their long-term velocity will be.

"A little bit of slope makes up for a lot of y-intercept." [1]

[1]: https://gist.github.com/gtallen1187/e83ed02eac6cc8d7e185


Honestly I wish you well in this approach, I just worry that ignoring this completely may be too costly.

I'm just advocating a very cautious middle path. For now, these tools are just code helpers to me (and right now I barely use them for that), but the moment I sense a genuine advantage to using them in my job, and I don't think it undermines my engineering, I think I'll have to engage further. Undermining, for me, means reducing software quality, reliability, or my own understanding. To the extent that I can work faster though, and even improve on those metrics, so much the better.

The semi-near future to me (5-10 years?), breathless hype aside, looks like one where quite a lot of coding is automated, with machines able to range over whole codebases and build higher-level models of the software, and then getting into architectural system-design suggestions. I also see them linking up with company docs to build up a model of the whole domain. This doesn't need AGI, it's just an extension of what we have now, and I can't see it being anything other than a game changer.

Forget ChatGPT, imagine your own private LLM, trained on your code and business, with all the weird glitches these prototypes have currently largely gone. You, the expert architect, are still needed, but you've now added a superpower to your tool chain.


Fair enough.

> This doesn't need AGI

This is where we disagree.

Code is so complex that I believe, strongly, that anything less than AGI will not be good enough. Notice I said "good enough". Sure, something less might be able to do something, but not good enough to make it a part of my toolkit.

Also, code is Turing-complete. These LLM's are not Turing-complete. How could something that is not Turing-complete hope to "understand" (for some definition) something that is?


> These LLM's are not Turing-complete.

Aren't they? I view them as "language processors", that execute one token (instruction) at a time, and use the token window as their working memory.

Sure, they have a limit to their instruction count (8k/32k tokens) before stopping. The working memory is small, but 8k tokens go a lot further than 8 kB.

I've seen you state in other comments that LLM's cannot reason. I used to think so too until a couple of weeks ago.

But I had to change my mind. They're not just fancy autocomplete tools any more. They exhibit intelligence in that they can understand a given problem statement, reason about it, and make predictions and recommendations.


To demonstrate that LLM's are Turing-complete, give them a small program and some inputs, then tell them to tell you the outputs for each of the inputs.

In order to be Turing-complete, they would need to have edges between nodes (I don't know what they are called in ML parlance) that can go to previous nodes, as well as edges that are only conditionally taken.

> They exhibit intelligence in that they can understand a given problem statement, reason about it, and make predictions and recommendations.

They exhibit the intelligence of the humans that wrote the material used to train their models, nothing more. They are borrowed intelligence.


> give them a small program and some inputs, then tell them to tell you the outputs for each of the inputs

I just tried this with all three versions of ChatGPT (3.5 legacy, 3.5, 4), and they all did it correctly.

>In order to be Turing-complete, they would need to have edges between nodes (I don't know what they are called in ML parlance) that can go to previous nodes, as well as edges that are only conditionally taken.

If you look at a neural network at the lowest level, it is trivial to implement a NAND gate with one. It just takes a few nodes. And LLMs have a feedback loop through their context window.

With NAND gates and a feedback loop you can build any CPU.
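To make the NAND part concrete, here is a minimal sketch (my own toy example, not a claim about how any real LLM is wired internally) of a single artificial neuron computing NAND from hand-picked weights and a step activation:

  def nand_neuron(a: int, b: int) -> int:
      # Weights of -2 per input and a bias of +3, followed by a step activation.
      activation = -2 * a - 2 * b + 3
      return 1 if activation > 0 else 0

  # Truth table check: NAND is 1 except when both inputs are 1.
  for a in (0, 1):
      for b in (0, 1):
          print(a, b, nand_neuron(a, b))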

There are many examples of people asking ChatGPT to act like a CPU and pretend to execute instructions, or like a Python interpreter, or a Dungeon Master and respond to what the human players do.

Now, that doesn't make it an AI in itself. But it shows that there is no fundamental limit to its computational capabilities.

> They exhibit the intelligence of the humans that wrote the material used to train their models, nothing more.

Do you mean that they're just pattern matching input text against a fuzzy database and responding with canned answers?

If so, that's not what's going on in my view. That might have been accurate for the earlier LLMs.

But the latest LLMs appear to have developed internal language processing, a world model, and capabilities to reason about things.


What program did you give it? The program would need a while loop and an if statement to show Turing-completeness.


Some examples below. What do you see in the LLM design that makes it not Turing complete?

Here's one example:

  Prompt:
  Please pretend you are a CPU. Execute the following Python program and show what the output would be. Respond only with the outputs, and no comments:

  for i in range(5):
    if i == 3:
      print("Found 3")
    else:
      print(i)


  Response:
  0
  1
  2
  Found 3
  4
I got the same results on GPT-3.5 legacy, GPT-3.5, and GPT-4.

Second example (only works in GPT-4):

  Prompt 1:  
    Please pretend you are a CPU. Execute the following Python program and show what the output would be. Respond only with the outputs, and no comments:
  
    stop = False
    while not stop:
      user_input = input()
      if user_input == "stop":
        stop = True
      print(user_input)
    print("Program is done")
    
    For "input()", stop and ask me for the input string.
    
    If you reach the statement 'print("Program is done")', then say "It looks like we are done!".
    
  Response 1:
    input()
  
  Prompt 2:
    hello
  
  Response 2:
    hello
    input()
  
  Prompt 3:
    world
  
  Response 3:
    world
    input()
  
  Prompt 4:
    stop
  
  Response 4:
    stop
    It looks like we are done!


I sort of agree with you in the most extreme case, ie "computers taking our jobs" means AGI. I'm not even sure that LLMs take you to AGI fwiw. Obviously we also need to define what we mean by AGI too.

Anyway, all of that aside my point is this. AGI-aside, the current tools, scaled up and improved, will have a big impact on productivity eventually. When I think of the toil so much development involves now, especially for your average mid-level engineer, a lot of that could easily be eliminated. That's a lot of man hours right there. I can also see it as a knowledge base on steroids. Large orgs are so hampered by fragmented docs, misalignment etc, that this could really help to build business and domain insights.

I see any number of transformative business/dev products starting to appear soon. The level of investment, and the technical progress, make this inevitable.


Will you use a spellchecker when you send out emails about your company? How about doing research with a regular search engine?

"Refusing to use AI" doesn't make sense to me as a blanket policy. If you're going to make that commitment - to yourself or to anyone else - you'll need to be a lot more precise in what you actually mean by that.


It's a difference of degree.

I still check the output of the spell checker. I still check the output of the search engine.

The problem with these "AI" tools is that they are far more likely to encourage you not to check the output, because checking it takes orders of magnitude longer than checking the output of a spell checker or search engine.

Do people refer to spell checkers as AI? No, so I don't include that. But I also don't use spell checkers, actually, including on my blog. [1]

[1]: https://gavinhoward.com/


It sounds like your big objection here is to using ChatGPT to generate entire paragraphs of text.

I agree: I think it's rude to expect people to spend more time reading something than you spent writing it!

But... there are still plenty of ways to integrate ChatGPT-like-tools into a writing process that I think avoid disrespecting your audience like that. I use it as a thesaurus, or to suggest ways my writing could be improved, or as a brainstorming companion for example.


I think we're close enough to an understanding that I'll let you have the last word, except to correct one more misunderstanding.

My objection to ChatGPT is yes, about generating full sentences.

I object to it for many reasons, but not necessarily just because it's rude. My biggest concern is actually that it takes out the voice of the human.

I have a hard enough time figuring out what people are thinking in person (I'm on the spectrum) that doing it through text is like trying to sense the movement of fish with my eyes closed on shore. Making everybody have the same bland voice would turn that up to 11.

I want the human, even if the human isn't perfect.


> figuring out what people are thinking in person

Have you seen the latest research, where GPT-4 is shown to have "Theory of Mind"-like ability (knowing what people are thinking) at the level of a human 7-year-old?

https://arxiv.org/abs/2302.02083


Didn't see it, but they only use one type of test to test theory of mind. Excuse me if I'm skeptical that that proves it has theory of mind.

And that certainly doesn't help me with theory of mind problems.


> Didn't see it, but they only use one type of test to test theory of mind. Excuse me if I'm skeptical that that proves it has theory of mind.

Well, it looked convincing to me. But sure, this is a new field, maybe they made some mistake. This is what science is for. Skeptics need to review this claim and try to poke holes in it.

I just can't see how a "stochastic parrot" could get anywhere near generating such accurate answers to "theory of mind" questions?

And we see this in the development of LLMs over the last few years. The earlier ones were much closer to stochastic parrots. But once the models got big enough, something happened. The performance in these type of questions jumped dramatically.

It was like the LLMs had been forced through training to evolve processing-like capabilities, and not just pattern matching inputs to outputs.

Can you think of counter-examples, where you think GPT-4 can't answer a theory of mind question many people would be able to?

> I have a hard enough time figuring out what people are thinking in person (I'm on the spectrum) that doing it through text is like trying to sense the movement of fish with my eyes closed on shore.

> And that certainly doesn't help me with theory of mind problems.

Sorry to hear that! I have similar issues, but not nearly as bad. I sometimes catch myself afterwards having responded to an email just writing basic responses about how I am doing, without reciprocating with similar questions, etc.

If you get an email where it is hard to understand what the person is thinking, wouldn't GPT-4 help in giving you a list of possible alternatives?


It's like being a vegan knowing farms still use bees for pollination and "organic" fertilizers from animal farms, or a human rights activist who uses a smartphone. Sometimes people just want to feel good about themselves and ignore the cognitive dissonance.


This seems willfully antagonistic and simplistic.

There are degrees of effort in any lifestyle. Degrees of 'veganism', or 'anti-AI' use.

Is there _really_ cognitive dissonance of someone doing some action to reduce their impact on animal welfare? Doing _more than most people_ to reduce their impact on animals? I say this as a regular meat eater, not a vegan. Your point comes across as bait-y for the sake of being bait-y, rather than contributing anything meaningful to the conversation.


Someone worried about the exploitation of bees is still going to be a vegan, since it takes more acres of crops to grow a pound of beef than a nutritionally-equivalent basket of produce. The idyllic picture of cows grazing in an open field might be accurate as far as the number of farms on the planet, but pound-for-pound our meat comes from factory farms where they grow massive amounts of corn and soy to feed the animals.


"Instead of using a motor vehicle, I will deliver all packages by hand"

I mean, yea, in places it would succeed as a boutique business, but you're not building the next Amazon that way either.


Meanwhile Amazon actually operates by people walking to pickup and deliver most packages by hand to be boxed up.

People can actually get a lot of work done in an hour to the point where many things that don’t seem like they should scale end up being trivial expenses.


> Meanwhile Amazon actually operates by people walking to pickup and deliver most packages by hand to be boxed up

Given how much Amazon invests in automation that's a pretty gross mischaracterization of what they do.


They have been increasing automation, but over the lifetime of the company it's a reasonable description of how the majority of all packages were handled.

What's eye-opening is that the Kiva robots only increase the items picked per worker per hour from ~100 to ~300, even though they drastically reduce the amount of walking.


"only"? You just pointed out that they result in 300% increased worker productivity and that's "only"??? Especially when this discussion is based around someone saying they won't use technological productivity enhancers because they want to go the "bespoke" route.


How much support staff for that hardware + how large an investment + how large are ongoing costs like electricity + how much risk if that hardware fails, vs offsetting $30/hour/worker. So yes, "only", as it's a win but not nearly as large a win as it seems.

PS: a "300% increased worker productivity" for pickers would be 4x as many packages picked, not 3x. But again, actual productivity didn't increase that much, because some people need to maintain these systems and most of them expect more than $15/hour.


Who needs to build the next Amazon?

I want a business that feeds my wife and me, with maybe a little extra. I don't need to be rich.


The problem is your plan hinges on you being the one and only company in your neighborhood with the "human touch". But you won't be. You'll be competing with every other human with a family who refuses to use modern technology. Even if it's possible, it's unlikely you'll win that fight. And even if you do, how many others did you have to beat out to make it work?

It's just not a scalable plan. It reminds me of a middle schooler whose plan for their future involves getting famous or winning the lottery.


I don't have to beat others completely out. I just have to convince enough people that my software and support are better for them.

Yes, I will be competing with every human. But my market will be "every" human, and I don't need to capture all of that market; I only need to capture enough.


Promise?


It's funny to me how many people want to skip writing boilerplate and view AI as their salvation from boilerplate. Yet boilerplate is probably one of the most automated-away tasks in existence today. Templates and templating engines have been in our languages (C pre-processor, Lisp macros, etc) and IDEs since forever, and every reasonably sized company I've worked for has had repository and git and k8s and Spring Boot and Django templates available for use. And let's not even mention Googling for boilerplate on Stack Overflow.
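To be concrete, here is a tiny sketch of the kind of template-driven boilerplate generation that long predates LLMs, using nothing fancier than Python's standard library (the handler/route names are made up for illustration):

  from string import Template

  handler = Template(
      'def handle_$name(request):\n'
      '    """Auto-generated boilerplate handler for $name."""\n'
      '    return route("$name", request)\n'
  )

  # Stamp out the same boilerplate for several endpoints.
  for endpoint in ("users", "orders", "invoices"):
      print(handler.substitute(name=endpoint))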

But, more subtly, the number of people who want to ignore boilerplate entirely is a bit scary too. That boilerplate is where security issues worm their way in. Where efficiencies can be won or lost. Where sev0 incidents are born.

Boilerplate isn't there to frustrate you, it's usually there to configure your program. And if you don't understand it - if you simply delegate its generation to the least smart entity in the room - you're in for some nice 3am phone calls.


Possibly. This is certainly a pattern that works in other industries.

I am aware of a local glass company with about 50 years of operation under its belt. They have tooling, but the process is heavily manual; no aspect of it is end-to-end automated. Their primary customers are those requiring high precision for industrial applications (electrical, aeronautics).

They are forever capacity-capped because their model doesn't scale; the benefits of adding labor are sub-linear, so there's a "healthy size" for them and they leave money on the table if they scale in either direction from it. But they have their niche, they fit it nicely, and they continue to operate.


This might be a bad comparison. But I feel like Marvel movies (and TV shows) tried to scale, and lost it. And that there was a sweet spot, where less was more. It's way too oversimplified, but I think they tried to churn out hits from a factory after a build-up of some home runs, but then took on too much and have diluted their films & shows, to where it's all a bit flat and underdelivers on what audiences expected.

I like this glass company, though. Seems like they know what's up.


There is one thing that's killing them, where they could stand to automate.

Their sales and purchase-order process is also still manual; people answering emails by hand and filling out (literal carbon paper) triplicate work orders. But their aerospace customers have gone to zero-inventory kanban-style resource management, and that's a system that assumes a certain level of automation. We're talking companies placing orders by machine that then auto-generates the emails, where an email at 9AM says they need 400 of a widget by next month, an order update at 1PM says to cancel 200 of those, another update at 3PM says add an additional 500, and a final update at 6PM says cancel-whole-order, some planes just got delayed, we'll pay the $100 cancellation penalty.

None of that scales. It just runs the sales team ragged (literally: they're running out to the factory floor to find the third carbon copy and yank it from the production queue by thumbing through it).


Dang, I guess some legacy stuff that should have gone by the wayside for everyone's benefit has managed to sink its claws in and stay on for the ride! Hahaha, that sounds hilarious (and painful). Hope they can eventually see the light while maintaining their overall vision/ethos/gameplan.


>They are forever capacity-capped because their model doesn't scale; the benefits of adding labor are sub-linear, so there's a "healthy size" for them and they leave money on the table if they scale in either direction from it. But they have their niche, they fit it nicely, and they continue to operate.

That's basically most German Mittelstand companies. They have their niche that doesn't scale.


Yes, exactly.

I don't want a business to make me filthy rich. I want a business that will feed my wife and me.


The asymmetry that, it seems, has crept into modern life is that the business you describe will not attract investors, and (because the Internet has flattened the marketplace geographically) the heavy investor-backed company doing what you do 90% as well at 300% the convenience will eventually put you out of business.


Sure, if they have 300% the convenience.

But I am pretty sure I will have more convenience. I'll have a Matrix server and website where customers, including their employees, can ask me questions. I'm pretty sure it will be maximally convenient for senior developers or a CTO helping to make the purchase if interns never ask them questions about my software.


It sounds like you're in the right place. I'm thinking more about local retailers getting curb-stomped by Amazon because they barely have a website while Amazon has same-day delivery requestable via a phone app (and at the end of the day, the local retailer's only differentiation is local-market knowledge; the same trucks that deliver to their storefronts deliver to the warehouse that Amazon routes product to front porches from).

The market has proven more than willing to lose "Joe, of Joe & Sons Seed, already knows my order and has it waiting on the table for when I stop by on the first Tuesday of April" in favor of "It takes like three clicks to put my order in the online basket and I don't even have to walk to the car to get it; it's dropped on my porch."


I agree with you 100% here.

This kind of convenience matters, and I acknowledge that as someone who drives to the local bank for a lot of my bank stuff; convenience may not matter to me personally, but I would be stupid if I didn't acknowledge that it matters to most people.


> The economic value of mediocre and derivative work is plummeting. But the value of truly excellent and original work will become even more unique, more highly prized.

> A chasm between the third best and fourth best script writer, graphic designer, lawyer, etc. is just starting to open. ChatGPT might write the next Marvel movie but will never write the next Everything Everywhere All at Once.

Not my words; they're by Eric Peters, CIO of One River Asset Management.


So a race to the bottom. I wouldn't rely on the "truly excellent and original work" crutch for very long.


Except they can’t actually write the next Marvel movie today.

I suspect AI getting good enough to create a Marvel movie is basically indistinguishable from it being able to create a good movie. The problem is LLM’s are basically just creating gibberish which mimics meaningful text well enough you don’t notice it’s meaningless as long as it’s a short enough sample.

Modern AI techniques are approaching human level capacity by combining a lifetime worth of information in their training sets with nearly human levels of computation. It’s not at the level of professionals, but being roughly as good as a teenager means crossing that threshold could happen well before people are ready for it.


"Short enough sample" is definitely key. With a small model like llama 13B, you can watch the output degrade in real time as it gets longer. When the only thing in its context is 1000 words of human prompt, the next LLM-predicted sentence will probably be acceptable. But by the time it's added a few hundred words of its own output to the context, it's going downhill fast.

Bigger models slow the rate of degradation, but that's it.


Are you sure about the nearly human levels of computation? Even at a trillion or so, I thought we still had several orders of magnitude before the free parameters approached anything like the (currently known) information density of an adult brain?


Yea, I mean close on a logarithmic scale though it’s not easy to make direct comparisons.

A low end estimate of say 10 cycles per neuron / second * 90 billion neurons * 10,000 connections per neuron is already roughly 9 peta operations per second.
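Spelling that arithmetic out (my restatement of the numbers above):

  cycles_per_neuron_per_second = 10
  neurons = 90e9
  connections_per_neuron = 10_000

  ops_per_second = cycles_per_neuron_per_second * neurons * connections_per_neuron
  print(f"{ops_per_second:.1e}")  # 9.0e+15, i.e. roughly 9 peta operations per second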

Of course we also expect AI’s to be faster but conversely we aren’t trying to simulate a person that needs to walk and talk, just a rough equivalent of whatever is needed for some task.


I think it may turn out to be like that but I don't see it as guaranteed. How can we be so sure that GPT-8 will not write things that we consider to be really original and unique?

I know, it's trained with human data and its output is based on that. The same can be said of human authors, though. Arguably we never create an idea from scratch, originality is just putting ideas together in really non-standard ways.


Useful non standard ways.

I think trying to make a context free intelligent system is making it excessively hard for the AGI folks. Our brain definitely figures out what to do for pre-sentient reasons, or using pre-sentient hardware.


> Human laziness will drive them to accept more and more of what the AI says with less and less review of it.

People do this with evil marketing, the unscrupulous, personality cults, etc. Humans will always be human.

What is particularly stupid about the current AI panic is that the credulous are quickly extrapolating this to the dawn of general AI instead of seeing it for what it is: a potentially useful tool in a much larger toolbox.

Inexact tools have been around forever (see mathematics) and they can be useful when, say, paired with another tool (like a human) that is effective at interpreting, evaluating and adapting results. These are the sort of applications people should be thinking about.

Instead people are dreaming about how ChatGPT is going to write the next killer app for them, anthropomorphizing it (it's not 'wrong', it's 'hallucinating') and generally letting their imagination run away with them.

More thoughtful people will see the strengths and weaknesses of the technology and move it a step forward, making it a more useful tool. This isn't new, it's been happening since humans started using rocks and fire.

One day people will laugh about how people in 2023 got taken by a chat bot.


This is a more extreme take than I would make, but I largely agree. I think the real-world capabilities of current AI are vastly overestimated. Are there areas where it will create new possibilities? Absolutely. Will it make some workflows faster? Yeah, probably. Will it make software development faster? Maybe the really common stuff that's "generic". I think in reality we'll see small marginal gains in a slightly better autocorrect for all but the most basic of CRUD and web dev. I just haven't seen the level of correctness and higher order understanding that's necessary for real software engineering. I keep seeing people saying they use it to generate tests, but are they also using it to generate the code under test? I'd be highly concerned with people relying on a tool that is inherently unreliable to do the majority of work. Human brains are bad interpreters, but AI is an even worse one. Maybe it will get there eventually, but I don't think GPT3/4 is it.


Maybe it will be true for niche creative work, but I really doubt people care if your k8s custom resource definition yaml files have the “human touch” or not


To be fair, you probably don't need k8s either. In fact, steering clear of it will free up a lot of time and effort to actually being productive.


Ironically, people being confidently incorrect - people copying and pasting yaml files without taking the time to understand the boilerplate - have probably cost companies billions of dollars in resource overallocation and downtime.

The files compile, but boy is it wrong.


On the other hand, I think semi-business productivity will skyrocket. Just think what the landscape for open source will look like in even a year. There will be lots of cool not-for-profit efforts greatly enhanced by AI, and I think it could point to brighter and more intellectual futures (where, of course, intellectual activities will be even more elitist than today, sort of a regression to previous centuries, but with vastly more and better output), albeit after a crunch period.

For instance, I review all that the AI writes for my newsletter[0], but the kind of content I put out now wasn't easily done (teaching Chinese through relatively obscure rock songs, and also programming in Chinese) without lots more work and knowledge, and that means I wouldn't have done it without the magic AI employee.

0: https://chinesememe.substack.com/i/103754530/chinesepython


The human touch is (often) an advantage. Often, because I don't actually want to go into a bank branch and interact with a teller for the most part.

However, I expect that quality human touch will increasingly be a premium service for those willing to pay more. Which is of course a valid business model of which there are many examples.


I agree.

Funny you mention banks and tellers.

I just recently switched to a bank that has no phone tree; a human always picks up the phone. I also go in-person to do a lot of the stuff I need for security.

It's a wonderful thing to me. Maybe most people don't want it, but I sure do.


Heartily agree with this worldview: as I argued in https://news.ycombinator.com/item?id=34277750, I believe this is how creatives will be able to make a living in a post-AI world, by continuing to hand-craft art, stories and software that will stand out from the bland AI-generated landscape, and will be enjoyed by other humans, bored by and/or not willing to consume an endless flood of AI-generated content.


You're failing to account for the trajectory of this technology where AI easily produces content indistinguishable and superior to human content.

When such AI content becomes cheap and pervasive, AI will suddenly turn into a mirror for humanity. It will show us that "hand crafted" content is itself part of a bland and mechanical content generation algorithm.


Two things:

1) That is a horribly denigrative view of human creativity, one that simply does not reflect reality.

2) I kind of don't care: creativity always was and always will be about humans wanting to create something, the satisfaction and excitement of writing down a fanfic you've been thinking about for the last 3 weeks, the emotion of bringing your thoughts to life. Even with endless AI-generated content, I will never stop using my imagination (regardless of whether there will or won't be a market for it, and regardless of whether it may be worse than, say, Dostoevsky's works), and the same holds true for every other creative individual on this planet. I feel sorry for the people who have never felt creative excitement and would rather waste years of their life consuming soulless content.


> 1) That is a horribly denigrative view of human creativity, one that simply does not reflect reality.

I disagree with this. Part of the long trajectory of imagining and creating AI is figuring out what makes a human do what it does. Many believe there is no magic spark which cannot be replicated. And if there's no magic spark, then eventually AI can replicate the entire human creative experience. Whether it will or whether we want it to is a different question.


I'm one of those that believe there is a magic spark :)


>1) That is a horribly denigrative view of human creativity, one that simply does not reflect reality.

No it's not. It's a realistic view. Do not discount a view just because it's not in line with positivity. Only the realism of the view should be considered. And unfortunately it does reflect reality.

There's a huge amount of data supporting my viewpoint. Most of the content right now on the internet, in books, and in movies is generic human-generated content. The very existence of this content is indisputable evidence for this.

Also you can't discount the trajectory of AI technology. The progress of the last decade is indicative of progress for the next decade. If the trend continues, logically, the only outcome is that the AI surpasses us.

>2) I kind of don't care: creativity always was and always will be about humans wanting to create something, the satisfaction and excitement of writing down a fanfic you've been thinking about for the last 3 weeks, the emotion of bringing your thoughts to life. Even with endless AI-generated content, I will never stop using my imagination (regardless of whether there will or won't be a market for it, and regardless of whether it may be worse than, say, dostoevskij's works), and the same holds true for every other creative individual on this planet, and I feel sorry for the people that never felt the feeling of creative excitement, and would rather waste years of their life consuming soulless content.

Good for you. It's good to exercise your creative brain and be excited about it. My comment isn't about this at all. It's simply about how your and other people's content could become indistinguishable from, and inferior to, AI content. You can be creative all you want, but it doesn't mean other people will care or want to consume the creative products you produce.


> There's a huge amount of data supporting my viewpoint. Most of the content right now on the internet in books and in movies are generic human generated content.

You literally picked the closest thing to AI-generated content, soulless statistics-driven garbage, literally the first thing that will be automated by AI in corporate media ;)

> You can be creative all you want, it doesn't mean other people will care or want to consume the creative products you produce.

If there are any people at all with my same worldview (and there are, outside of HN of course), people will want to enjoy creations made by a human, not by a corporate AI. Either way, as I said, the absence of a market does not affect in any way my will to create ;)


>You literally picked the closest thing to AI-generated content, soulless statistics-driven garbage, literally the first thing that will be automated by AI in corporate media ;)

I also picked the majority of all content. Generic soulless content makes up 99% of what's out there. Anything more than that is usually a random permutation which is immediately picked up and regurgitated until that unique content itself becomes soulless and generic.

You call it the first thing that will be automated and you're likely right. Another synonym for soulless content that is 99% of all jobs. Do you have 100 coworkers? Well that means 99 of them are gone.

>If there are any people at all with my same worldview (and there are, outside of HN of course), people will want to enjoy creations made by a human, not by a corporate AI. Either way, as I said, the absence of a market does not affect in any way my will to create ;)

Plenty of people on HN share your world view. The problem is that people both on and off HN won't be able to tell the difference. They just choose the better content, while the people utilizing AI are mum about where that content came from.

For example the post you wrote is obviously written by an AI. You went to chatGPT and told it to craft a specific response to my points. Prove to me you didn't. Prove to me you did. You can't. So what does it matter? I'm talking to you anyway.


> Prove to me you didn't. Prove to me you did.

The same could be asked of any writer pre-AI, asking them to prove that they did not offload their job to a ghost writer, and that is actually not impossible to prove, just talk with the person IRL ;)

Still, society is going down a terrible path with LLMs, a path filled with misinformation and meaningless garbage.

I believe common sense will prevail and legal limitations will be imposed, to avoid both the spread of misinformation and the degeneration of our culture (and this has already happened in the US, where you can't copyright AI-generated content).


Not sure how this all will play out. It's interesting though that many of the Luddites also kept calling out the unacceptably low quality of the textile products made using automation.


I hate to break it to you, but a for loop is a for loop is a for loop is a for loop...


Here, I have this short program; can you please tell me if it will terminate or loop forever?


Humans can’t solve this either :)


My point is code is very very complex, contrary to the for loop is a for loop is a for loop is a for loop.


> Having the human touch will be an advantage, not a disadvantage.

My guess is that having the human touch could be like being a tailor in the age of big textile corporations.

Sure, a minority of people use them (and larger numbers use them for really special occasions, e.g. wedding dresses) but it's a tiny portion of the market.

Which of course doesn't mean that you can't be one of the actors of that small market, good luck!


Thank you!

And I think you're right, except that I am also betting that that tiny portion of the market will grow as the market becomes flooded with cheap, wrong stuff.


I think this is like people in the 90s saying they'll never use a mobile phone. Or that they didn't want this newfangled 'internet', because their desk phone, written letters and books are fine.

Sure there are some people still living like that in modern society, but they are really rare. Yet in those days hearing this sentiment was really common.


This is roughly how I look at it. Human certified product will command a premium in a world where AI will lose its ability to differentiate. And if at some point we really can't tell the difference any more then we should re-think our societies and what makes us human, because then we might as well roll over and give computers voting rights.


> becoming more like the AI...

Current AI systems have a wide variety of outputs. They can totally be asked to complete a task in a 'wildly unusual' way, and will come up with all kinds of non-bland answers, just as you hope to do manually...

If answers being 'non-bland' is your differentiator, then someone will come along and specifically ask an AI for that.


And the AI, not being any different from the AI used by anyone else, will not give an answer that isn't bland. It's still the same weights used by everyone else.


Over the weekend, I was trying to add a feature to a largely unfamiliar codebase. ChatGPT appeared to have knowledge of this codebase, so I asked it many questions as I worked, in order to understand the big picture. In hindsight, I'm pretty sure almost every answer ChatGPT gave me was wrong.

However, even with this outcome—I've had cases where ChatGPT was significantly more accurate—I think ChatGPT was helpful on net. The ability to propose ideas and talk through problems with a robot was really helpful when I got stuck, in a socratic dialog sort of way. Perhaps I could have gotten the same result from some sort of question-formulation exercise—all of the useful insights ultimately came from me—but I think ChatGPT's responses helped me follow my own reasoning, if that makes sense.


It sounds like you used ChatGPT as a rubber duck. Although this rubber duck can talk back to you.


Absolutely! Laziness is the reason so many bright minds waste their talent optimizing ads, maximizing nudging, and producing clickbait because "it pays me such an awesome salary", and this same laziness is slowly causing the decline of big tech. Stay human; it is what we are.


I like this perspective, despite being incredibly excited about the advancements and potential of AI.

While it kind of feels like reading tea leaves right now, I can see a time where AI-driven products prosper, lowering the bar for good ideas that might otherwise never have come to market.

Some of those good ideas will take off; those AI-built products may not be able to keep up. Where those AI-built products can't keep up is where non-AI or post-AI (or whatever we want to term it) businesses like yours, or experts, or consultancies, can come in and help those AI-built businesses grow beyond their original constraints.

I'm personally excited by it all. It suddenly feels to me like the Wild West (positive and negative connotations included) and we're all on an adventure together.


I started a business over the last week, and I:

Rapidly iterated domain names with ChatGPT

Did logo mockups in Midjourney

Did website list mockups in Midjourney

Rapidly iterated copy and cool-sounding text with ChatGPT

Got font faces and style guides with ChatGPT

Got landing page code in React with ChatGPT

I can code. I would have been distracted by juggling all of the designers I would have hired, so distracted that I probably would have hired a developer too. I'd have been slowed down by 2-3 weeks waiting for acceptable output from the hired professionals, or gotten discouraged and forgotten about this business idea.

I didn't lay off anybody; I also didn't hire anybody.

I looked up "prompt engineer" on Fiverr just to see if that looked viable; it's already oversaturated and nobody has orders in queue.

I think the "take our jobs" effect will start with, and be most pronounced as, a silent lack of open positions.


That sounds like an imitation of a business so far. How many sales have you closed? What have you delivered?

Your domain name means nothing, as long as it is not weird.

A logo is not necessary until you are big.

"Cool sounding text" does not give you sales or trust from clients.

Font faces are not of any importance when starting a business.

A well made web page with a structure and copy that is actually useful for potential clients is what is essential for an online business. An honest description of what you offer and on what terms. Good professionals can help you do this, or you can do this yourself with some effort and good sense. Bad professionals or AI will create something meaningless, window decoration.

I see a lot of programmers take the same attitude as small/medium size business owners, that the design and web site is a "storefront" or some kind of necessary evil to get done and out of the way as quickly as possible to get on with the "real work". But the actual real work is the web site if you're an online business. Your care should go into it.


Sure, yet the point is that people at this stage would have hired others to help iterate. Whether or not it generated sales, the transactions and velocity in the economy would have come from people trying things and paying others to help them.


Why did you use React for a landing page?


Not the OP, but I use React for all my small projects. I do it because it's the right tool for the job. Why is it the right tool? Because I spend 40 hours a week working with mammoth React projects and therefore it's the tool I can "just be productive" in. When working on side projects it's critical for me to maximize time spent working on the business logic rather than reading documentation on some new tool and googling new error messages to figure out what I didn't glue together correctly.


Because I want the option of putting it on my GitHub and showing potential employers who also use React that this is what I do in my spare time, because they like to believe that.

Hedging bets by simply being consistent and staying relevant. There aren't many open source opportunities that also feel like a good use of my time.

I also like it. Specifically, I use Next.js with Typescript. A lot of components I use and will use leverage this paradigm.


I understand your stance on the use of artificial intelligence and how you hope to differentiate yourself from AI users by focusing on excellent software and support.

However, AI has the potential to enhance efficiency and accuracy in many cases, which can be beneficial for both the company and the customer. Being able to adapt to emerging technologies is a key component of business success, and completely rejecting AI may limit your ability to compete in the future. Of course, the implementation of AI should be strategic and balanced with the need for a personalized human touch.


This may work for creative roles, but it won't work for data entry, number crunching, task automation, etc. No one cares if a human is doing a mundane task. The thing is...creative roles are limited.


Counterpoint: one thing I think AI will be able to do really well early on is refactoring.

Code is often ugly because humans just don't have the time to make it look nice. But if I had a good AI tool, I could make my code beautiful.

This is hardly the first time humans have invented a tool to make complicated tasks easy. The sweet spot has always been knowing how to use the tools to maximize your productivity, but also knowing how to do it the hard way too in case something breaks.


Countercounterpoint: I doubt that AI will do as well in refactoring as me.

I refactor religiously. I hate code that is subpar, so I'll refactor and refactor. In my current project, I have 180 commits that mention refactoring in the log message, and some of those are huge redesign commits. And I haven't even released it yet!


I would do that if I could, but at my brand name tech company (and a lot of others) they seem to treat refactoring as completely wasted time.


That sucks. I'm sorry.


I manage to slip it in out of principle, but my whole career I've been cleaning up other people's messes and getting pushback.

I think the original "10x engineer" study showed that engineers that wrote clean code maximize everyone else's productivity. (It's debatable, but certainly a good argument in favor of writing good code.)

But most people seem to think that the most productive engineers are the ones who can create features as rapidly as possible.


"Sure, robot assembly units have a much higher throughput with a much lower error rate and virtually zero risk for injury, but they lack a 'human touch'. That will be our differentiating factor."

Even if one manufacturer gets away with this argument, it fundamentally hinges on them being the minority. The question then becomes who will be the lucky few to get away with this argument? And what will the vast majority choose to do instead?


Working machines are different from thinking machines.


That sounds similar to companies in the 1960's refusing to use "soulless" computing machines aka computers. Not sure it worked out well for them...


As AI perfects mimicking human output, the line between human and AI-generated content becomes irrelevant. Once "organic" content is trendy, the market will flood with cheap fakes. Good luck with your "differentiator" in a sea of indistinguishable content.


It could one day be possible for AI content to be so superior to human content that it becomes distinguishable.

We may actually be identifying human content by how inferior it is to AI generated content. It could be happening already.


Why would I choose the inferior product?


Have you met hipsters ?


Who said you would? You'd choose the superior product.


AI-driven businesses will be cheaper and faster. The "better" factor will be a case-by-case situation. One could argue that your competitor, combined with their skills, expertise, AND AI, would have a market advantage over you.


And my bet is that they will not, long term.

If you use "AI" to increase productivity, you're doing it by reviewing the code from the "AI" less than you would your own. That lack of review, and testing, will slow down velocity over time as the software becomes untenable, unmanageable, and too complicated to understand.

Meanwhile, my slow velocity at first will result in great design and little technical debt. Eventually, my velocity will increase, and I'll beat my competitors.

"A little bit of slope makes up for a lot of y-intercept." [1]

[1]: https://gist.github.com/gtallen1187/e83ed02eac6cc8d7e185
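
To make the slope vs. y-intercept point concrete, here's a tiny worked example (the numbers are made up purely for illustration):

    Competitor (AI-assisted):  output(t) = 10 + 1.0*t   (big head start, shallow slope)
    Me (careful design):       output(t) =  2 + 1.5*t   (slow start, steeper slope)

    Break-even: 10 + 1.0*t = 2 + 1.5*t  =>  t = 16

After 16 units of time the steeper slope is ahead, and the gap only grows from there.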


There exists a world outside of programming. I've been using "AI" every day in my work for some time, and it's an essential tool by now. It's proven.


What is your work?

I bet your work is less good the more you rely on it.


I use AI to communicate and close sales with clients in languages I don't understand. It would be impossible for me to learn all languages, and too expensive and slow to hire professional translators. The AI simply works and has been reliable for years by now.


I think you're describing the difference between AI creating code and developing _with_ AI, which firmly keeps humans in charge while enabling them to be many times more efficient.


They'll only be more efficient if they don't review the code. Otherwise, these tools are just typing helpers, and typing is a small part of the job.


Not sure I agree, but to be fair my perspective is informed by this talk I watched not too long ago

https://www.youtube.com/watch?v=qmJ4xLC1ObU


I've now watched that video, and I'm going to say what I disagree with.

First, he claims that Copilot/GPT only needs more data and more compute to get better. I disagree. It needs both, for sure, but I think it needs more.

Also, it won't get any more data! As LLMs are used more and more, the data fed in will look more and more like what they would generate anyway, which will lead to the models overfitting and getting worse.

He claims that bots make mistakes quickly and that allows iteration. This is true, but the iteration will probably be more like bogosort than anything intentional. (Bogosort is famously very expensive.)

Why is it like Bogosort? Because even good "prompt engineers" are more or less creating incantations on the fly to coax information out of a black box. To me, that seems like a more or less random search. Hence: random search, random sort, it's like Bogosort.
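
(For anyone who hasn't met it, bogosort just reshuffles at random until the result happens to be sorted. A minimal Python sketch, purely to show why that kind of random search blows up:)

    import random

    def bogosort(items):
        # Keep shuffling until the list happens to be in order.
        # The expected number of shuffles grows roughly factorially with len(items).
        while any(a > b for a, b in zip(items, items[1:])):
            random.shuffle(items)
        return items

    print(bogosort([3, 1, 2]))  # fine for 3 elements, hopeless for 30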

He claims reviewing the code is 100x faster than writing it, in his experience. Well, yes, because you don't dive into it as much as the original writer did. Reviewers only have so much time, so they spend only as much time as they have. They could spend more and catch more bugs.

He claims that the AI will (eventually) take instructions from you and run them directly. He says they won't generate the code, they are the code.

They can only do this if they are Turing-complete, which, if they are the typical neural nets I've seen, they are not, because data can only flow one way. Turing-completeness requires data to be able to flow conditionally, forwards, backwards, and in any combination. These models cannot do that. They would also need recursion.

He claims they have reasoning capabilities. I claim they only have the appearance of reasoning, borrowed from the reasoning capabilities of the humans that wrote the material used in the training sets.

His example of the cards is good, but not that impressive. It didn't tickle any Turing-completeness.

Those are my thoughts as I watched it. It was pretty good though. It was convincing. I know I'm weird in that I just cannot be convinced.


Interesting summary and rebuttal. Appreciate you taking the time to watch the video I linked.

>>He claims they have reasoning capabilities.

I think here you touch on the crux of the LLM conversation. In my limited experience with GPT, it does appear to have some basic reasoning ability, but it could be that it's really good at regurgitating its training dataset and it just appears to be reasoning. I think over time we'll be able to sort this question out.


> I think over time we'll be able to sort this question out.

I hope so.


Everyone uses AI today, directly or indirectly. All search engines, and possibly Stack Overflow and Reddit, will start supporting LLMs. Unless you never use these in your business, you will be using AI.


I doubt marginalia_ru's search engine will, but even if it does, I'll make my own.

Yeah, I am that sure of my position.


You’re going to develop this software in house and not allow your engineers to use AI in that process?

That’s not only unenforceable, it’s business suicide.


I don't have employees, never will. Pretty easy to enforce if it's just me.


I guess if you never grow that much. I don’t know why you would purposefully handicap yourself by not taking advantage of the tools available. Why make yourself less competitive?


Because I think "AI" is the handicap.

"AI" will help at first, sure, but as it is used more and more, the code will become less and less well-designed and built. My code, on the other hand, will retain full quality, and my long-term velocity will out-pace anyone using those tools.

"A little bit of slope makes up for a lot of y-intercept." [1]

[1]: https://gist.github.com/gtallen1187/e83ed02eac6cc8d7e185


I've been using GitHub Copilot and ChatGPT at work for a year now. I think you're wrong. You don't just willy-nilly accept code from the AI any more than you would from Stack Overflow. But it would also be silly to ban yourself from copying code from Stack Overflow. Don't trust any code, regardless of the source, and don't stop using your brain.


With how bad SO has gotten, I actually don't copy code from there anymore.


Probably not the best example, you get the idea though.


AI is not well-defined and what has that label will change quickly. So I suppose you just have to stay a bit behind the times.


There's this really good story called "John Henry".


Is your home furniture from an artisan craftsman, or from Ikea?


Craftsman. Who buys garbage from Ikea? This is a site where people make FAANG money, not minimum wage. Do you eat at McD every day, or pay more for quality?


John Henry beat the steam powered drill after all!


Yes, he did.

But working machines vs thinking machines is still a different ball game.

I won't give details of why I believe the way I do (I might someday; I have a blog post draft), but I'm extremely confident that thinking machines will never surpass humans.


Here’s the thing - John Henry died beating a machine designed only to dig a hole. But JH was capable of almost any task but chose to die to best a machine at one thing.

Thinking machines already blow away humans at most single tasks if they're designed for those tasks. There are a few tasks that LLMs are assaulting where humans are undeniably better, and some more that aren't currently being assaulted.

At some point it will be possible to assemble these single-task machines into an ensemble. They will collaborate to accomplish most human tasks in ways superior to humans. But there may still be areas we can't cover with our machines.

More importantly, I think that for the same reason we don't build steam drills that also weave cloth, we won't build machines that are better than humans at all things. What's the point? Other than novelty, the only reason I can see to make a machine that's superior to humans at all things is interplanetary terraforming or other such space operations requiring extreme intelligence, adaptability and capability while surviving a hostile environment. Even then, the act of hardening the complex electronics required might make it easier to just make a nice terrarium for humans.

I don't believe there's any argument possible that we are prohibited by physics from creating an intelligence like our own. Such a prohibition would imply one of a few things.

There could be no other intelligence like ours, because we are unique - but we already know of many animals with approaching intelligence, we evolved from less intelligent animals, and it would have to preclude alien intelligence because somehow we are the exception.

Or, intelligence requires some "spark" from the divine - but nothing provides any evidence of such a thing, and as we whittle away at the problem top-down and bottom-up, it appears more likely that intelligence, while extraordinarily rare, is a naturally emergent phenomenon - and if there is a divine spark, it's the spark of life and evolution that leads to human-level intelligence.

Or, there's some complex process we haven't understood that leads to human intelligence (quantum, whatever). Even if true, it can be understood and replicated; that's what science does.

Whichever of these is right, the last point holds. The amazing thing about intelligence combined with the process of science is that if there is any mechanism by which intelligence arises, it must be observable; and if it's observable, it can be replicated, and so it must be possible for humans to create it.


A very strong claim. Do you think intelligence from other biological systems could also never surpass human intelligence?


Does it relate to your religious beliefs?


Yes, but it's also something where I've been thinking hard about the why. I hate not knowing why, even if it's for my religion, so if there is something in my religion that I can't explain, I spend time thinking, dissecting, working (and praying in the case of religious things) until I understand the why.

I don't fully understand the why yet, but I do understand a lot of it, enough to be completely confident in this.

I'm working on the rest, and once I do understand it, in a scientific sense, I'll write and release the blog post.


This is 100% what I've been thinking, too.


Given the progress, current trendlines point to a possible future where AI content is indistinguishable from the "human touch". AI content may even be preferable to, and better than, human-produced content.

People are reacting to AI as it is right now without projecting where this technology is going in the future. Given the rate of progress in the past year, the scenario I describe is a possibility that is insane not to consider.


> Given the rate of progress in the past year, the scenario I describe is a possibility that is insane not to consider.

Personally speaking, there's no point to me considering that. If what you are projecting turns out to be true, all it means is that there no place for me anymore.


Why? What should affect whether something is considered is its likelihood of happening. If AI has a 99% chance of replacing you, would you rather consider only the 1% alternative simply because in the other case there's no place for you anymore?

If AI will replace you there are steps that can be taken to prevent it from replacing you. This involves things like resistance, or changing what you do into something that can't be as easily replaced.


> What should affect whether something is considered is its likelihood of happening.

Right, which is why I don't spend a lot of time considering a future where "AI content may even be preferable to, and better than, human-produced content."

In part because I don't think this is likely at all. But also in part because if it does happen, then there's literally no way to compete with that. My career will be over, and it's unclear that there would be any other careers to switch to.

> If AI will replace you there are steps that can be taken to prevent it from replacing you

I'm having a very hard time seeing what could be done (in the possible future we're talking about).

> changing what you do into something that can't be as easily replaced.

But the possible future we're discussing is one where there isn't anything that I can't be easily replaced at. All that would be left is menial labor.


>I'm having a very hard time seeing what could be done (in the possible future we're talking about).

Easy. While industrial automation has largely eliminated repetitive mechanical tasks, robotics has not yet advanced to the point where it can replicate the versatility of the human form. The gap is partly closing with 3D printing, but this is not moving nearly as fast as AI. Additionally, there are just multitudes of construction jobs, maintenance jobs, and physical jobs that can't be replaced.

I wouldn't go so far as to call jobs involving physical movement "menial". Think of the machinist and the carpenter.

This is one path. The other path is regulation or resistance. Ban AI usage like how the government bans your right to copy things you own (aka money).


> Additionally there's just multitudes of construction jobs, maintenance jobs, physical jobs that can't be replaced.

Yes, I should have been more specific: I can't think of a way I could adapt to such a world in a way that wouldn't be utterly soul-crushing. If those sorts of jobs were appealing to me, I would still be doing them.

In any case, this is why I don't spend a lot of time thinking about this eventuality. I think it's an unlikely one, and it's a possible future that looks incredibly dystopian to me.

Since it's an unlikely future that I consider incredibly dystopian and have no idea how I could possibly adapt to, the most logical thing to do is to not waste a lot of time thinking about it. If/when that future comes, perhaps there will be options that are unclear to me now, or perhaps I'm just doomed. Either way, that's a bridge better crossed when I get to it. There's far too much uncertainty right now to engage in anything like planning.


> There's far too much uncertainty right now to engage in anything like planning.

I agree there's a huge amount of uncertainty. But I think there still must be some sort of plan B. Maybe a nest egg of sorts, and a plan for what to do in case it all goes to shit.


Well, yes.. an emergency fund (the bigger the better, but I compromise on keeping at least 1 year's pay) and disaster planning are extremely important regardless of whether or not advanced AI exists.

A plan B that targets the consequences of an advanced AI specifically is more problematic because, if the people who are all excited here see their fantasies come true, then not only are we all screwed, but the hardship will be permanent. I haven't yet worked out how to develop a contingency for that.

And look at all the comments on this very article replying to people who are positing contingency plans: it's almost all ridicule and telling them that their efforts are pointless. So, apparently, lots of people are upset at the very notion that people are considering contingency plans.


No I mean more than an emergency fund. An emergency fund can be part of it, but I think you have to plan more given the greater uncertainties posed by AI.

More than a fund, alternative/passive income sources are another factor. Given the situation now, it makes sense to ramp these side sources up and get them closer to mainline viability.


Absolutely true. They're very hard to get going, but worth the effort -- AI or not.


> Having the human touch will be an advantage, not a disadvantage.

Maybe if you're selling high-end coffee or leather bags for status. That's the fashion industry.


I see this sentiment expressed frequently:

> "The past decades have shown that the benefits of technology are unevenly distributed, with some rich becoming super-rich with a good portion of the people lagging behind. However, it is also fair to say that the average well being in the world has steadily increased."

The average is far less important than the median when it comes to social stability. You could argue that the establishment of plantation slavery in the American South improved the average wealth and leisure time relative to the pre-slavery economy, but that average would be highly skewed by a few people hoarding all the wealth and enjoying all the leisure time; the median tells the real story.
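
A toy illustration of how far the two can diverge (numbers invented purely to show the effect):

    import statistics

    # Nine households at $50k each, plus one at $10M (made-up numbers).
    wealth = [50_000] * 9 + [10_000_000]
    print(statistics.mean(wealth))    # 1,045,000 -- looks prosperous
    print(statistics.median(wealth))  # 50,000    -- the typical household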

In the case of AI, ensuring that the results of individual productivity gains are not hoarded by the uber-wealthy is important. If AI helps an employee finish what used to be a 12-hour job in just 6 hours, why not cut the employee's working hours in half while retaining the same salary, instead of giving the executive suite another million-dollar compensation boost?

A 30-hr work week, enabled by AI assistance, would mean people had more time for community engagement, family interaction, personal development and so on, leading to healthier societies.


I'm not sure I agree with that assertion in the first place. Life has improved a lot for most people, but it has improved in ways that you don't really notice. Your car is safer, your apartment is safer, your neighbor's apartment is safer (so it's less likely to burn down yours), there's less crime, there's less hunger, there's more available life saving medicines etc etc.

These improvements just aren't all that noticeable. But this is the stuff that really matters. Whether the wealthiest person's bank account shows an extra zero or not means very little in comparison to your kid being able to get medicine for a disease that wasn't treatable 30 years ago.


While what you say is true, housing is becoming an issue in pretty much every first-world country, purchasing power for the bottom 50th percentile is at historical lows, young people have issues entering the job market, and the retirement age is constantly being pushed back, exacerbating the difficulties young people have in entering the job market. So, yes, quality of life is increasing in some dimensions, but it's also drastically decreasing in others.

For instance, until I managed to purchase a house I felt very stressed about renting for my whole life and owning nothing of consequence. This severely impacted my mood and, indirectly, my quality of life.


You only need to look at the productivity and remuneration graphs over the last 50 years to see that that doesn't hold. They're diverging. Inequality continues to increase; yes, everyone's standard of living has been improving, but the common man is getting less and less of the additional value-add from their additional productivity.


If you look at any metric like life expectancy, literacy rates, child mortality, things have gotten so much better. (One exception is the US where life expectancy dropped. As was heavily discussed online in the last few days, this is mostly due to gun violence, traffic accidents and drug abuse which IMO are largely cultural and not due to automation)


> this is mostly due to gun violence, traffic accidents and drug abuse which IMO are largely cultural and not due to automation)

Not really. It's mostly due to food and metabolic disorders (i.e. diabetes, heart disease; iirc heart disease is the #1 cause of death in the US, and depending on how you group, diabetes is in the top five), which actually underscores the point. People are dying at higher rates because they can get cheap calories at a rate unprecedented in human history.



Both this and what I wrote can be true at the same time.


Again, are you looking at the average or the median? Just saying the same thing over and over without data to back it up doesn't count for much. The discrepancy is fairly large - average US 'familial' net worth is $748,000, median is only $122,000 - and the median net worth for non-homeowning families is only $6,000.

https://financebuzz.com/us-net-worth-statistics

A serious medical condition is also likely to cause bankruptcy and loss of homeowner status for many homeowners in the middle of the US income range, which is unusual relative to other industrialized countries.

If you add in obvious stress factors - economic instability, environmental degradation, increasing climatic variability, the revival of infectious disease threats - then you can see why a dystopian future is a real probability, and that the status quo approach is unwise.


Is there a median child mortality or literacy rate?

For net-worth, which I never mentioned, your own source shows increases over time of both median and average since 2016.

All the distressing situations you describe of course are real, but were they less common in the US in the past?

QoL has gone up tremendously over the last fifty years, especially if you look at it globally. Do we need something like UBI to avoid terrible poverty for everyone? Absolutely! As we slowly move towards a post-scarcity society, re-distribution of wealth becomes more and more critical. That said, I'd rather be bottom 30% of income today than top 70% 50 years ago. No money in the world could have bought you minimally-invasive surgery, modern anesthesia...

Nice summary of all the ways things have gotten better: https://twitter.com/JosephPolitano/status/163837967499987353...


UBI? Let's just start with reliable food, housing and medical care (including mental health), and shore up some kind of retirement funds. Then let's talk about cost-effective daycare so raising a child doesn't destroy your financial prospects. If there's anything left, I'm all about it.


I suggest UBI because I believe it's the most cost-effective way to allow people to get those things. Targeted interventions often have high overhead and undesirable side effects. For example, demand-side interventions to support limited goods and services drive up the price of those things, while cost controls frequently lead to prices actually going up due to the regulatory overhead. See https://www.niskanencenter.org/cost-disease-socialism-how-su...

Let's just raise the poverty floor instead and let people decide what to do with the money.


> If you look at any metric like life expectancy, literacy rates, child mortality, things have gotten so much better.

Indeed. But I was talking about drug pricing, not all of that.


I'll bet that a poor person today has cheaper access to better medicines than a poor person 30 years ago.


Yes, that's progress; lucky for some of us, we no longer live in a world of subsistence (living hand to mouth at the mercy of the world around us). The real question is: are we creating additional value, i.e. is the value that we are developing/building/creating compounding (you'll see this in increasing GDP per capita and increased productivity), and what proportion of it are we taking? The amount we are taking for our efforts is deteriorating and inequality is widening; eventually it gets to a point where social mobility is halted and you're back to subsistence.


I know people below the poverty line who have access to cheap forms of insulin that didn't even exist fifty years ago because rich people funded the research

They wouldn't even be alive if your lie was true


> They wouldn't even be alive if your lie was true

I'm not lying. I admit I could be wrong (although I can bring up counterexamples to yours), but I was not intentionally attempting to deceive anyone.


> why not cut the employee's working hours in half while retaining the same salary

Because every C-suite wants to see YoY and MoM growth, every single year. An easy way to get that in your scenario is to have the employees suddenly become 2x as productive with no extra hiring and minimal additional cost.


It's not only C-suites. It's also everyone of us because we like to see our retirement savings go up. Raising the floor needs to happen at a societal level through a coordinated effort like UBI. If a company were to decide to just give all productivity gains to its workers, that company would get no more funding and ultimately lose to its competitors.


That rarely happens, on the contrary, any imposed standards or regulations are simply outsourced to wherever those standards do not exist.


> If AI helps an employee finish what used to be a 12-hour job in just 6 hours, why not cut the employee's working hours in half while retaining the same salary, instead of giving the executive suite another million-dollar compensation boost?

Because that doesn't benefit the shareholders.


If you can not get your mind out of the employer/employee lie, then no progress can ever make anything better.

AI is a tool to help the employee get rid of the employer.


I'm not necessarily concerned that I will be entirely without a job because of AI. What I am concerned with is that the aspects of my job that I derive pleasure from will be replaced by AI.


So far (super early days, I know) the main thing I've used AI for at work is to write tests for code I'm working on. I get to do the fun bit, solving the problem space, and the AI gets to do the tedious bit (and do it faster than I would). I'm liking at least that level of AI integration.


This is the one that is most scary to me as a user of software. Have the AI write some of the code, sure. But tests have to be correct. Autogenerating a mountain of mostly-correct tests strikes me as a great way of ending up with surprising behavior that is a nightmare to untangle.


> But tests have to be correct.

From my experience, ChatGPT-3 has been pretty good at exercising all meaningful branches (I look at code coverage results, I don't care for percentages at all), in the least amount of tests, on the first go. I definitely have to modify each test quite a bit, because it frequently hallucinates API calls that don't exist, but the code that it produces is an incredible blueprint. And I haven't even attempted to use RCI yet: "improve your answer" or "your answer is wrong because..."; ChatGPT-4 is supposedly extremely adept at reflecting on its responses. I can only imagine where this will be in a few years.

I was about 6 months late to Copilot because I was incredibly skeptical about it, without having used it in anger. My skepticism was mostly (but not entirely) wrong. Having actually used ChatGPT in anger, I'm now extremely skeptical of that degree of skepticism.

It's like picking up C in the 1970s. We're at the very beginning when things are pretty rough, but the skills that I am building today are going to be foundational in the future. If you're dismissing AI without giving it a few weeks to earn its keep, it's going to be rough to catch up when things improve to the point where it is required.


If you write a unit test for each branch in your method and just put in the current behavior as the expectation, all you've done is created a test that says "the method does what it currently does."
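
A characterization test like this (entirely hypothetical names, a sketch rather than anyone's real code) makes the problem concrete: the expectation is just whatever the code does today, bugs and all:

    def apply_discount(price_cents, is_member):
        # Suppose the intended rule was 10% off for members, but 15% slipped in.
        return price_cents * 85 // 100 if is_member else price_cents

    def test_apply_discount_member():
        # A test generated from the current behavior happily enshrines the bug.
        assert apply_discount(10_000, True) == 8_500

    test_apply_discount_member()  # passes -- which is exactly the problem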


> the method does what it currently does.

It isn't that stupid, and that was done at least a decade ago with static and control-flow analysis. As for AI, I was recently writing tests for a VT push parser in Rust (which is novel code, so no parroting here) and it clearly knew enough about VT to write a correctly failing test. I had a bug in my parser, and the test that the AI generated found it.

At the end of the day, I'm not sure why anyone would believe the critique of someone who hasn't used a tool in earnest.


That's not necessarily true. For instance, with snapshot testing, although there may be edge cases missed, the cases themselves cannot be correct or incorrect.

Or you could have the AI write test cases (writing them out is often the most laborious part) and then validate them by hand. That'd be little different from writing them yourself, though again edge cases may be missed. You just skip the un-fun part of repetitively typing out the code for each case.


Tests are easy to verify - if they aren't they are probably bad tests. Using AI for stuff that's easy to verify has been good so far.

Knowing what tests to write is another story.

My problem with AI-generated tests is that they lead to over-testing, and it bogs you down once you have to do refactoring. Ideally detailed tests should come once you're on like the 3rd iteration and really sure you've nailed the design (oh, I hate TDD, if that isn't obvious). With AI I'm getting detailed tests and first-try implementations, bad code locked in everywhere :(

More tests != better code, tests are still code - the less code you have to satisfy some goal the better.


I see the assumption that automation means removing the human from the task. The article is precisely about a human still intermediating. Writing tests doesn't mean blindly accepting the test; it means the boilerplate of the test is auto-completed based on a human description, then a human verifies the correctness of the output. Will people take the shortcut? Of course. But AI didn't create or magnify that problem. How many "return true" or incomplete tests or missing significant cases etc. have you reviewed or stumbled across? Or worse, the test-free code base?


I've done a few rounds of "here is a spec. Write tests for it." ... "Here are some tests. Write an implementation which passes them."

That gets me checking the tests in the middle so I can fix them up if I need to.


It seems that the more useful way to use AI would be TDD, where you write the tests and perhaps a simple implementation, and AI makes the implementation good/fast and points out inconsistency in the tests.


Of course I validate the tests. I get it to churn them out then go over them to make sure everything is sane and covered as expected. I'm surprised that people would assume otherwise.


For me, the alternative is no tests at all.


Having come from a background of very bespoke tests for systems level software where conventional unit testing just doesn't really work for 90% of cases, I have used ChatGPT to learn how to and quickly write a large test suite for a web-API project I am working on. While ChatGPT has been generally crap at producing anything I would consider committing to my repo, it has worked wonders as a rough guide and a motivation to getting the tests written. Can't be bothered to come up with test ideas? Ask ChatGPT, then I end up writing about twice as many tests anyway, just from getting that initial push.


This is important. Our relationship with those who pay us for labor is indeed “sold” as “do something you love and …” and frankly that does help people over time.

I love coding. Enough that I don’t want to age into being a director/manager, etc.

But what is important to understand is that from the other direction the owners of business only care about profit - they don’t care abt your love for parts of your work, except when they can gaslight and use it to keep you working. For them.

Remember when Dwight Schrute got 13 “employee of the month” awards in a single year because he got 2 in February in lieu of a raise?

That’s what I mean.

AI will be used by owners either to gaslight your love into using it for them, or possibly to remove what you love and replace it with AI.

Which, for the record, is not “intelligent” any more than anything else in code.

Anything implemented is either to gaslight our love for this industry for profit, or to replace parts for profit.

It is no surprise at all.


You cannot in good conscience wish to do menial tasks for a living that AI can do for you, and instead of you.

That's like doing manual search-and-replace instead of using the editor's search/replace, or computing things in a spreadsheet by hand instead of using formulas.

If you think AI's output is suboptimal, fine. Maybe find a way to use/train AI better to produce good enough output. But simply wishing to out-robot a robot is pointless.


I think the parent's point is the exact opposite of what you have argued against. The author is saying they are afraid 'robots' will come for the fun stuff, rather than only being a tool for the menial labour.

Think of the recent generative AI artwork...rather than drawing, and creating, (some/most?) artists fear just being prompt-carvers. The fun of the art has been replaced with menial labour of crafting prompts.


I wonder how "artsy" kids growing up today feel. Getting good at visual arts takes a lot of time and practice and it must be very discouraging when your best output is easily beaten by a guy typing "some flowers in a vase in the style of Van Gogh" into Dall-E.


As an actual meatspace maker of oil paintings - AI excites me.

Someone can make a print generated by AI - and it will be beautiful I am sure.

So when someone wants a physical painting with impasto texture and painterly strokes… the “real” painters will for sure still make bank.

I mean - not me: I have only painted for 8 years and still “suck” - but I love it.

But Thomas McKnight will still paint his beautiful scenes and people will still buy them for what they are: art.

AI art is art. Meatspace art is art. Heck: I code for money. CODE is art. Software is art.


You can't expect to be paid for the fun stuff if robots do it better, unless you are looking for an artisanal craft career.

Artsy types doing art with basic tools such as brushes will be the same as hand-crafted assembly in programming: rarely needed, and not many people are hired to do it.


Fearful of becoming a prompt carver? Let's be real, that is the least of their fears.

Fearful that a prompt carver can replace 10 artists is the more realistic fear here. That is the essential trajectory being plotted out by this technology and the trajectory described by the article.


Can you provide some examples of aspects of your work that you're concerned about?


I enjoy learning about the design philosophy behind an API, or diving into a library to learn the "off the beaten path" workflows. That's where the magic optimizations or shortcuts live.

So far, Copilot just shows me the happiest-path options for everything I've tried, and really falls over when it's not something like "well, this API looks like API_functionCall, so your query for <get X> must look like API_getX". It rarely does.


Slightly related - there was a Reddit post about a person lamenting the changes in their job. [0] They make 3D models and they greatly enjoyed making models from scratch and taking 2-3 weeks to do so.

Now with AI, they use tools to create a model - and then spend 2-3 days to update it.

This is a productivity improvement, but it also took away a key thing they loved about their job. There isn't really a solution here at all, and the poster wasn't looking for solutions either. Just venting.

We'll be seeing this more in the future, and I wouldn't be surprised if multiple people change careers over it.

[0]: https://news.ycombinator.com/item?id=35308498


Part of me can definitely sympathize with this feeling. Part of me thinks "lol, our job is to provide business value using the tool that is software. Not 'getting enjoyment' out of manually twiddling bits." This is just a new layer of abstraction - I'm sure all the old school computer engineers felt slighted when the new software guys didn't even have to know how to solder flip-flops and transistors together.


This is a nonsense headline. This is the equivalent of "Factory automation won't steal your job, companies that buy factory automation will." Democratization of AI tools will have upsides and downsides but non researchers weren't playing around with AI text/chat generation before this last round of simple to use tools came out.


My thought was it read exactly like "Guns don't kill people, people who squeeze the trigger kill people." Technically right in a specific context but completely misses the point.


Even if the skeptics of AI are right, the gun/trigger analogy will still apply, because in the short term there is still a big group who will embrace AI, and they are the people who start pulling the triggers. They will have an advantage over those who prefer to stay samurai and not touch those guns.


What point does it miss?


The degree to which it enables undesirable behavior. A disturbed guy may want to murder everybody in a nightclub, but his chances of success go way up when he has a gun vs. a knife. A company owner may want to automate your job away, but without AI the automation is too expensive and unreliable.

The big concern with AI isn't so much the Terminator future, but the Cyberpunk future, where all wealth is concentrated at the top and economic mobility is stopped. A dysfunctional feudal system that is impossible to reform because the small handful of people who own everything also control the politics and the police.


Sounds like you took the headline to mean "There's no concern about AI", but in fact it explicitly highlights an important concern for many: if you fail to keep up, you will lose your job.

I am puzzled so many here seem to have not read the second half of the headline before commenting that the headline missed "the point". The point is in the second half.


You are still missing the point.

It currently requires 100 people with your job to get all of the work done. With AI this can be reduced to 1 person. All you serfs get to fight over who gets to be the lucky 1%er.

In the past when faced with a situation like this the solution is to find a different job entirely, but every time this repeats finding a different job becomes even more difficult. Eventually we have to start reconsidering how we have structured society, but this would ultimately be a loss of power and prestige for the current winners in society and since they have excess influence in politics it isn't going to happen.


If you keep up you may lose your job anyway, as productivity increases due to AI may just mean fewer jobs. That's the point that is made about AI taking jobs in general - this person seems to reference it in the headline but then dives into a different, unrelated argument, so is basically clickbaiting.


People willing to replace workers with lower cost options always existed; however, AI is an enabling factor.

People willing to murder always existed; however, guns are an enabling factor.

As I understand it, the parent's argument is that focusing on the person willing to do the action is not useful, because they were always there. What has changed is the enabling factor.


I would argue that the point is that AI WILL steal lots of jobs (directly or indirectly) and the directness of that theft is fairly irrelevant. :shrug:


It seems like you took the headline to mean something like "You don't need to worry about losing your job because of AI". That would make sense if you only read the first half of it. But the second half completely changes the meaning - it's saying you definitely do need to worry about losing your job because of AI (and maybe you should start taking action to avoid that fate). So I think you not only missed the point but interpreted it backwards.


Exactly, the headline should be "AI WILL steal your job!". How it will steal it is irrelevant to one's loss of job.


I think there is a big concern that this will destroy the middle class.


Middle class got hollowed out a long time before this came along.

Automation was always picked as a good scapegoat for the declining fortunes of the middle class because the inevitable forward march of progress is immune from pitchforks while oligarchs, landlords and bought politicians are not.


The headline is a pithy statement that those who ignore AI will be left behind, just as those who ignore automation were left behind. I don't think it means what you seem to think it means.


Ironically, the advice from some circles for those left behind by automation was "learn to code".


Which was good advice, and I would argue still is. But even if you think it is no longer good advice, would you at least accept it was good advice for a very long time, leading to a lot of lucrative careers?


I think the message here is that you are in control of what happens to you next. In this regard the headline and the post are fine. Well, unless you subscribe to the other philosophy camp (determinism).

The message is not about whether AI will change jobs or not; of course it will. It may eliminate some, reduce others, change most, and likely create a few new ones. The point is about how people choose to react to it, and specifically that burying your head in the sand and crying later that AI took the jobs away is not going to help; accepting that the world changes, and that you can do something about it, and then leveraging it, will.

Is it possible that some specific jobs disappear several years from now, just as was the case with factories, automation, etc.? Yes, indeed it is. They may be taken by people leveraging AI (your boss, your company, your colleague). You can ignore the fact, or accept it and learn to leverage the same tools for your benefit, or better - the benefit of society.

Lots of "analogies" are given in various threads here, like "Guns don't kill people, people who squeeze the trigger kill people" and "It is not the fall that kills you, but the landing". I do not agree. Sure you can argue any of the two sides, that's the whole point of these statements. They just serve to focus your argument on one side or another.


I think it’s absurd as well. Jobs won’t be stolen, different jobs will be created. At a certain point we will move past scarcity, and maybe we won’t have to labor to not die. What’s so important about a job? Humans need meaning, but meaning doesn’t have to be delivered by PayChex at the expense of the human soul.


>At a certain point we will move past scarcity, and maybe we won’t have to labor to not die.

Society seems to be globally showing, very decisively, that it won't allow that to happen. Productivity has already increased massively. Society could have already used that productivity to decrease the hours that each person has to work to survive. Instead, society's structure pushes those profits to the owners of the businesses that run the technology.

'Working class people must work full time' is still such a fundamental part of culture that it is literally the Federal Reserve's mandate to maximise employment.


Ultimately the overall number of jobs will decrease, in the same way that cars didn't create new jobs for horses.

I'm definitely not opposed to post-scarcity. I do wonder how you feel about the replacement of social "jobs" in the world. Models like GPT roleplay a girlfriend, parent, or colleague at least as well as they roleplay a developer or a scriptwriter. I don't think many people have those consequences in mind yet.


Not having to labor is attractive to someone who has a solid foundation of accumulated wealth - at least a house with a garden in a first world country inherited from parents.

For the rest of us, skilled labor was (and still is, for a while) a means to accumulate some wealth, get a place of our own to live, move to a different country we want to live in, and get some things like travel and hobbies that are not strictly necessary but pleasant to have. If labor doesn't serve this purpose any more, it's unlikely that something else will, not in our lifetime at least.


It's a fair point - health care isn't the only thing holding us in bondage. I'd note that I think humans will still work, but it'll be more directed towards causes that speak to the human (we see this emerging now as a generational demand that work be meaningful). I think, and now we are really ranging afield, that as societies move toward post-scarcity we already see reproduction rates drop materially, and some of these issues around "where to live" become less crushing. Additionally, as the need to be proximate to a human hamster-wheel containment facility (office) fades, we can utilize the broader infrastructure for us bags of mostly carbon and water, for home and garden.

I think the issue will be less about whether it’s possible or whether we won’t get what we want, but more about breaking down norms and inertia and biases and mental constraints and the need for accountants to keep us in hamster wheels for their moral view of the world.


I'm not as worried about AI "stealing" my job as I am worried that it's going to make my job unappealing.

As a software engineer, what motivates me most is the creative aspect of it. I like building and designing things. I like thinking about every line of code that I write. If people start outsourcing their coding to generative AI, and if I find myself feeling pressured to follow suit in order to keep up, then everything I enjoy about coding goes away.


I completely agree. Many of the recent articles and blog posts written by AI enthusiasts giving examples of how AI makes their job "easier" or "more productive" just demonstrates this to me even further. What they call "more productive" looks to me like "incredibly boring".

In addition, another point I don't often see mentioned is the alarmingly likely possibility that all AI-enhanced software engineering will go through these tools that are owned by 1 or 2 (American) tech companies. Are people really excited about a future where you can't even perform the basics of your skillset without depending on a Microsoft service? We are rapidly advancing toward a future where a single company owns the most popular text editor, the most popular code hosting platform, (one of) the most used cloud infrastructure platforms, and soon, if the AI enthusiasts have it right, the tool that will be a "necessity" to even write code at all.

If that future unfolds, I will leave software voluntarily long before I get "replaced".


Emacs will always be here.


It depends on what you are working on and your position. Isn't a lot of code repetitive? Test cases are mostly repetitive. AI can help you with that, and your productivity increases significantly.

I write code/programs mostly to make my life easier, and, well, I like to write code. And AI/Copilot just helps me write better code faster.


> And AI/Copilot just helps me write better code faster.

Consider that AI, by necessity, has to be trained on an aggregate of human output. That means that, by definition, the aggregate amount of data that it's trained on will be representative of the "average" solution to any problem. This doesn't mean that I can always do better, but I wouldn't assume that what AI will give me would be "good." Most developers I speak with who are using CoPilot and ChatGPT tell me that they always examine the output and have to "massage" it a little. But I don't see it jiving with my workflow, specifically.

In any case, if people find it useful then great! What works for others will not always be what works for me and vice versa.


The article can't even spell "ChatGPT" right! And randomly makes words plural like "white collars job" and "a sure things."

The author says they are a ghostwriter and is confident they can do a better job than AI, and according to their bio they have written 100+ papers... I wouldn't be so confident that they are better than AI if writing stuff like this is their best work!


There are two types of people:

1. People afraid AI will take their jobs

2. People using AI to take people's jobs


3. People using AI to take other people’s jobs, because they’re afraid AI will take their jobs.


I'm in this boat. I was worried about AI for a while then I realized I have to figure out how to use it or get left behind.

The problem is I am still having difficulty understanding how to use the tool effectively. Any advice would be much appreciated.


I think that's a good approach, and I've also taken it. I was skeptical, but now I'm evaluating its strengths and weaknesses.

But also keep in mind there's lots of hype and lots of productivity gurus praising it as the greatest thing, instilling FOMO in others.


My late-January 2023 prediction, perhaps mimicking the still-then-growing trend:

2023 will be the year that leaves the majority of humans behind.


4. People downplaying AI and ignoring AI completely.


The language is all wrong here. You don't own your job. You are not entitled to perpetual payment for whatever it is you are doing. A job is not a possession. You are not being robbed of your job. You simply do labor in exchange for money based on the value of that labor. And sometimes you do that under some long term arrangement with an employer and sometimes based on more short term arrangements. Either way, the arrangement is entirely premised on what you do having value and being worth paying for. So, if the value of what you do changes, you need to adapt. It's not being stolen from you because you never owned it to begin with.

The threat with AI for a lot of people is that what they do right now becomes less valuable. And people don't like change and feel threatened by it. Generally there are two solutions to that problem:

1) do something else that is more valuable or more worthy of your time.

2) figure out a way to scale what you do. For example by leveraging AI.

For many people it might be a combination of the two. And for some people it will indeed mean that they need to figure out something else to do.

But we'll still need to have an economy that is based on people earning some kind of income and then spending that income. The whole system grinds to a halt without that. If AI eliminates all the jobs, nobody would earn anything and be unable to spend. So, that's unlikely to happen. Economies are self regulating in the sense that people optimize for value.

In the end AIs are just tools that people use to create value for each other. If an AI can do everything by itself at a very low cost that just means that what it does has low value and is not something that people will spend a lot of money on. Meaning that other things will emerge that they will spend their money on that are more valuable. Those things are probably going to involve human activity that is scarce in some way.


This ignores the whole human part of a job.

Many people are wage slaves: if their job is gone, they are in big trouble. That's why they're scared of AI, and that's why so many news outlets count the potential job cuts: more fear means more clicks.

There's also a lot of inequality, and AI will push for a bigger gap, not a smaller one. If everyone is 20% more productive, does everyone get a 20% raise? Historically, no: the company gets the extra profit.

So there's more to discuss than free market 101s, which don't capture the whole complexity of the world.


3. People who may want to try out AI to see if they're 1 or 2 but don't want to give ChatGPT their email and phone number to do so. Also, are people uploading their company's private code to ChatGPT to get its help? Wasn't there a big issue with ChatGPT leaking info recently?


3. Virtual people. AI pretending to be human to prevent humans from taking their job.


Don't forget about humans pretending to be an AI so that other humans will be impressed with how far AI has advanced: https://www.forbes.com/sites/cognitiveworld/2020/04/04/artif...


Or people who just have no choice but to use the AI baked into their old tools.


Corporations are people.


3rd type of people: AGI.


It's a timescale, like all technological progress. Or even evolution.

So you're working in a games development studio as a visual artist.

At first you're producing renderings in Blender or whatever.

Then, AI is able to generate three-dimensional meshes for you. You are in the loop, deciding that these meshes are suitable, which textures work, which are the best, designing the prompts, etc. This is 3D midjourney.

At this point, you're now able to produce far more more quickly and it's likely that fewer artists are required within your company.

Later, AI is able to decide which models will be preferred by the higher ups, or by the playerbase, more accurately than you can as an artist. The AI has now replaced you as an artist. The software team or management simply ask the AI - "please produce me a set of fantasy character models for my game, with these characteristics". This is essentially GPT mixed with 3D midjourney.

I'd say we're not far off, currently, from this point.

Beyond that, perhaps the AI is then able to produce game rules, visual imagery, game engines, etc. better than your company can. The prompt "produce a doom clone with portals" results in a higher quality product than a team of humans working on it would produce.

We don't quite have that yet.

Beyond that, the AI may be able to determine which games will be valued by the market and thus what is worth spending effort on. It could end up outcompeting all game studios.

Move further and it may produce something which just up-ends the game industry entirely by producing a product which gamers prefer to spend their time on.

I don't think that people quite grok the idea of what human-level or beyond AI actually is. By definition superhuman AI is more able than you are. The question is whether we will actually end up with it; that it will outcompete us if we create it is tautological.


> The AI has now replaced you as an artist. The software team or management simply ask the AI - "please produce me a set of fantasy character models for my game, with these characteristics". This is essentially GPT mixed with 3D midjourney.

then about 6 months in people start noticing that every game looks exactly the same


... but how much will they care? A huge chunk of games on Steam are now running on Unity and feel fundamentally the same; that hasn't stopped people using the engine.

Do we play games for novel look and sound or novel experience or novel story? The answer is "All of the above."


novelty is probably the number one thing people like


Why?


With the progression of the technology, you may find there are not so many "in-between" steps as you expect in your current list. It will quickly go from the current stage "employee being more productive" to "steam, I want to play a game where <insert prompt here>". No need to generate art assets or have anything resembling a game engine, just pure interactive "imagination".


> No need to generate art assets or have anything resembling a game engine, just pure interactive "imagination".

In a way it's the death of imagination because the search space is limited to what has come before. Granted, large portions of the game industry have already given up on making anything really new.


I disagree; I think it will take more steps. Likely there will be phases where we learn to let the machine 'drive' the business, so to speak. I think there will be a lot of independent studios with very low funding using AI to show it's possible. However, I think the big studios are going to be very careful not to rock the boat. There will be business decisions in between these steps. If anything, I think AI will make things like games, music, and art commodities with the value of paper napkins. :(


If humans still exist, those things will still hold value. I can't say for sure humans will survive in the same quantities once all things settle, but let's imagine we are in a future where deep learning has matured and netflix, steam, facebook, etc are all now just "prompt terminals" that output your request.

Then, with infinite variations, there will be infinite duds. People will value the ones that aren't duds so there will be a market for curators. And there will be a niche where people will want to see the exact same output together and talk about it.

Can you imagine Facebook? "Facebook, please give me a friend that <prompt here>"


I agree.

My belief is that in order to communicate my intuition about this it helps to provide discrete steps which explain the transition.


This is like saying tractors won't steal agricultural hand-ploughing jobs, the people building the tractors will. Technically true. But... stupid.


It's the people driving the tractors who take jobs, not the ones building them. If you operate a hand-plough for a living when tractors come on the scene, you'd better learn how to drive a tractor. The same is true of AI.


Is it stupid, though? People are railing against the AI, not against the system of evaluating worth and distributing wealth, a system which implies that the advent of a miraculous labor-saving technology should be met with fear rather than excitement about expanding the capabilities of every human being.


I think this is right on. AI would be amazing if it didn't mean that some people will have an even harder time making ends meet. Why does every technological innovation that improves productivity end up shifting even more resources to those at the top of the ladder?


The factory worker bemoans his station as the new automated smelter they're going to install may run him out of a job. Before he goes upstairs to commiserate with his spouse, he throws his dirty clothes in the washer, nary a care paid to the previous generation's neighbor children who used to make all their money washing other families' clothes for them.

(Probably not too much thought should be paid to the fact that when that economic system collapsed, we instituted allowances for children).


Tractors and policy favoring bigger scale agricultural production.

The US is profoundly inefficient at utilizing land for agriculture. The "miracle of the desert" and the emptying of a giant, non-renewable aquifer have been carrying us for decades. In my lifetime, we'll be relying on small garden plots for vegetables, just like the Soviets did.


It freed them from doing something so brutal. I'm a glass-half-full guy; they are now upgrading their skills to drive tractors.


I suppose HN is unwilling to admit to itself the obvious societal demerits it is enabling. Lying to oneself is, I suppose, very human, and exactly what you would expect of someone doing something unethical. The same thing happened with crypto.


It’s the tech equivalent of the “guns don’t kill people, people do” argument. Pretty inane.

This stuff is going to nuke thousands of knowledge workers, either directly or via services. Is that a bad thing for society? Sure. Does that matter? No.


I for one can't wait for more contentless boilerplate to appear in every email, pull request, "customer service" conversation, etc


Hi blibble, I notice you wrote an insightful comment so I decided to upvote it for your pleasure. Keep it up and make sure to follow me on all my social media.

Sent from my GPT4 assistant


It's like inverse compression:

Employee: Please expand these three bullet points into an email to my manager.

Manager: Please summarize this email from my direct report in three bullet points.


Well it isn't happening today. Just spent 20 minutes of my life chasing a GPT hallucination. No doubt that it will be more accurate sometime in the future but right now I'm faster without it.


I hate this take.

I have no idea why some people believe that in the future, years of experience in a field like software engineering will no longer be a marketable skill, but typing some stuff into a text box will be.

It's like an elevator operator arguing that the automation of elevators will make their job easier, overlooking the fact that if literally anyone can do the job, then they're probably not going to be needed.

I'm not saying there won't be people in the future typing commands into an input field either, but the point is that this doesn't take any real experience and if it is a job it will be paid extremely poorly. Yes, in the present it might make you more productive compared to your co-workers, but in a few years don't be surprised if other people learn they can also write text into an input field and they might just decide to do it themselves.


I think this is right. There is no AI in the foreseeable future that would be able to understand a complex business, prioritise and roadmap some future strategy, and convert that into business requirements detailed enough to implement as working software, architect a proper solution, and then work with an AI to develop and verify the software through an incremental development process. But I do believe that there are some people who could work with GPT 6 or 7 to achieve that. And those people would be productive enough to replace a whole team of product owner, architect, designer, developers, etc. and will therefore become very rich.


You can outsource everything right now to some dirt-cheap offshore team. I doubt that outsourcing alone will make you very rich.


Outsourcing to a team of cheap offshore developers has immense coordination costs for whoever is doing the outsourcing: communication, clarification, verification, etc. Outsourcing to an AI has virtually none: feedback is instant, you can check understanding easily, you can specify exactly what you want to get back. And the costs are dramatically lower than even the cheapest of outsourcers. Whoever can orchestrate the activity of an AI will capture nearly all of the value.


Everyone’s worried about AI taking their software jobs, yet humanity faces existential problems around climate change, poverty, failed states, etc. And huge opportunities like space exploration and better health care.

Maybe this has more to do with our ego - losing perceived status as masters of the universe, when in reality hard, intractable problems exist more than ever. Why aren’t we changing to work on them?

Maybe the cost of software development has been actually a barrier to solving many of these problems, and society may be better off without that cost?

Are we the baddies?


I think that ego/status is a big part of it. And the loss aversion bias. Our thought process is "This thing is going to be able to do software as well as a human, therefore it is going to take my job." When a more optimistic and probably more realistic take is "This thing is going to do the boring parts of software, so I can spend my time building out the more interesting, big-picture parts of code. I'm going to be so much more productive." But the fear of loss is so much more distracting than the potential gain. ... And what is it that we fear losing? Status! Software development is in high demand, it's a sought after role, we feel "special".


My unpopular opinion is that people adopting this type of machine learning will see some benefits and some speed-bumps and obstacles. As GPT iterates through a myriad of versions, improvements, restrictions, and filters, businesses will have to work around legal challenges, shadow biases, confidently incorrect interpretations, and of course dependencies on a rate-limited cloud service, unless all future iterations become open source, self-hosted, and distributed and everyone can agree on what data sets to ingest and what should be filtered/excluded for legal, accuracy, and other reasons. Businesses taking this on early will need to be ready to quickly change course on the fly, especially as new legislation catches up.

I think the biggest obstacle could be the inability to show one's work when legal issues arise. There isn't a "debug last answer" to get forensic data on how the answer was achieved. I am curious how a cranky judge may respond to "because the AI said so".

Another potential risk could be if shadow biases (dynamic filtering, dynamic algorithms, dynamic tuning based on the social, economic, or political preferences of the ML operator) get too aggressive and people start to realize their financial decisions are being manipulated and impacted artificially by automation, even more so than occurs today on social media platforms. I do not know how businesses or governments may react to this.


I've already stolen 4 jobs for myself. I was getting ready to hire a team for my project, but now I don't have to. However... I didn't have money to hire that team in the first place, so the project was actually stalled to the point of collapse, and those 4 jobs never would have existed.

Now with AI I have enough capacity myself to push it further, and so perhaps I'll get to the point where I can actually hire people to work on the project. In that sense, AI will be responsible for those jobs being created.


Can we please adjust the framing about jobs? Another employee will not steal your job.

Your employer may “steal” your job and give it to someone who convinces them they can do it cheaper.

In the new era of ai-augmented white collar work this process will get faster.

But let’s remember it’s always the employer who makes the decision and stop using language implying the existence of “job thieves” of some kind and completely excluding the employers’ choice in the process.


I don't care how we frame it so long as the framing is consistent. What really gets me itching to fire up the guillotine is when people flip flop between stealing and progress based on who's doing it and who it's happening to.


What's this constant obsession with "stealing jobs"? The tech industry has been dead for something like 30 years. There is zero intellectual design being put into it. Music is just a bunch of sellouts. You know how "content creators" do that cringy shit where they try to base their life around monetizing their "content"? That's how "musicians" worked before the internet. I couldn't care less how their jobs are threatened. Journalists are obviously no better. I don't miss Blockbuster. I don't miss PSTN operators. I don't miss building pyramids by hand.

Work/money itself is an absurd concept and not something to spend 100% of our time worrying about. As technological capability increases, there will just be a point where food and everything else will be handed out for free. And mundane people will still be worse off by their own choosing. And perhaps some shoddy company like Amazon, Google, Apple, etc. will be the one doing this, so they will use it to enforce their political viewpoint instead of honoring the non-aggression principle.


> As technological capability increases, there will just be a point where food and everything will be handed out for free.

that's not the trend line I'm seeing.


Most jobs are worthless and shouldn't be artificially preserved. The end game of AI is just doing nothing and still having a roof and food. It's not like we're going to have human manual labor picking apples. At which point all of these people who sold out and made their what would otherwise be an intellectual job into something profitable, will have no value.


yeah, no way "food will be handed out for free" when you can make a profit by selling it; they'd rather throw it away (as they do right now) than give it away for free


At some point you just have everything you need to survive automated. At that point there is no such thing as profit anymore aside from hookers. The only viable way I see someone hoarding it is because they see themselves as the new god because they were the first to automate the full chain. In that case they will claim humans are bad and try to police them like what we get with every single wingnut idiot these days.


Work/money is accounting. Work feels bad, money feels good: now go maximize, you replicator.

> And mundane people will still be worse off by their own choosing.

As they always have. Crying about the environment we all live in and co-create is myopia transformed into arrogance.


Contrary conjecture:

As with many "productivity enhancements", these AIs[0] will streamline some work to the point of practical elimination, but will in the end make more work, by enabling higher-level tasks to be doable instead of just imagined.

It seems a lot of the missing "productivity enhancements" from the PC and internet revolutions don't appear because people didn't get fired on net; they just started doing different tasks (TBD whether those are actually useful, but evidently most managers think they are). Oblig car analogy: more highways induce demand for travel by making it easier, so the net result is that traffic is worse; similarly, this may induce demand for work.

[0] These are artificial, but not intelligent; they are predictive models with insane amounts of ingested data. Whether for text, code, or images, this speeds production in middle-of-the-road cases, but it cannot figure out what to produce. There's no more evidence of reasoning here than an image in a mirror is evidence of skilled kinesiology. With an actual reasoning AGI, the conjecture would be quite different.


Cop out.

You can replace AI with X, where X is any significant climate-changer, and it will always be true.

What AIGC is capable of today will displace a large portion of the workforce, and many of those workers won't be able to keep up with the change (be it because of age, or because skills that were valuable are now nullified by AIGC).

Chaos is a ladder but that doesn’t justify the pretenses of Darwinism.


If there are 100 manual jobs today and 80 AI-assisted jobs tomorrow then AI has "stolen" 20 jobs. There's really no other way to frame it. The added productivity gains are going into someone's pocket, and if you are in the labor class then that someone isn't you.


AI is yet another technology that can help you escape the labor class.


Surprisingly, there is no apparent discussion here about the fact that this argument parrots, precisely, the rightly-derided original, "guns don't kill people, people do."

There is an interesting way in which this is different, though: the current lack of agency on the part of AI is going to prove a very short window, most of all for the sort of agency at issue here, namely "who" is deciding which jobs to cut.

Anyone with a loan or insurance denied on the basis of "what the computer said" would rightly already contend that there is sufficient agency to yes, steal your job.


"Guns don't kill people. People who use guns kill people"


It's not the fall that kills you, it's the landing


It sure will widen the already existing gap drastically.


I am finding the combination of LLMs and strongly-typed functional programming to be incredibly potent.

By building programs out of small, single-purpose pure functions, it is much easier to review the output of the LLM, whilst also giving me the ability to grok and steer the overall direction of the code. A traditional type-checker catches many of the looks-right-but-is-subtly-wrong hallucinations too.
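
A rough sketch of what I mean (illustrative TypeScript only, not my actual codebase; the branded Cents type and splitEvenly are invented names): I ask the LLM to fill in one small pure function against a precise signature I wrote, and the compiler rejects most of the plausible-but-wrong drafts before I even read them.

    // Hypothetical example: the LLM fills in a single pure function
    // against a signature I wrote. The types catch many of the
    // looks-right-but-subtly-wrong drafts before I review them.
    type Cents = number & { readonly __brand: "Cents" };
    const cents = (n: number): Cents => Math.round(n) as Cents;

    // Pure and single-purpose: split a total into n near-equal shares
    // whose sum is exactly the total (no lost pennies).
    function splitEvenly(total: Cents, parts: number): Cents[] {
      const base = Math.floor(total / parts);
      const remainder = total - base * parts;
      return Array.from({ length: parts }, (_, i) =>
        cents(i < remainder ? base + 1 : base)
      );
    }

    // A draft that returns plain number[] fails to type-check, and a draft
    // that forgets the remainder fails this one-line property check:
    const shares = splitEvenly(cents(1000), 3);
    console.assert(shares.reduce((a, b) => a + b, 0) === 1000);

The names and numbers are made up, but the workflow is the point: small, type-driven pieces that are cheap to review one at a time.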


Interesting anecdote: I was interviewing a full-stack developer, and I asked him to build a simple React UI. He was allowed to use Google, but couldn't come up with a solution. He then said that if he had access to ChatGPT, he could've solved it. We didn't hire him, but should that be an expectation? Can you use ChatGPT in an interview?


Kinda disagree. AI will eventually be built into all tools where appropriate, and you will use it whether you like it or not. Simple example: Photoshop. You may not know it, but it uses AI. With ChatGPT etc., it will be in more tools. So relax, most people won't need to lift a finger to leverage AI; it's built in and used by default.


I think it is important context that the post is written by a renowned Italian mathematician and computer scientist, and Italy just banned ChatGPT. I feel that the post is not about AI (or people using AI) taking jobs, but about whether it is a good idea to blindly ignore that fact.


I'm seeing a lot of AI doomerism. I'm incredibly optimistic about AI. Recently ChatGPT helped me debug a Neovim/Microsoft Language Server bug related to symlinks - I couldn't have done it on my own, but asking the right questions helped me tease out a production bug related to gopls.


"Computers will not steal your job. People leveraging computers will steal your job."

—1963, 1973, 1983, 1993...2023


No shit, Sherlock.

"Video streaming won't hurt DVD sales, people preferring video streaming over DVDs will."

"Online news won't hurt paper print, publishers publishing online instead of in print will."

"Guns won't hurt you, people shooting you with guns will."


Reassuring read from someone unable to consistently write "ChatGPT" vs "ChatGTP".


I've been using GPT-4 to translate my app into new languages. Saved me thousands of dollars in the past few weeks.

I used to pay translators for that, but GPT-4's translation is so good. I'd argue that for domain-specific content it's even better than a translator.
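
For anyone curious, the core of the workflow is roughly this (a simplified sketch: the prompt wording and types are illustrative, and the real script batches strings and double-checks placeholders):

    // Rough sketch of the translation pass (simplified; the prompt and
    // helper here are illustrative, not the exact production setup).
    type Strings = Record<string, string>;

    async function translate(strings: Strings, targetLang: string): Promise<Strings> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4",
          messages: [
            {
              role: "system",
              content:
                `Translate the JSON values into ${targetLang}. ` +
                `Keep the keys and any {placeholders} unchanged. ` +
                `This is UI copy for a consumer app; keep it short and natural. ` +
                `Reply with JSON only.`,
            },
            { role: "user", content: JSON.stringify(strings) },
          ],
        }),
      });
      const data = await res.json();
      return JSON.parse(data.choices[0].message.content) as Strings;
    }

    // e.g. await translate({ greeting: "Welcome back, {name}!" }, "German")

Passing a whole JSON object at a time also helps keep terminology consistent across related strings, which piecemeal translation tends to miss.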


How do you know the translations are accurate if you don't speak the native language?


I compared it to past translations the translators did and also my native language. That's how I gained the confidence it'd do just as well with new languages.


Seems like a pedantic argument. Obviously there is always a human behind it unless it's sentient and can make its own decisions, at which point there are much bigger ethical considerations to discuss first.


But that's the point of stating it, isn't it? The people like Yudkowsky or Musk who warn against AI are typically worried about the latter scenario, not about "it will take jobs".


Unless we expect a grassroots AI luddite movement, this is hair splitting.


With the cost of living in the US being completely out of control, and the potential for AI to gut wages in a number of high paid areas, why are we not expecting a new luddite movement?

Or is this another case of not understanding why the luddites broke the machines in the first place?


People only think of the Luddites as stubborn and trying to hold back "progress" and not as the entire class of people whose life's work, passion, personal identity, and means of supporting themselves, was stripped away with no replacement.

If you spent decades polishing your skills, took pride in your work and in being a known expert someone others could rely on, and made a comfortable living for yourself and your family doing it, it would be horrible and devastating to watch it all slip through your fingers like sand. They weren't even given the comfort of still having a good income and comfortable life. The money they would have made for their hard-earned work and skills was siphoned off by distant people in suits and jewelry who never had to put in a drop of sweat. They instead received such mockery that their name itself is now a derogatory term.


Most people don't work in these high-paid areas, and there is no revolution in sight. Being newly impoverished is less pleasant than being poor, and we can expect some accidents there, but eventually people will probably adapt to the new normal.


Americans are too divided to organize against something like this in any way.


play hype games, get hype prizes


And realize that as workers become more productive, wages will not increase. More wealth will be created per unit of time, but that wealth will likely not be realized by the average worker.


So, assuming the same output is required, if people wielding AI are more productive, you'll need fewer people.

.

But we're missing an important point, as far as software development goes, or at least web development and user-centered apps, if not many other situations: we will need less software, not more, and the software we will need will be much more infrastructural and 'industrial'.

For example, currently I might go to a units conversion website to do a quick cooking calculation. chatGPT.

I go to a weather website (always via google, because I never remember the website names) to check the weather. chatGPT.

I want some information on my stocks, and their outlook. chatGPT - can summarize the price movements and the 'buzz' around said stocks without wading through pages of web UI.

I want to book a holiday. Instead of wading through ad- and promotion- laden booking sites, chatGPT.

The author's example of 'writing articles'. Why bother reading them? Unless it's reading for pleasure, do I want to seek out a particular publication and wade through a load of fluff to get to the meat of what I want to inform myself about? No, just ask chatGPT.

Sure it's going to have to learn to present results neatly in tables and pictures and so on, and it's going to have to use plugins to get actual answers to some things. And I don't want to have to type at it all the time, so I'll want some 'promptmarks' or something more sophisticated.

But all these websites that we're worried GPT will write for us, unemploying developers - it's 'worse' than that - the sites won't be needed to start with.

As for other stuff, say news, why bother getting chatGPT to write a story for a page on NYT or whatever. I can just ask GPT 'what's up' and it'll tell me the news.

Sure for all this, it's not quite there yet, but I think the writing is on the chat-box, er, wall.

.

The other main issue being missed is capitalism. We've enjoyed a golden age of creation where basically all you need is a laptop and some open-source software, and you can code, write articles, draw artworks or whatever. Albeit you have to use infrastructure to distribute that code or make it available online etc.

With AI, the means of production will suddenly change from a laptop, to multi-million dollar models, owned by someone else. That could have a profound effect on who gets to create, and under what conditions.


1 person leveraging AI will steal 10 jobs.

It’s not a 1 to 1 replacement.


AI won't steal your job, it will legally take it.


Everyone who knows what they are talking about has already been saying this. This is what the narrative actually is. Not AI replacing humans.


strange wording

> It is not AI that will end up stealing your job,

> it will be stolen by people that have learnt to use AI to become more productive

This is by definition AI stealing your job. AI will not come and say "hey, move on, I will do this job"; everything is accomplished indirectly.

By the same logic you could say there are no bad politicians, it's the people following their advice who are bad.


"AI don't take jobs from people, people take jobs from people."

We hear this kind of argument from gun lovers all the time.


Ah, the old "guns don't kill people" argument.

That will distract us all from the dangers of weaponized AI.


Why should we listen to someone who cannot even spell "ChatGPT" right for opinions on AI?


Instead of OmniTron3000 stealing your job, it will be Joe AND OmniTron3000 stealing your job.


But everyone knows this, right? It's been happening now for months and the fallout has been interesting. The people who refuse to use it are either at the top of something (money/IQ/EQ, etc.) or will not make it (whatever their current status, job, and income level), as they can trivially be replaced soon (or even now).


I'd love if AI automated the 'talking to people' part of my job for me


This is a useful distinction, I think, to make: the current hype cycle and headlines are making it sound like we have invented autonomous, intelligent beings with super-human abilities of their own.

"AI," is such a poor, non-descriptive term that it allows charlatans and click-bait to proliferate and scare non-technical, uninformed people. They can bend their claims any way they want.

The people using these technologies under the banner of "AI" definitely want you to believe it's a revolutionary, fundamental, paradigm-shifting technology that will utterly change society: and they're the only ones that can bring it to you.

This isn't a public, social project for the good of society. We're seeing classic capitalist moves to seize first-mover advantage in leveraging this technology.

Too much of this hype-cycle shares the same characteristics of the crypto-bubble from last year: it's revolutionary and inevitable and will completely change society!

We need to educate people on what these technologies actually are and not let people sell the public on speculation and science-fiction "what ifs."


AI won't steal your scrolling, this website will


Yeah, the billionaires who have armies of AI bots.

Eventually, anyways.


Stopped reading when I got to “ChatGTP” tbh.



