Ask HN: Burnout because of ChatGPT?
163 points by smdz on Aug 14, 2023 | 192 comments
TL;DR (summarised by ChatGPT) - I'm experiencing increased productivity and independence with ChatGPT but grappling with challenges such as lack of work-life boundaries and overwhelming information, leading to stress and burnout.

Long story...

I have been using ChatGPT for a while, and moved to the Plus subscription for their GPT-4 model, which I must say, is quite good.

1. ChatGPT makes us very productive. Personally, in my early 40s, I feel my brain is back in its 20s.

2. I no longer feel the need to hire juniors. This is a short-term positive and maybe a long-term negative. [[EDIT: I may have implied a wrong meaning. To clarify - nobody is being let go yet because of ChatGPT. It is just raising the bar higher and higher. What took me years to learn, this thing can already do, and much more. And I cannot predict the financial future of OpenAI or the markets in general.]]

A lot of stuff I used to delegate to fellow humans is now being delegated to ChatGPT. And I can get the results immediately, at any time I want. I agree that it cannot operate on its own. I still need to review and correct things. I have to do that even when working with other humans. The only difference is that I can start trusting a human to improve, but I cannot expect ChatGPT to do so. Not that it is incapable, but because it is restricted by OpenAI.

And I have gotten better at using it. Calling myself a prompt-engineer sounds weird.

With all the good, I am now experiencing the cons: stress and burnout:

1. Humans work 9-5 (or some schedule), but ChatGPT is always available and works instantly. Now, when I have some idea I want to try out - I start working on it immediately with the help of AI. Earlier I would just put a note in the todo-list and stash it for the next day.

2. The outputs from ChatGPT come so fast that my "review load" is too high. At times it feels like we are working for ChatGPT and not the other way around.

3. ChatGPT has the habit of throwing new knowledge back at you. Google does that too, but this feels like 10x Google. Sometimes it is overwhelming. The good thing is we learn a lot; the bad thing is that it often slows down our decision making.

4. I tried to put myself on a schedule for using it - but when everybody has access to this tech, I have a genuine fear of missing out.

5. I have zero doubt that AI is setting the bar high, and it is going to take away a ton of average-joe desk jobs. GPT-4 itself is quite capable and organisations are yet to embrace it.

And not least, it makes me worry - what lies ahead with future models? I am not a layman when it comes to AI/ML - I worked with it until a few years ago, in the pre-GPT era.

Has anybody experienced these issues? And how do you deal with them?

* I could not resist asking ChatGPT the above - a couple of strategies it suggested were to "Seek Support from Others" and "Participate in discussions or groups focused on ethical AI". *




> 2. I no longer feel the need to hire juniors. This is a short-term positive and maybe a long-term negative.

> A lot of stuff I used to delegate to fellow humans is now being delegated to ChatGPT. And I can get the results immediately, at any time I want. I agree that it cannot operate on its own. I still need to review and correct things. I have to do that even when working with other humans. The only difference is that I can start trusting a human to improve, but I cannot expect ChatGPT to do so. Not that it is incapable, but because it is restricted by OpenAI.

I think this point bears repeating.

The threat of these models isn't that they'll go all Skynet and kill everyone, it's that they'll cause a lot of economic devastation to people who make a living through labor requiring skill and knowledge, especially future generations of skilled labor. Then there will be a decision point: either the senior-level people who thought they were safe get replaced by a more-advanced model, or they don't and there's a future society-level shortage because the pipeline to produce more senior-level people has been shut down (like the OP is doing).

The only people who will come out (relatively) unscathed are the ownership class, like always.

Of course, this is inevitable because it's impossible to question or change our society's ideological assumptions. They must be played out until they utterly destroy society.


> Then there will be a decision point: either the senior-level people who thought they were safe get replaced by a more-advanced model, or they don't and there's a future society-level shortage because the pipeline to produce more senior-level people has been shut down (like the OP is doing).

Or, for every junior that isn't hired by a business that can't expand its portfolio to exploit greater productivity, or can't figure out how to effectively use LLMs across the experience spectrum, two will be hired in shops that can do those things. As with previous software dev productivity increases, greater productivity in the field will mean a broader range of viable applications and more total jobs across all experience levels.


>Or, for every junior that isn't hired by a business that can't expand its portfolio to exploit greater productivity or can’t figure out how to effectively use LLMs across the experience spectrum, two will be hired in shops that can do those things

And everybody also gets a pony! Win-win-win situation!

Previous "software dev productivity increases" happened as computing saturation itself increased from a hanful of mainframes to one in every office, then at every desk, then a few in every home, and later one in every hand. Now it's at 100% or close.

It also still required computer operators. LLMs are not merely increased productivity for a human computer operator, but automation of productivity so that it can happen without an operator (or with far fewer).

Moreover, all this "increased productivity" still left wages stagnant for 40 years (with basic costs like housing, education, and healthcare skyrocketing). It's not like more of it, in the same old corporatist context, bodes better for the future...


> LLMs are not merely increased productivity for the computer operator, but automation of productivity so that it can happen without an operator (or with far fewer).

Enabling the same production with fewer workers (or, equivalently, greater production with the same number of workers) is the definition of a productivity increase, not something that constitutes a difference in kind from a normal productivity increase.

> Moreover, all this "increased productivity" still left wage stagnant for 40 years

Not in computing it didn't. Pay for the same job category rose in real terms over almost any window you choose in the last 50 years, and in most cases the distribution of jobs also moved over time from lower-paid to higher-paid categories within computing.

Also, even general real wages didn't really stagnate for 40 years. Average (mean) wages dropped slowly for about 20 years, from the mid-70s to the mid-90s, and have mostly climbed slowly since, crossing the previous peak roughly 30 years after it. The same effect isn't seen in median wages (though those were also low in the early 1980s and most of the 1990s, before mostly rising strongly) or in median personal income (which, despite short drops around recessions, has been rising consistently since the 1981 trough).


>Enabling the same production with fewer workers (or, equivalently, greater production with the same number of workers) is the definition of a productivity increase, not something that constitutes a difference in kind from a normal productivity increase.

Of course. But "greater production with the same number of workers" vs "the same production with fewer workers" is already a difference in quantity (of both production, and, the thing pertinent to the discussion, of workers).

And there's also "greater production with fewer workers" - where you get to have your employer pie (fewer workers) and eat it too (still get greater production).


Yes, but you’re just reiterating the basic Luddite argument, and taking IMO a blinkered perspective by looking at it from the perspective of a single firm rather than the economy as a whole. Often, increasing labor productivity in a given category actually increases the total quantity of such labor demanded in the economy due to the Jevons effect. If you increase output per worker by 10x, sometimes the product becomes cheap enough that the quantity demanded goes up by 100x and the economy ends up needing 10x as many workers. (Numbers for purpose of illustration of course.)

In other words, sure, there’s a limiting factor in terms of how much software the world actually wants, but the more software we can produce per programmer, the cheaper software gets and the more software the world wants. There is still eventually a limit here, but it is a lot farther away than it looks.


I'm not sure why you're getting downvoted so much. This is literally what Henry Ford and others did. He raised wages for staff to be the highest in manufacturing, then improved their throughput by streamlining and removing waste, then he lowered the cost of cars to customers, changing the market from a few sales per year to only the ultra-wealthy into what it is today, thus expanding the number of workers needed.


Did you run this by the risk, global relations, and legal guys/gals?


> Previous "software dev productivity increases" happened as computing saturation itself increased from a handful of mainframes to one in every office, then at every desk, then a few in every home, and later one in every hand. Now it's at 100% or close.

previous 'computing saturation is at 100% or close' pronouncements go back to 01953 https://geekhistory.com/content/urban-legend-i-think-there-w...

this human body weighs about 100 kg and occupies about 100 liters. it contains about 30 trillion cells, each of which contains something like 10 million ribosomes, 300 quintillion in all, which are programmable machines that construct proteins by executing a digital program on a dna tape, although the program is quite limited, more like a player piano roll than a computer program

current sram bits are on the order of 20 nanometers in diameter, about the same size as a ribosome. this is clearly feasible because these devices are already in mass production. you need on the order of 4096 of these, plus a roughly equivalent number of transistor-like switching elements (which are smaller), to get something we would recognize as a computer (i.e., you can program it in c or basic or assembly rather than verilog or abel or something). multiplying by a safety factor of two, we're talking about 32768 20-nanometer-sized elements, which is a cube 640 nanometers on a side (32768 is 32 cubed, and 32 × 20 nanometers = 640 nanometers)

obviously (?) such a computer can't be manufactured by current manufacturing techniques, but also obviously it will work once you figure out how to make it. quite likely you can improve on that by orders of magnitude; ribosomes, after all, are considerably more capable than a 1-bit memory cell

you can fit 400 quadrillion such computers into the space of the human body, a bit over ten thousand per cell

so quite plausibly we are still 20 orders of magnitude away from personal computing saturation, and after another 12 orders of magnitude people will be saying things like 'then a few in every home, later one in every hand, and finally one in every cell. now it's at 100% or close'

even this is overly pessimistic, though. the obviously-workable computer outlined above weighs about 2e-16 kg, and jupiter weighs about 2e27 kg, so if you convert jupiter into computronium, you can get about 1e43 computers, roughly 1e33 computers per currently living person. and the milky way is about 1e12 solar masses, which works out to 2e42 kg, so a milky way of computronium would be roughly 1e58 computers

converting most of the milky way to computronium is less obviously feasible or desirable than putting anticancer robots in every cell but it suggests that we're closer to 48 orders of magnitude from computing saturation

as for corporatism, corporatism seems very unlikely to become established in the current political environment outside of backwaters like argentina. of course, the future is enormously unpredictable, but to me corporatism seems like an idiosyncratic response to the political conditions of the 01920s


>previous 'computing saturation is at 100% or close' pronouncements go back to 1953

Yes, pronouncements can come early. They can also come at the point in time that they hold. Previous pronouncements having come early doesn't mean the same pronouncement will never hold.

In any case, those pronouncements didn't match a 1:1 (or even 3:1, considering laptop + work computer + smartphone) ratio of computers to people.

>obviously (?) such a computer can't be manufactured by current manufacturing techniques, but also obviously it will work once you figure out how to make it.

In any case, fitting "400 quadrillion such computers into the space of the human body", even if possible, doesn't require 400 quadrillion programmers. Or even necessarily that many programmers. After all, the number of programmers generally doesn't scale with the count of CPUs (that correlation is weaker), but with the number of individual software programs.

Such a development might not even require more programmers than there are today. In fact, if LLMs improve as much as from the original GPT to GPT-4, or (even worse) if AGI is achieved before those nanocomputers, their software might require exactly 0 programmers.

In any case, the eventual (?) achievement of those "400 quadrillion such computers into the space of the human body" (while still waiting for flying cars, robot servants, and cold fusion) would be so far ahead as to make the point moot regarding the job prospects of programmers in the industry, given the rise of LLMs in the next 30-40 years.

>as for corporatism, corporatism seems very unlikely to become established in the current political environment outside of backwaters like argentina.

Outside of backwaters? Corporatism has been the status quo in the US for several decades now...


i don't see how saying that 20-nanometer-diameter sram cells and switching elements will work could be any farther from handwaving; they're already in mass production, forming the bulk of shipped processors for cellphones and laptops this year

the only handwaving in the bit you quoted consists of saying that it's not possible to build computers where the entire computer is less than a micron across with current manufacturing techniques

it's certainly true that computers don't require programmers. i think it's easier to reason correctly about the issue the other way around; programmers can do anything, but they require computers to do it

50 years ago, if you wanted to cycle the current in a voltammetry lab setup or make your windshield wipers intermittent, you designed a circuit. if you wanted to get a screw machine to cut a new kind of screw, you probably cut some new cams out of steel sheet. if you wanted to retard the spark timing on your engine ignition, you adjusted a screw

now in all those cases you just write a program, or perhaps even change some parameters to a program, because all those things are controlled by computers now. so suddenly you have lots of programmers working in these areas

today, if you want a ditch dug, you don't write a program; you rent a backhoe or pick up a shovel. but that's just because your dirt isn't programmable yet

the flying cars problem is well documented to be a question of regulatory obstacles and governance, not technical capabilities. lots of people do fly ultralights today, you can find videos on youtube

will the same problem force you to move your dirt with a shovel 50 years from now instead of just telling it where to go? yeah, plausibly, but that's just a question of amish-style or tokugawa-style rejection of technology, not a question of saturating the possibilities

as for corporatism in the usa, it definitely isn't a thing. possibly you just don't know what corporatism is https://en.wikipedia.org/wiki/Corporatism


Are you by any chance a paperclip factory? /s


ask again in 53 years


Typically that's not how these things work, at least not in the time frame in which you don't starve to death.

Leading up to 2008 you'd think the market would optimize for lenders that checked who they were giving loans to. But that's not what happened. The idiots kept giving out shit loans until the entire market burned down taking out good and bad lenders alike in the aftermath.


You can witness this happening in the trades right now. A whole generation of people were told to go to college and to avoid the trades, and now here we are in possibly the most significant manpower drought the trades have ever experienced. And this has a ripple effect as the older generations retire out and take their hard-won experience with them, with nobody to pass their knowledge onto. Can't tell a carpenter to go type that shit into Confluence, let alone tell the kid to look in the knowledge base first.


And yet the trades still have uneven access (or none at all) to health coverage, retirement planning options, etc.

As an American parent of young children, I keep being told that college is a scam and I should steer my kids toward the trades. 90+% of the time, I am being told this by a white-collar worker who went to college themselves, and is just bloviating.

When we reach a real crisis point, severe enough to actually consider granting skilled tradespeople access to a fraction of the privilege enjoyed by white-collar workers, then I might consider nudging my kids toward electrician or plumbing work. But under the current social caste system, of course I am going to do everything possible to give my kids access to college and steer them that way.

I believe that virtually everyone, white-collar and blue-collar alike, quietly feels likewise. We make a pretense of giving contrary advice, but mostly just in hopes that other people will move in that direction for us. To take the bullet and help with this imbalance, and also to relieve the intense competition our own kids face.


> I am being told this by a white-collar worker who went to college themselves, and is just bloviating.

Exactly. When I talk to plumbers, electricians, etc., many of them express the desire to leave because the hours and environments are hellish. Meanwhile some full-of-themselves tech bro is babbling on about how everyone (not them, of course) should go into the trades. Or they pull some vague anecdote out of their ass about how someone they know makes a gazillion dollars in the trades after 20 years and starting their own business, which is about as valid as telling someone to go into software development because they can become a billionaire, throwing out some anecdote about a startup founder they know who got acquired.


My read is that the people who do well in trades are smart, hard-working and ambitious. That combination of traits tends to do well no matter where they are applied.

While there is plenty of money to be made in the trades, one thing that gets ignored is, as you said, the working conditions. Further, those working conditions compound over the years and absolutely wreck bodies.


Sitting at a desk for 8 hours a day will wreck your body too, don’t worry about that.

The happiest I’ve been in my life is spending about 2-3 hours a day at a desk. It’s a shit life but we don’t see it like that coz we love sitting on our ass.


> Sitting at a desk for 8 hours a day will wreck your body too, don’t worry about that.

But not anything like 8+ hours a day of manual, repetitive, physical labor!

Come on, there's no comparison to desk work.


> Sitting at a desk for 8 hours a day will wreck your body too, don’t worry about that.

Except that can be trivially counteracted by getting up to stretch every hour, taking a 20-minute walk at lunch, and hitting the gym a couple of times a week.


>wreck bodies.

I think how hard the trades can be on your body is underappreciated.


It is an interesting game we are playing as a civilisation, since without people skilled at making our material environment, the quality of life we enjoy will most likely drop.

We seem to have structured things in a way where what is individually optimal and desired is very opposed to what we need at larger scales. It does seem like the system is maintained only by inertia at this stage.


My brother in law is an electrician.

He does not have paid vacation, good sick leave policies, or good health insurance through his employer. He has witnessed a bunch of on-the-job injuries and one near-fatality, largely caused by his employer pushing hard for the team to complete jobs as fast as possible. He is paid alright, but less than the norm for the people I know with college degrees even after we exclude everybody in software. His job is also physically demanding and may cause problems later in life.

Not exactly a "hey, pick this job and you'll have a great career" story.


The only solution is to rapidly scale these up so that they can disrupt every aspect of everyone's life until we all decide to throw our hands up and decide it's time for a new social construct.

I'm surprised someone hasn't replaced politicians with an LLM. Imagine not having to pay their salaries when ChatGPT can send "thoughts and prayers" to Maui over Twitter 24/7.


>until we all decide to throw our hands up and decide it's time for a new social construct.

In my opinion this is the most optimistic of the realistic possible outcomes. In the past when automation put a factory worker out of a job, they were just told to go back to school or "learn to code", which isn't actually a solution for most people. These LLMs disproportionately impact people further up the socioeconomic ladder than prior waves of automation did. Maybe our uneven society means that this wave of disruption, hitting a more powerful group of people, will be more likely to cause an actual change to how we organize society.


I am absurdly and instinctually confident that LLM personas will replace at least the public face of politicians almost instantly. The problem is the back end work.


Politicians are just a perfect role to disrupt. Problem is of course they are a monopoly so can't be competed with and are protected by law/constitution so can't be changed directly.

Their whole job is to 'represent' their constituents. An LLM can poll the sentiments of the people far more effectively than they can. I'm sure it could be programmed to accept bribes too, to weigh rich people's opinions higher. I'd love to see votes done by 100 different LLMs instead of Senators (a hyperbolic, non-literal statement, but interesting as a thought experiment, I hope).

Politicians should still propose new and altered legislation but actually voting, and being informed to vote, could be massively improved.



It’s not a new social construct. It’s a very old one: a thin layer of rich people, and a large number of poor people fighting to survive because they don’t have anything to contribute beyond their own manual labour.

Head out of the developed world and you can see this type of society everywhere.

This meme of AI -> upheaval -> basic income utopia has got to die. It’s wishful thinking. It’s “clean coal” for programmers.


What do you suggest instead? Because without UBI or redistribution of profit when everything is automated, I don't see any other solution.


"The privileged will risk absolute destruction over the surrender of any advantage"


Is this satire? If you think politicians can be replaced, then you must be dreaming quite hard sir. They are running the show.


This was a plot point in Avenue 5. The office of the other president.


> The threat of these models isn't that they'll go all Skynet and kill everyone, it's that they'll cause a lot of economic devastation to people who make a living

Yes. This is pretty much my only concern about these models, and I'm powerfully concerned about this. It's hard to see how this will lead to a good place. It seems more likely that this will lead to increased poverty and multiple socioeconomic crises.

I am even more concerned that very few people are talking about this, and none of the power players in this space are, except for occasional mentions in passing of fantasies like UBI.


> I am even more concerned that very few people are talking about this.

People have been talking about the threat of automation since the very beginning of the industrial revolution. It just never plays out nearly that badly, and short-term disruptions are always outweighed by long-term efficiency gains within ~5 years or so; even those who experience the worst career disruption tend to end up better off within that time frame.

I certainly would not like for my career to be disrupted for ~5 years, but the alternative would be worse.


> even those who experience the worst career disruption tend to end up better off within that time frame.

That hasn't been my observation at all. In the US, there are large swaths of the nation that still haven't recovered from the last similar event.

To add additional worry, the last time, everyone was told that the way out was to "upskill" into knowledge and service industries. Which a lot of people did, and those people were fine. But what are people to do this time? "Upskilling" back to physical jobs can only absorb so many workers, particularly since there aren't as many such jobs as there used to be.

This is all why I'm so concerned. I don't think history gives us any real reason to be optimistic here. In the very long term -- a couple of generations, say -- perhaps. But in the meantime? Even ignoring the ethics of some people deciding that others are expendable, the people being kicked to the curb will still have to find a way to eat, keep a roof over their head, etc.

If even 10-20% of the population can't do that, we're in big trouble.

All that said, nothing would make me happier than for you to be right and me to be wrong.


> People have been talking about the threat of automation since the very beginning of the industrial revolution.

And the prediction that only the ownership class would benefit from technological improvements is at least as old as Marx.


And it's proven to be unequivocally true hasn't it? Just take a look at the accumulation of wealth since 1970.


I am very curious about what pre-1970 hardware and software you posted this from.


I've been asking for years, if we have all these computers, why do we need so many people in offices? Now we seem to have passed "peak office", with much help from the pandemic.

If everything you do for money goes in and out over a wire, be very afraid.


From my vantage, short-term (3-5 year) fear seems unfounded. As a software engineer, I can clearly see what ChatGPT and its LLM ilk can and can't do that I can easily do myself. LLMs clearly accelerate my access to API documentation and provide excellent outline code. But hallucinations are omnipresent and often necessitate additional iteration and rethinking of development approaches. I think the productivity boost from LLM usage is smaller than many credit it for.

The intensity of employment displacement fear comes from an illusion that LLMs have agency. AutoGPT is not much more than an experimental repo, and there isn't a viable alternative yet. "WHEN YOU COMMAND AN LLM, YOU ARE THE AGENCY"; LLMs are mere extensions. Don't sell yourself short: prompt crafting/engineering is where the agency lies, and it requires real knowledge and context to empower you to use LLMs effectively for successful software engineering.


> The intensity of employment displacement fear comes from an illusion that LLMs have agency.

I don't think so. My concerns have nothing to do with agency, anyway. Nor are my concerns limited to (or even primarily about) impact on software engineering specifically.

Even if LLMs perform worse, if using them will save companies money over employing people, then those people are gone.


LLMs have agency when given a goal and connected to an API that can do something. This is likely to be a problem now and then.


I don't think so. Junior engineers will learn much, much faster than in the past (think about how much more effective GPT-4 is as a learning tool than the "rubber ducky method" or manpages or even Stack Overflow).

And a part of their role will morph into prompting GPT4 (much like this senior engineer has started doing).

If GPTx ends up in the narrow area where it's universally smarter than junior engs but definitely not capable of being a senior eng, then junior engs will just shift to the little remaining work for senior engs, shadow them for months to years like an apprenticeship.

Of course in that case the total number of eng needed will also decrease (already only a small percent ever get good enough to be considered truly senior), so there will be selection bias toward more intelligent engineers who are a step above GPTx. If none are left, then the profession will be gone and there will be no problem.


> I don't think so. Junior engineers will learn much, much faster than in the past (think about how much more effective GPT-4 is as a learning tool than the "rubber ducky method" or manpages or even Stack Overflow).

That's bunk. The OP literally no longer "feel[s] the need to hire junior[s]" because he can ChatGPT that work. How are they going to learn much faster at a job they won't be given the opportunity to have?

> If GPTx ends up in the narrow area where it's universally smarter than junior engs but definitely not capable of being a senior eng, then junior engs will just shift to the little remaining work for senior engs, shadow them for months to years like an apprenticeship.

That doesn't make much sense. That kind of apprenticeship would be pure charity, so it's not going to happen. No one is going to learn to be a senior engineer in "months," and no one (except someone's rich parents) is going to pay for someone to sit around unproductively in an office for years while they learn. Even interns are required to produce output that adds value. They do that by successfully completing junior-level tasks that need to be done well.


It's not charity, it's capacity planning but at a lower scale than before.

There's always a set of junior engs with high growth potential who are being trained to become senior. They will continue to be hired, albeit with less simple work than before, because most companies do run scenario planning like "what if X left, who is our backup?"

The juniors who are not expected to ever grow to that level would no longer be sought out for simpler tasks as those tasks will be automated away.

Net impact is fewer total engineers, but those who remain are at a higher level of average skill.


When I hire juniors it is because I’m placing them on a path to independent decision making and autonomous work, as fast as possible.

They are to become an independent person, taking independent actions, aligned with the current vision and goals.

I don’t want a permanent increase of my own workload which is what working with chatgpt feels like.


You think GPT4 is more effective than learning how to read a manual? Or some of the best SO answers?


Yes, even GPT-3.5 is better. I am in uni, and LLMs are probably the best teachers I have had the experience to learn from (and I have had some great teachers and professors). They work even better if you feed them the content of a book/manual/documentation as a reference.

They do suck at solving problems correctly; however, if you give them an incorrect solution and ask them to spot mistakes, or just ask for a general method to approach a problem, it works out.

However, they might not yet compare to the best humans. The best SO answers probably represent 0.01% of the answers, which is a high bar. I am certain amazing teachers and professors exist out there in the world whom LLMs can't beat yet, but the average can't compete.


> Yes, even GPT-3.5 is better. I am in uni, and LLMs are probably the best teachers... They do suck at solving problems correctly...

The discussion was specifically about using LLMs to write software. Not about university essays or articles or exams. Are you claiming GPT-3.5 is better at writing bug-free software than the average software engineer?


No, please read my response again. My claim is that GPTs are better than human teachers for most* domains, including software.

However, I do think a framework needs to be developed for formally learning any particular topic. If you are self-learning using just ChatGPT, you might miss out on a few key things. I haven't used it much personally, but the Khan Academy bot is close.


Yes, mainly because of the work of experts which went into GPT4.

For example, Llama is nowhere close (even if it's pretty good).

You can think of GPT4 as a way to flexibly access a lot of knowledge from domain experts. Sure, sometimes that flexibility hallucinates things, but it mostly works and we can verify a large part of it.


GPT explains things in a personalised way; it's in no way in the same league as manuals. It also answers in each user's native language.


Why would you take on a junior as an apprentice, then, if they won't generate any value at all?


Computers have been putting people out of jobs since back when "computer" was a human job title and not a machine. It's always ended up creating more jobs than it eliminated in the end; I don't see why the future will be any different than the past.


Seems like a logical fallacy to assume that the past extrapolates cleanly into the future. "It was okay last time" isn't a good enough argument.


Well, I mean, unless someone comes up with a theory why this time is different from last time, then isn't my argument just modus tollens? That's pretty much the opposite of a logical fallacy.

This time may very well be different, but there would have to be some additional factor and nobody has given a compelling answer as to what that might be.


> Well i mean unless someone comes up with a theory why this time is different from last time, then isn't my argument just modus tollens? That's pretty much the opposite of a logical fallacy.

History isn't math, dude. This time is always different from last time. The fallacy is making the claim that it's the same (conveniently ignoring all the differences that make it different).

Another mistake is taking an aloof perspective. A lot of changes that "turned out OK" from that perspective were pretty terrible for the people who actually had to live through them.


> (conveniently ignoring all the differences that make it different)

If someone wants to bring up some of those differences, by all means. The reason I am unconvinced is that nobody ever does.

> Another mistake is taking an aloof perspective. A lot of changes that "turned out OK" from that perspective were pretty terrible for the people who actually had to live through them.

Sure, I'd agree. But this is moving the goalposts quite a bit. If the claim was simply that some industries might experience some level of short-term disruption due to an emerging technology like AI, and that it will probably suck for the individuals being disrupted - I don't think anybody would disagree. It also wouldn't make AI exactly unique - short-term disruptions in various industries due to changing conditions happen all the time.


Every time there was technological advancement, the number of jobs for horses increased. For example, the invention of the railroad meant they no longer carried the mail long distances, but the greater economic activity still led to greater demand for horse labor. There may have been some shifts in what jobs they did, but the trend was clear. More technology led to more horse labor.

And then the automobile was invented. And over the next few decades the demand for horse labor tanked. Now the demand for horse labor is a tiny fraction of what it was a century ago.


> I no longer feel the need to hire juniors.

I hear that from a friend in the legal business. Less need for paralegals. Unclear yet if the need for new lawyers will be reduced.


Maybe paralegals could be replaced by even cheaper clerks with ChatGPT?

Why pay a law school graduate as a paralegal when you can hire an associate-degree grad with ChatGPT to do the same work?


>> it's that they'll cause a lot of economic devastation to people who make a living through labor requiring skill and knowledge, especially future generations of skilled labor.

Occasionally I would see clips from or read reactions to Idiocracy, and be left scratching my head, because somehow, somewhere, there have to be people who are thinking. The whole conceit of the film is that there are no smart, curious people because those traits have been bred out of the population. That never made sense to me, because you still have to have some smart, curious, creative people somewhere to keep things moving. Our society is quite dependent on the people who silently keep things running in the background.

I can however envision a world where early curiosity is discouraged, and supplanted by a technology that can fill the holes of the entry-level smart people. When everyone is discouraged from starting, and the existing participants age out, then maybe you can get a world where there are no new smart, curious people.


> That never made sense to me because you still have to have some smart, curious, creative people somewhere to keep things moving. Our society is quite dependent on the people who silently keep things running in the background.

Regarding Idiocracy, one of the background conceits of the film is that those kinds of people set up automation to keep things going before they died out (for the reasons clearly explained at the start of the movie). If you pay attention, everything in that world is automated: a diagnostic machine with a Playskool interface (https://www.youtube.com/watch?v=hmUVo0xVAqE) is what's actually doing the doctor's job, a major company is run by a computer the CEO doesn't understand (https://www.youtube.com/watch?v=jBFREFtFEgs), etc.


I use ChatGPT very actively for programming, among other things, and at no point feel threatened - rather, empowered. No burnout either, as I just work as usual. It just replaced Google Search and lots of typing.


> more senior-level people has been shut down (like the OP is doing).

I am already seeing this: companies are desperate for senior developers, but at the same time they don't want to hire juniors.


>it's that they'll cause a lot of economic devastation to people who make a living through labor requiring skill and knowledge, especially future generations of skilled labor.

If a task can be completed satisfactorily by an automated computer program, was the task really "skilled labor"?

I ask this sincerely, because some of the occupations being replaced/evicted (e.g. copywriting) were clearly given more skill value than they should have been.


You could've said this about calculators


It is endgame when AGI comes to fruition (which it will; it is just a matter of time, whether 10 years, 50 years, etc.). Robots with AGI will be the ultimate life form in the universe and the final evolution of "life" on earth. It is laughable to think something so much more intelligent than people will somehow become a slave to us and do our bidding.


Can you elaborate on how you're actually using ChatGPT? I'm a developer and I haven't felt any need to use ChatGPT constantly.

What tasks are you delegating to ChatGPT that were previously done by humans? Most of my input from others is regarding current information specific to the task at hand. I don't see how ChatGPT would have any idea what I'm talking about.

Do you have some specific examples you could share?


I have a bunch of examples myself. Here's a good recent one (prompts are linked about half way down the post): https://simonwillison.net/2023/Aug/6/annotated-presentations...

A few more:

- "Write a Python script with no extra dependencies which can take a list of URLs and use a HEAD request to find the size of each one and then add those all up" https://simonwillison.net/2023/Aug/3/weird-world-of-llms/#us...

- "Show me code examples of different web frameworks in Python and JavaScript and Go illustrating how HTTP routing works - in particular the problem of mapping an incoming HTTP request to some code based on both the URL path and the HTTP verb" https://til.simonwillison.net/gpt3/gpt4-api-design

- "JavaScript to prepend a <input type="checkbox"> to the first table cell in each row of a table" https://til.simonwillison.net/datasette/row-selection-protot...

- "Write applescript to loop through all of my Apple Notes and output their contents" https://til.simonwillison.net/gpt3/chatgpt-applescript


I dunno what you do for a job, but if those are your hard problems, or even medium problems, then I think you lucked out big time.


I never said these problems were hard.

But... they're things that require research. Do you know how to loop through all of your Apple Notes using AppleScript off the top of your head?

That research isn't free: it takes time.

As someone who hasn't done much work with AppleScript before, I would guess it would take me about half an hour to figure out how to do this. But there's a risk that it might take longer.

So the sensible thing is probably not to take on that project at all! I don't care enough about solving it to invest the research time.

But the time taken to chuck a prompt through GPT-4 and then test the results to see if it works is less than a minute.

I wrote more about how this is encouraging me to be more ambitious with my projects here: https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen...


100% agree with this. I used ChatGPT earlier today to give me networkx code for computing the connected components of a graph and visualizing the graph. This isn't hard to do, but I don't use networkx all that often and I forget the exact API. I could go to read documentation and piece together an example myself, or I could ask ChatGPT for an example, which tends to be much faster than doing the former.
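
For concreteness, the whole answer is only a few lines (a sketch assuming networkx and matplotlib are installed; the edge list is made up for illustration):

    import networkx as nx
    import matplotlib.pyplot as plt

    # Hypothetical graph with two components.
    G = nx.Graph([(1, 2), (2, 3), (4, 5)])

    # connected_components yields sets of nodes.
    print(list(nx.connected_components(G)))  # [{1, 2, 3}, {4, 5}]

    # Visualize the graph.
    nx.draw(G, with_labels=True)
    plt.show()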


I Googled "applescript loop through apple notes" and this was the first result: https://apple.stackexchange.com/questions/225938/notes-scrip.... Looks exactly like what ChatGPT came up with for you minus the dialog.

I then Googled "applescript print to console" and got this: https://stackoverflow.com/questions/13653358/how-to-log-obje... ; admittedly this answer took a few minutes to read through, but the second most upvoted answer recommended using log, which your ChatGPT-developed solution (eventually) used.

This all took me < 5 minutes.


Yeah, those StackOverflow posts are the perfect illustration of why I find ChatGPT so much more useful for this kind of thing.

Compare them to my original transcript. They don't provide me with enough information: how do I actually run that code?

I got ChatGPT to spit out a zsh script, which caused it to show me how to use "osascript -e".


You don't get the point.

I work in Deep Learning Research. I don't get any help at all from ChatGPT for my core job. Copilot spews gibberish, too.

I do get enough help about peripherals. Some weeks ago, I needed help with Flask and HTML deploying a model to show it to stakeholders. (I learned Flask some years ago, but not needing it regularly, I forgot enough.)

The data cleaning, preprocessing, model training, making it better than humans were the hard tasks.

Deploying a Flask app with a simple HTML frontend was the easy task. But easy != free. It would have required 2-3x more time researching how to do exactly what I needed, which I did with Copilot and ChatGPT in ~1 hr.
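
To give a sense of the scale of that "easy task", here's a minimal sketch of a Flask app with a bare HTML frontend (the predict function is a hypothetical stand-in for the real model):

    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    PAGE = """
    <form method="post">
      <input name="text" placeholder="Input for the model">
      <button type="submit">Predict</button>
    </form>
    {% if result is not none %}<p>Result: {{ result }}</p>{% endif %}
    """

    def predict(text):
        return len(text)  # placeholder for the actual model call

    @app.route("/", methods=["GET", "POST"])
    def index():
        result = None
        if request.method == "POST":
            result = predict(request.form["text"])
        return render_template_string(PAGE, result=result)

    if __name__ == "__main__":
        app.run(port=5000)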


I think Simon has a lot of experience solving hard problems: https://simonwillison.net/about/


That's irrelevant; we are talking about solving hard problems with ChatGPT, and none of the examples are even slightly complex.


Not sure where this idea that my post was about "hard problems" came from.

I'm excited about using ChatGPT/GPT-4 because it takes the time I need to figure out how to do something "easy" down from half an hour to sub-5-minutes.


They’re probably not hard problems, but if you can take care of the easy and medium problems with ChatGPT, that saves so much bandwidth and leaves time to tackle the fun and "hard" problems.

Also, take a look at the problems and projects simonw actually tackles.


I use ChatGPT for similar. I can spend time looking into various languages/libraries I don't use often and figure it out. Or I can ask an LLM and have an answer I can validate in the same amount of time it takes to start browsing search results.


You're missing the point. These LLMs which are replacing "junior developers" aren't what's going to be used for "hard or even medium problems", because you don't just give junior developers "hard problems" to solve in isolation. Now try to give some thought as to the types of problems junior developers are suited to solve. Now apply ChatGPT to those scenarios.


I felt the same, SWE with 20 years experience. Absolutely none of this is relevant to anything I’ve been paid to do for my whole career.

If someone on my team was doing things this unusually, they'd probably be let go.


Dude, simonw co-created the Django web framework


Sorry, but these examples are not impressive at all and by no means representative of any serious programmer's workload.

Programmers are paid not to bang out code, but rather to figure out the mess and crap of the existing codebase and how to selectively add one or two lines that change the system's behavior while keeping it stable.


Part of the trick of making the most of LLMs is figuring out what kind of things to apply them to.

I wouldn't expect much from them for figuring out gnarly changes in huge existing codebases (at least not yet).

But they've been actively encouraging me to write smaller, tighter tools - the good old Unix philosophy - for which they are extremely well suited.


This comment makes me feel as if I've been taking the wrong approach to using LLMs in my day-to-day work. I haven't been able to get much value out of them, as a large majority of the work I do requires a good deal of context, with some repo-specific problems.

Our existing tooling and helpers lack modularity to begin with. I'm now thinking it would be a better approach to start with having it build smaller tools and patch them together in more useful and interesting ways myself, as opposed to being upset that it can't deliver complex, context-aware solutions.

Still not buying into the hype that LLMs will replace all software engineers by 2030, since I think the nature of our work is not to write code but to solve problems, regardless of the tools we are using. But I definitely see potential productivity gains from using the tool with a different approach than what I have previously attempted.


So basically LLMs are the new short shell script.

Fair enough, but I also don't really feel this is threatening anybody's job.


I think it's pretty useful to share concrete examples like this. I basically always have a ChatGPT4 Code Interpreter (CI) tab open these days when I'm developing - it's usually a much faster/better go-to than Google or SO for looking stuff up. ChatGPT is great for all the random stuff I don't care to remember the syntax for.

* I'm always using it to munge/generate tables/csv/markdown/json - you can basically throw in any copy-and-paste from a random PDF that's some weird gobbledygook of tabs, spaces, and newlines and get something cleanly formatted. On the one hand, it seems like a waste of computation, but on the other hand, it's way cheaper than my time and there are so many tasks that require using poorly formatted output. Even better, CI will of course write awk/sed for you if you need to do any automation.

* I'm always forgetting the syntax for named byobu sessions (it happily wrote a script to help with that), but I've also been staging some dev servers and it was able to generate the scripts to create new named sessions and windows, attaching/creating when necessary, handling whether the processes were running, and creating the systemd units for spinning these up.

* On this same project it wrote some python scripts for managing SSH tunnels and reverse tunnels, including filtering/logging of error messages, handling jump servers, etc. This is all stuff I've done years ago (and even written lots of docs for), but it was actually way faster for ChatGPT to generate these than digging those out.

* I've been running into issues w/ some HTML5 audio output and needed to swap to websocket streaming w/ webmedia output (which I wasn't familiar with at all). ChatGPT gave me the code to swap into my FastAPI server and into the frontend code I had, w/o having to do any further research. Great.

* I hate Docker setups, and I had issues w/ Nvidia containers and GPUs not showing up w/ my docker config. I was able to pass in the various error messages and get my problems fixed without spelunking/hair-pulling. Same with figuring out some cross-container network hijinx.

* There's a bunch of one-offs that I might just not have bothered doing, that I can just ask it to do as well - eg, I've previously written code for poisson distributions and the like, so I knew what to ask for, but it would have been a huge PITA to dig out exactly how to do it, and it took like no effort to just ask GPT4 CI to figure out a one-off I just wouldn't have done otherwise (a sketch of the flavor of thing appears after this list): https://chat.openai.com/share/80fa7bc0-e099-4577-bad9-d026e7...
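
To give a flavor of that last one-off, the poisson bit boils down to something like this (a sketch assuming numpy; λ and the sample size are made up):

    import numpy as np

    # Draw from a Poisson distribution (hypothetical lambda and sample size).
    rng = np.random.default_rng(seed=0)
    samples = rng.poisson(lam=3.0, size=10_000)

    # Sanity check: the mean of Poisson(lambda) is lambda, so this prints ~3.0.
    print(samples.mean())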


Those are good examples. I'm constantly doing weird one-off things in Ruby or Bash, and maybe I'll ask ChatGPT about more of them.

I really wanted to know what vague things OP's humans were doing, but they haven't responded to anything.

I've been reading your blog since the dark ages. Thanks for the great content over the years!


I've been coding 40+ years and I use it reasonably regularly, for things that are trivial time-waster type stuff. Like I need some powershell command to automate something that I know will be automatable. Less so for coding, but I have an IDE AI code generator (Codeium) that is often good at predicting what you want to do next, especially for boilerplate type stuff.

Then there's the times you are heading into the unknown and just need a starting point; for instance, I asked it to write a discord bot that did X, Y and Z, and it pretty much gave me a good shell of a program. Didn't really have to refer to any other documentation. It's often good at finding ways to do obscure things. Quite often I find it most useful with TSQL stuff - not so much for basic queries, but there's lots of inbuilt toys I've just never come across nor care to spend the time researching.

I can't see how it would replace a junior developer though. If anything, it makes it easier for junior developers to get up to speed.


I think that we are the silent majority. ChatGPT can be mildly useful for me as a dev, but in general it actually slows me down. There have been a few times when it has really shined, but it’s not the norm.

If it was actually a life altering tool (and it might be one day) there wouldn’t need to be an entire industry of people trying to convince everyone that with just one small trick Google doesn’t want you to know, you can quadruple your productivity.


It’s immensely helpful for learning. Orders of magnitude better than Google, which has ruined its search results with SEO bait.

At the very least, it's a much more powerful Google (don't nitpick my comparison, I realize it hallucinates). Getting an answer in the EXACT context of your question is something generalized search/articles online will NEVER give you, even if you read hundreds of pages of docs all day. That is good for certain things, but not when you want to know just a single setting or atomic piece of information. I want to get the smallest amount of accurate information, specific to my problem, as I'm programming many hours per day on my own companies as a one-man show.

My search history on ChatGPT includes a few things as examples:

- specific ways SOLID principles could be applied to Go, which is a non-OOP language

- helping me quickly learn nuances of Lua for configuring neovim, specifically for weird syntax or things annoying to google (i.e. what does # mean), or what a specific error means within the context of the configuration

- more efficient top-k algorithms than what I was building for learning purposes (see the sketch after this list)

- asking it to break down the big-O complexity of certain types of sort functions and whether they differ from n log n

- helping me learn enough Rust to do a bug-fix PR that was annoying me

- x vs s in neovim config for keymap modes

- figuring out why Ruby doesn’t implement descending ranges

Etc etc etc
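
On the top-k item, the standard answer you get pointed to is a bounded heap (a minimal sketch; the data and k are made up):

    import heapq

    data = [5, 1, 9, 3, 7, 8, 2]
    k = 3

    # O(n log k) via a bounded heap, instead of O(n log n) for a full sort.
    print(heapq.nlargest(k, data))  # -> [9, 8, 7]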


Google hasn't ruined search results; instead, services like ChatGPT are generating so much spam and crap that Google Search can't really keep up. So the service you are using is itself making Google and the internet worse, and eventually that will turn ChatGPT bad as well. It's a vicious cycle.


Google has MOST definitely destroyed their search results. They did this long before ChatGPT.

This is repeated on like a daily basis anywhere online, and lots of people agree. All you will see are basically fake Amazon-review websites, or affiliate links in everything. The top 100 results are pure garbage that they choose to favor over actual websites with real reputations.


I'd love to understand this too - my experience has been that I can generally write what I want faster than figuring out what prompt will get something close to right, and then editing/revising it to make it right.

Add to this the limited usefulness for generating code that's contextual - making some method deep inside a component tree that needs to reference a service class, picking some DOM elements to mutate, etc... it requires knowledge and reasoning about the project and overall code structure.

I don't understand how folks are using it as a productivity booster, unless maybe as something like a better StackOverflow?


Yeah, I would like some examples that aren't just trivial. I have failed to use it successfully for anything I cannot simply state in a condensed singular statement or paragraph. And almost none of my work is easily condensed into a single paragraph. Coupled with the complete misunderstanding it constantly seems to have and its inability to understand nuance... I am struggling to make use of it and actually feel productive. Everything I use it for fails when I attempt to test it, and it won't do anything complex because the tokens needed to explain the idea alone are quite numerous. I guess you could have it refactor code...?


The chat part lets you start with a base and build on top of it; you don’t have to fit it all in one sentence.


For code I’ve found LLMs mostly useless, since if I don’t understand something I need to read the docs anyway, and the generated code tends to be buggy, even in React.

Where I have found LLMs useful is in generating text. Where I used to use a thesaurus, I now use an LLM to find words to name things in themed UX. But it’s not great at function or variable names; it tends to pick names that look good but don’t precisely describe what something is. LLMs are also great at generating text for role play.


I had a lot more success writing some code and having ChatGPT document it than doing the opposite. The documentation tends to be much better written than what I would have done by myself.

Indeed because ChatGPT is excellent at writing text. And because I know exactly what I want to see even if I have a hard time putting it into words myself, I can easily catch the mistakes and hallucinations.

I don't get why there is so much focus on code-generating AIs and so little on code analysis. Have AIs do code reviews, write tests and analyze the results, and so on. LLMs are awesome at reviewing code; they are able to tell you what's unexpected, and what is unexpected has a good chance of being either a bug or some key element of the code that needs attention. I think I have seen a single article about that, out of hundreds about code generation.
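To make that concrete, here is a minimal sketch of a diff-review loop, assuming the v4 openai Node SDK and GPT-4; the prompt wording and diff source are placeholders, not a recommendation:

  import { readFileSync } from "node:fs";
  import OpenAI from "openai";

  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

  // Ask the model to flag the unexpected parts of a diff.
  async function reviewDiff(path: string): Promise<string | null> {
    const diff = readFileSync(path, "utf8");
    const res = await client.chat.completions.create({
      model: "gpt-4",
      messages: [
        {
          role: "system",
          content:
            "You are a code reviewer. List anything unexpected: possible " +
            "bugs, surprising behavior, and code that deserves attention.",
        },
        { role: "user", content: diff },
      ],
    });
    return res.choices[0].message.content;
  }

  reviewDiff("change.diff").then(console.log);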


After putting some thought into this I think it has to do with the kind of developer you are. In my case I'm usually across 10-20 ecommerce websites doing various semi-unique jobs with relatively simple code.

Largely I use CGPT for work that's boilerplate/LOC-heavy but architecture-light: things like writing first drafts of React hooks and the like (see the sketch below). It's quite good with constraints like "use TypeScript" or "use X function to do Y."
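For flavor, a typical first draft I'd ask it for looks like this (a hypothetical example, not production code):

  import { useEffect, useState } from "react";

  // Debounce a changing value -- classic boilerplate-heavy, architecture-light code.
  export function useDebouncedValue<T>(value: T, delayMs = 300): T {
    const [debounced, setDebounced] = useState(value);

    useEffect(() => {
      const id = setTimeout(() => setDebounced(value), delayMs);
      return () => clearTimeout(id); // cancel on value change or unmount
    }, [value, delayMs]);

    return debounced;
  }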

I usually give it about two goes if it goes in the wrong direction on the first try. If it seems to not conceptually understand what I'm asking I generally just write it directly rather than tinkering with prompts for 20 minutes.

I also keep a couple of longer system prompts saved for use in the playground, for things like converting Vue components to React in the house style.
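The saved prompt itself is nothing fancy. A trimmed, hypothetical version of the shape, paired with the component source the way the playground/API expects it:

  import { readFileSync } from "node:fs";

  // A cut-down stand-in for the saved "house style" system prompt.
  const vueToReactSystemPrompt = `You convert Vue 3 single-file components
  into React function components. Rules: TypeScript, hooks only,
  CSS modules instead of <style> blocks, named exports. Reply with code only.`;

  const messages = [
    { role: "system", content: vueToReactSystemPrompt },
    { role: "user", content: readFileSync("MyWidget.vue", "utf8") },
  ];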


> but architecture light

It does fairly well for architecture, if you don't expect too many specifics. It, at least, works as a reasonable sanity check/brainstorm.

All of these LLMs become less expert the finer the resolution of context you give them. Keep it high level, and you still have a relatively expert assistant.


I would love to know this too. For me it’s involved too much manual copy-pasting of existing code for context, for it to feel like it’s doing much for me.


For cases like that, Copilot (with chat for context) might be more of what you're looking for. ChatGPT specifically I've been using for very light-context / general tasks whose output I then modify. I always weigh the time I'm saving against the time spent writing a prompt full of context.


Today I got ChatGPT to generate a basic TCP server template in C for an app I'm working on. If I didn't have AI, I would probably have searched for a GitHub gist, and there would probably have been a more accurate template.
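For reference, the shape of such a template is only a handful of lines; here is the equivalent sketched in TypeScript on Node rather than the C I asked for:

  import net from "node:net";

  // Minimal TCP echo-server template -- the kind of starting point in question.
  const server = net.createServer((socket) => {
    socket.on("data", (chunk) => socket.write(chunk)); // echo bytes back
    socket.on("error", (err) => console.error("socket error:", err.message));
  });

  server.listen(4000, () => {
    console.log("listening on port 4000");
  });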


My main use is TypeScript, which I am using for the first time and struggling with a bit. I'm fine with straightforward type definitions, but I often hit complicated situations I don't know how to solve. Googling doesn't really help because I don't know the abstract terms for what I want to do.

Instead I paste the JavaScript and tell ChatGPT to add type definitions. Mostly it gets it right. If it doesn't, it gets me closer.
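For example, with a made-up function (representative of the before/after, not an actual case):

  // What I paste in (plain JavaScript):
  //   function groupBy(items, keyFn) { ... }
  //
  // The kind of typed version it comes back with:
  function groupBy<T, K extends string>(
    items: T[],
    keyFn: (item: T) => K,
  ): Record<K, T[]> {
    const out = {} as Record<K, T[]>;
    for (const item of items) {
      (out[keyFn(item)] ??= []).push(item);
    }
    return out;
  }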

I don't use it for JS in general because I'm particular about how I write stuff. Though occasionally I'll lean on Copilot to fill out a utility function.


I do Mac/iOS development and am constantly asking ChatGPT about various APIs and frameworks. Apple's documentation is not great for explaining how to actually use APIs, unless you can find the one WWDC video that explains it or a sample project that they released years ago. I would normally google for sample code, blog posts, tutorials, or Stack Overflow posts. Something that might take an hour of searching and reading now takes a few seconds of just asking ChatGPT.

Even for things that I've done before, it's often much easier to ask ChatGPT how to do something than to look through my projects to find how I did it previously. It might sound lazy, but if it takes me several minutes to search through various projects to find that one time I did something, why bother when I can just ask ChatGPT and know in seconds?

I will say that yes, ChatGPT can hallucinate APIs that don't exist, and that can be annoying, but even if it does it 20% of the time, it's still incredibly valuable in the time savings the other 80% of the time it does hit.


I wonder how this is affecting what you consider knowledge going forward. This strikes me as students using google to answer homework questions and forgoing the actual “learning” part.


I don't really think of it that way. I've been doing Mac and iOS development for over ten years now. A lot of the info I'm gleaning is not design techniques or stuff that I feel is worth memorizing. It's more what functions are available to do something and what types are needed to interact with an API.

A common thing I've searched for, for instance, is the various date formatting options and types I need for managing time zones. I suppose I could sit down and learn the plethora of options, but I don't see that as information worth memorizing. Similarly, I suppose I could really internalize the complete syntax of regular expressions, but is it worth it? I've used them many times before ChatGPT, but I've never memorized absolutely all the options available to me.

The other side of the coin is that this is allowing me to make so much progress that I'm using even more APIs than I would have previously. If I had done it the old way, I might only have had the time and energy to devote to a small number of tasks, but with ChatGPT I can "explore" more territory than I could have before.


If you use Elasticsearch and are not familiar with its query syntax (I am not), you could use ChatGPT to write the queries for you.

Same for SQL, if you are not familiar with SQL.

Probably the same with Splunk SPL, Kibana KQL, Prometheus PromQL, or any other DSL that you are not familiar with.
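E.g., asking for "products matching 'laptop' under $500, newest first" in Elasticsearch gets you back something like this (a sketch: the index and field names are made up, and it assumes the 8.x @elastic/elasticsearch client):

  import { Client } from "@elastic/elasticsearch";

  const client = new Client({ node: "http://localhost:9200" });

  // The kind of query ChatGPT writes for you from a plain-English ask.
  const result = await client.search({
    index: "products",
    query: {
      bool: {
        must: [{ match: { name: "laptop" } }],
        filter: [{ range: { price: { lte: 500 } } }],
      },
    },
    sort: [{ created_at: { order: "desc" } }],
  });

  console.log(result.hits.hits);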


The problem is that you don't get to know whether the output is A-okay. You just know "it works." This is the scariest part to me, especially with a DSL or programming language I'm not familiar with.

I want to contribute, while being fully aware what I'm contributing with. This doesn't lend itself to that.


type a function signature and an opening squiggly brace, wait for "copilot" to autocomplete, press tab, ????, profit


> 1. Humans work 9-5 (or some schedule), but ChatGPT is available always and works instantly. Now, when I have some idea I want to try out - I start working on it immediately with the help of AI. Earlier I just used to put a note in the todo-list and stash it for the next day.

This sounds like the root of your problem, and entirely on your ability to enforce boundaries (which you may or may not have set for yourself). No judgment here; I think we all have struggled with this at one time or another. Or, you know, constantly...

> 4. I tried to put a schedule to use it - but when everybody has access to this tech, I have a genuine fear of missing out.

I definitely know that feeling. I think the likely outcome writ large is that this FOMO feeling will eventually subside. The economy for years has needed more developers than were available; ChatGPT and friends will result in individuals being able to do more and soak up demand that way instead of increasing supply. The long-term negative effect of this is more likely to be depressed wages instead of massive unemployment in the tech sector.

> 5. I have zero doubt that AI is setting the bar high, and it is going to take away a ton of average-joe desk jobs. GPT-4 itself is quite capable and organisations are yet to embrace it.

Another way of looking at it is that it's going to create a number of desk jobs, but those who can't adapt to the tools on the market will suffer, in the same way that people who couldn't adapt to spreadsheets, word processors, etc., certainly had fewer job opportunities than those who did. Some people are going to get left behind, no doubt; this is why I'm in favor of a robust social safety net. But even with questionable public support for those people, I don't think anyone today would suggest we retreat to an economy without such basic tools as spreadsheets and word processors.


Interesting observations. For context, it looks like you are a software engineer from your comment history, is that correct?

I'm wondering why you no longer feel the need to hire juniors because of GPT-4. Is it because GPT-4 has taken up the cognitive-load capacity you need for mentoring juniors, or do you feel like GPT "obsoletes" less experienced people?

I think ChatGPT's advice is on the right track. It sounds to me like your experience of using it is kind of like my experience of pairing with someone else of equal-ish ability: productive, but draining, due to the need to constantly pay attention. If so, why not treat it similarly? Most people don't pair all day every day, probably because of the aforementioned cognitive load of doing so.

Last, but not least, while this may seem obvious, you should remember that you are human and not a machine. You need to separate yourself from this thing for at least some portion of your day. The constant stress (and, yes, that dopamine rush you feel when you use it is a kind of stress -- stress isn't always a purely negative thing) will take its toll on you eventually. That's the "burnout" you're perceiving, and the only way to prevent it is to just not let it happen.

Take care of yourself. Socialize and interact with humans, especially close friends and/or SO's as applicable. If you have a pet, spend some time with them. Take a walk.

But, most of all, remember that GPT-x, as smart as it may appear, can't actually learn anything from experience. It can only learn from an expensive and labor-intensive process, and once its training is done, it's frozen in time forever (modulo some fine-tuning, which is essentially an extension of said labor-intensive training process). And, at the end of the day, that just makes it a very versatile, very expensive, and very useful tool, but a tool nonetheless.


I've experienced a very similar feeling.

To me it feels exactly like finding wikipedia in 2005, or getting an iphone + wikipanion in 2008. The frontiers of my mind have been unleashed. A real bicycle for the mind.

Here are some tactics I use to "turn off gpt":

1. It'll be there tomorrow. The great thing about their threaded model is you can easily find the convo and continue it tomorrow. Remind yourself of that consciously (or tape it to your monitor!)

2. You're not behind, you're ahead. 80% of Americans haven't tried chatgpt. 95% of the world maybe.

3. Don't worry about juniors. They'll still be hired because now they'll ramp up faster and produce better code, using the same tool you're using. Same thing that happened when stackoverflow became popular and junior devs stopped "reading the source code" or "reading man pages."

For all the limitations of GPT4, it truly is great at coding. Exciting times.


> 2. You're not behind, you're ahead. 80% of Americans haven't tried chatgpt. 95% of the world maybe.

idk if anyone realistically compares themselves to the abstract, nebulous "everyone"; it's more likely relative to their socioeconomic band


It seems like one of those things like VR and crypto: a technical solution looking for a problem to solve. After two years we have still not found a single good use for it, and yet it is supposed to be disruptive. If you think we have, give me an example of one app that has used it so well that it is now comfortably ahead of the competition.


>Don't worry about juniors. They'll still be hired because now they'll ramp up faster and produce better code

So maybe the seniors should be worried: if we/they don't have much of a barrier to entry, that means much more competition.


If you think writing code is the most important, or even the largest, part of being a (senior) software engineer, then sure. In my experience it's not, though. It's being able to communicate clearly, understanding and translating requirements (sometimes into code), knowing boundaries and saying no, and having a deep understanding of systems and knowing how to debug them.

Transitioning from junior to medior (for example) is much more than writing x% better code. It's the process of falling and getting back up. Being stumped and learning when to ask for help (and not just technical, what if the spec is 'wrong'?).

I definitely worry that we are leaving future generations in the dust and that there'll be an experience gap. It's a disservice to take away something from them that we enjoyed ourselves.

No sane company should run on juniors, they're an investment.


Hm, those are good points, but many of those skills transfer from other fields. So a dev with 10 years experience is still worth a lot more than a new grad, but if somebody with 10 years experience in anything (as opposed to just software) can now be a programmer it's still a much smaller barrier.


Waiting for your direct-to-Amazon book about how AI will kill software, OP. Always entertaining.

ChatGPT will likely be added to the list of dead things that were supposed to "kill" the software developer. I've noticed this pervasive attitude among what I can only term as people who actually enjoy LinkedIn. If you understand what I'm saying, you can probably already picture the annoying, over-the-top, buzzword-ridden below-the-fold post that feels like it's only designed to steal brain cells. ChatGPT might be able to kill the CRUD developer like WYSIWYG killed HTML programmers. There will be plenty of jobs no one wants ChatGPT to touch; finance, medicine, and the military are some I can imagine without much thinking. "No code" is on its, what, 4th iteration and still hasn't killed programming. We are more likely to lose our jobs to overseas outsourcing than to a stupid rock we tricked into thinking.

I am actually annoyed reading this Ask HN. The level of smugness reminds me of wantrepreneur bros. Woe is me, I'm burned out from being so productive. Gag. I'm an actual professional developer. ChatGPT does not provide oodles of value to me. A lot of our juniors and mids use it, and I often find problems with the garbage they copy-and-paste. Admittedly, the copy-and-paste is better. However, to me it reduces to the same StackOverflow problem. Maybe if they were better "prompt engineers" (lol) they might get better output. Or they could take the 30 hours needed to figure out prompts and spend them simply getting better at writing code.


Another thing I have noticed among HN readers recently: they say Google search has gone incredibly bad, so they use ChatGPT more. But they fail to understand that one of the key reasons Google search has gone bad is every Tom and his mom pushing out garbage SEO spam with the help of ChatGPT over the last two years.


> At times it feels like we are working for ChatGPT and not the other way around.

Welcome to the future, where AI subscriptions (self- or employer-provided) are required for employment, with the majority of your work being management and high-level input, where you guide and answer questions for the* AI.

*Probably "The" AI, since there will be one obvious choice for your problem space, which not using would put you at a severe disadvantage.

Seriously though, I've been feeling this somewhat too, lately. The "investment" part of ROI has shifted significantly on the "junior" side of things, where I can now do "boring" things I wouldn't normally. So I find myself doing more boring tasks, with a definite net-positive outcome, but also everything negative that you described.

The problem is that this ROI shift covers only the "junior" end of the problem space, so I'm working on more junior problems than I was before.

I think we're somewhat proving that juniors are still needed, to take these tasks. They have been empowered the most, and will still learn and feel creative, working on these problems. More senior people won't. I understand I'm saying this from a point of extreme privilege, but I think most of us need to feel creative, and "enjoy" what we're doing. That means harder problems.

Maybe it's best to still let the juniors continue to do the junior things. There's someone out there that would love to spend all day doing what's burning you out.


> ChatGPT has the habit of throwing new knowledge back at you.

That's certainly ONE way to characterize its tendency to hallucinate APIs and operating modes out of thin air.

> I no longer feel the need to hire juniors.

You've just described how you're overworked and burning out from doing too much stuff yourself. Are you sure about that absence of need?


Don't discard the juniors, maybe ask them to process your prompts for you. That'll give you some space.

I feel the opposite: I had a great experience asking GPT-4 to do some tasks for me and have been feeling like I'm missing out ever since by not using it more often.

However, I'm wary of posting work-related code into it, so I either have to come up with similar examples, which is time-consuming, or ask it conceptual questions, for which I haven't been able to make it very helpful. Sometimes I even noticed that a conversation with a colleague produced a much better result, and it wasn't even something very specific to the project. So yeah, I feel like it's a great tool, but I'm having a hard time using it productively. It definitely feels like being creative with your prompts is an important part of getting value out of it.


> Don't discard the juniors, maybe ask them to process your prompts for you. That'll give you some space.

GPT can give incorrect, bad, or non-functional code. A senior engineer that reviews GPT responses will (hopefully) spot that and rectify it right away. Junior engineers can end up being less productive and not learn a lot when encountering this.


> I'm wary of posting work-related code into it

I'm always curious when I see this. Is it about potential IP in the code? References to clients in the code? Secrets?

In my last job they were worried about it too, but decided the pros outweighed the cons. Some of our code was client-specific (CanvaMapper etc.), but we would remove brand names and then go for it.


I'm reminded of the difference between being efficient and being effective. So many of the example use cases I see people (including myself) using GPT for are unimportant short-term tasks that necessarily take head space and time away from important long-term tasks. Those important long-term tasks are the hard ones, requiring existing application context, where I see LLMs struggle. If we're not careful, we'll get DDoSed by the tasks an LLM can complete, at the expense of the other tasks. Of course this may change as things progress, but it is my observation for now.


I think I get you, though I've been thinking of it rather differently.

I feel like a lot of the evergreen hype in computing is frameworks, practices, etc., that try to break things down into a system where any junior could just fill in each piece, and of course this always collides with the larger-context problems.

Once you get to a certain point with such a system, either you have been paying attention all along, or you have no idea what you've made and how to deal with a real cross-cutting problem. You reach the point where the system's promise is really irrelevant, and you succeed based on the actual expertise you supposedly weren't going to need.

With GPT-like AI at its current level, I feel like some of these systems for breaking down programming projects are going to face an actual test, now that the junior engineers needed to run them are just some GPU costs that can be scaled in parallel, without the usual heterogeneous-resource problems of testing with a real project team.

I'm not really sure whether any of these systems will survive (or whether something learned in the process will produce a good one), but it would be proof of a holy grail that is supposedly quite important, and just the refutation of many such systems is itself a major disruption to the field.


OTOH, ChatGPT can make a good rubber duck if you want to talk through a problem. Its advice is often useless (which is fine), but occasionally it may say something useful.


I find ChatGPT is scarily BAD for rubber-ducking. You already have to have an idea of what's right and wrong in order to verify it. It is insane how often ChatGPT is wrong when I ask it things; like 90% of the time it is wrong. It's probably because everything I ask is way too specific, and because it's just pattern matching on 'roids rather than reasoning, it's impossible for it.


Agreed, you have to be very careful. The worst case, which I hit very often, is hallucination of a JS library that doesn't exist, or methods that are completely fictional. My initial reaction of "wow, that's a perfect solution" turns quickly into "wow, what a waste of my time."


> 2. I no longer feel the need to hire juniors. This is a short-term positive and maybe a long-term negative.

The way I view it, I don't hire juniors either. I'm much rather hiring the regular admin I'll have on the team in a year or two, who will take over all of the mundane stuff I currently have to handle. At that point, I don't have to ask ChatGPT for a fix, think about the fix, and implement the good parts; Zabbix will just open a ticket saying "this is broken" and someone else will take care of it.

That takes away real workload from me, and allows them to learn a lot.

> 1. Humans work 9-5 (or some schedule), but ChatGPT is available always and works instantly. Now, when I have some idea I want to try out - I start working on it immediately with the help of AI. Earlier I just used to put a note in the todo-list and stash it for the next day.

Here, my main question would be: why is ChatGPT special? I've burned midnight oil for an employer with just boring tools like Terraform and configuration management. They are paying me 9-5, and I'll work for them most effectively during that time, which at this point certainly includes ChatGPT or Copilot. But I don't really see the point of putting in work for them outside of office hours (emergencies aside), regardless of the tools involved.


I'm a software engineer and I don't find much value in ChatGPT. It provides a bit of help when writing very specific and short pieces of code for languages I know but don't use often. I'd be curious to have actual data on how other SWEs use it.


I have privacy concerns with both ChatGPT and Copilot, but I also don't get the desire to use these tools on a daily basis. I am very adept at talking to a computer, and figuring out what it's doing/wants (though it may take a while). Trying to convince a language model to act as an inbetween for me just seems like a massive hassle.

It's like delegating work to a junior who is completely untrustworthy, but instead of working to level up your junior and gaining a useful coworker, you're forever stuck with the kind-of-dumb person who needs simple things explained.

I read posts like this and I wonder if I'm really missing out by not writing English paragraphs a lot of the time instead of code.


The type of question I used to type into google, now I ask ChatGPT. For a language I know well it tends to bring a 5 minute task down to 2 minutes. For a language I don't know well it takes a 30 minute task down to 2 minutes (One weird day the site for a framework was down and I couldn't read the docs, but ChatGPT was there with the right answer).


Yeah, and while it's true it can't really solve something you have to think about that's specific to your problem domain, I find it really good for common implementations. If I'm making a music app and implementing shuffle, I just type

  function fisherYates(arr: IPlaylistItem[]) {

then press tab, and it's done.
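The completion it fills in is roughly this (a sketch; IPlaylistItem is whatever your app defines):

  interface IPlaylistItem { id: string; title: string } // stand-in definition

  // In-place Fisher-Yates shuffle.
  function fisherYates(arr: IPlaylistItem[]): IPlaylistItem[] {
    for (let i = arr.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1)); // 0 <= j <= i
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
    return arr;
  }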

Similarly, I just type

  class SortedSet:
    # api has to have add, rank, cardinality, itemsinrange (make this a
    # fluent api with min/max fns returning self each time, a get to
    # resolve it, and items() to start it)
    # use redis

And it does the zrank, zcard, zadd, etc., figuring out how to call the py-redis API.
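The shape of what comes back is roughly this; shown in TypeScript with node-redis to match the other snippets in this thread (the original was py-redis), and the v4 method names are from memory, so treat them with suspicion:

  import { createClient } from "redis";

  const redis = createClient();
  await redis.connect();

  // Rough shape of the completion Copilot lands on.
  class SortedSet {
    constructor(private key: string) {}

    add(item: string, score: number) {
      return redis.zAdd(this.key, { score, value: item });
    }
    rank(item: string) {
      return redis.zRank(this.key, item);
    }
    cardinality() {
      return redis.zCard(this.key);
    }
    // the fluent itemsinrange() builder is omitted for brevity
  }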

Similarly, when learning an ORM, I can't be bothered to scour its docs to find out how it does something specifically. I just type SQL or natural language (or at least the beginnings of it) and let Copilot fill out the ORM API for me. Afterwards I can hover over the functions it chained/nested together, the docs pop up, and I verify that it did it right and read any caveats they mention. I pretty much learned how to use Prisma just by seeing what Copilot produced.


I plug it directly into my editor (via https://github.com/gsuuon/llm.nvim) and have it fill out code for me. I write what I want with comments and ask it to fill the rest - if it's straightforward enough it basically always works. I also get it to write commit messages (based on git diff) - though I need to improve my prompt a bit as it gets verbose and I end up rewriting it most of the time. I was working on trying to feed it things like hover and tree-sitter information before I got distracted, but that'd be another power boost as well whenever I get around to it.
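Stripped of the editor plumbing, the commit-message part is basically this (a sketch with the v4 openai Node SDK; llm.nvim wires it up differently inside the editor):

  import { execSync } from "node:child_process";
  import OpenAI from "openai";

  const client = new OpenAI();
  const diff = execSync("git diff --staged", { encoding: "utf8" });

  // Turn the staged diff into a draft commit message.
  const res = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "Write a terse, conventional commit message for this diff." },
      { role: "user", content: diff },
    ],
  });

  console.log(res.choices[0].message.content);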


I used the hell out of copilot when plugged into vscode.

I became more of an editor, less of a coder.


Perhaps I do not understand what you're actually using ChatGPT to do, but I can't see it taking over the role of junior developers anytime soon.


Yes, the post is noticeably vague about what task the poster is actually doing.


The future of "prompt engineers" horrifies me. Prompt engineering makes me think of search engine optimisation treated as if it were a form of "engineering." What I mean is that you are positioning yourself based on your "ability" to get useful results out of some gigantic proprietary system, subject to the whims of its corporate owners, where the skills are at best cargo-culting and at worst self-delusion.

All the while, all you're really achieving is increasing the corporate masters' profits at the expense of those getting left behind who don't want to participate in this wholesale devaluing of human talent across wide swathes of careers.

I worry for my kids' generation if we can't find a way for billions of people to add value again, rather than squabble over the privilege of serving the decreasing few who profit from this.

Just my somewhat irrational luddite 2c worth.


> I no longer feel the need to hire juniors.

This seems to be contradicted by the text that follows.


This seems like a pretty good take: you found that you could "get rid of juniors" by working in an unsustainable way. Why not work with now-super-productive junior employees that can spread the cognitive load?

It would seem odd if this were the one time in the history of computing where a big productivity boost didn't just lead to increasingly big/complex software.


> Why not work with now-super-productive junior employees that can spread the cognitive load?

It's all about scaling up or out. Having co-workers is like scaling out: you have to worry about aligning your goals, and there's a lot of communication overhead in general. Using ChatGPT is like scaling up: you're just upgrading your skills and intelligence, and you still have very low latency, because there is only one mind in control.


As the OP's problems suggest, you can only scale "up" with ChatGPT to a certain point before it starts to introduce new productivity problems and even problems related to burnout and being able to properly digest and understand everything being thrown at you. With respect to teams, it also introduces risk by ensuring that larger workloads that may previously have been shared by multiple people are concentrated to a single person.


Sure - it's just like moving to a better IDE, or a higher-level programming language, or a 10x faster CPU (which has happened a couple of times now in my career), or a better compiler. All of those things just increased the expectations and ambitions for what a team could accomplish/manage, though.


It is not clear what it is that you are actually doing, but consider the possibility that there is a need to replace you with a few juniors + an LLM.


Do people really think they don't need juniors because their IDE has better autocomplete? ChatGPT and LLMs are very cool, but I'm surprised that people think like this. It just makes your juniors more productive, so you can have them do more.

Github copilot and other tools help you scale up, not out. At the end of the day teams may be smaller, but someone needs to guide it.

I'm not sure what you do, but I can't see it for most SWE jobs. These posts make me question whether people understand LLMs or have zero quality controls at their workplace.

It almost feels like every day people just make up wild stories that seem untrue.


Reading the responses here made me think of the slow-food movement; maybe there'll be a "slow code" movement, giving us back our time to really work through problems. Already there are young people who prefer to make phone calls rather than write; the pendulum swings back. I have an older co-worker who prefers to talk on the phone, I prefer asynchronous communication, and I can imagine having to adjust even further toward synchronous voice comms if we bring on a younger co-worker. My point is that fashion is not just for clothing.

Regarding ethical AI, I recommend reading this, about five people raising concerns about how LLMs are developed and used:

https://www.rollingstone.com/culture/culture-features/women-...


I use ChatGPT (with the paid GPT-4 model) for certain things when I'm stuck. I use it to explain/rephrase concepts that I'm confused about. I occasionally generate some anonymized stuff like short code snippets, devops configs, boilerplate, and tests.

Are people using it to pump out full apps and services, or what? Any time I've tried that, the result quality is poor, even after lengthy explanations of what I'm asking it to build. Sure, sometimes it saves a little bit of time, but sometimes it also wastes my time by giving me nonsense or never zeroing in on what I'm asking for. I don't see how people are becoming so much more "productive" with this, unless they're mostly talking about non-code written content.

It doesn't help that my company's infosec policies forbid putting any proprietary data or code into these AI tools, hence why I only ever ask for short snippets.


I'm interested in specific use cases too. Anecdotally, I only see people claim either huge success or no success with these tools. Where are the dirty war stories? It's been out for about a year, right?


> 1. Humans work 9-5 (or some schedule), but ChatGPT is available always and works instantly. Now, when I have some idea I want to try out - I start working on it immediately with the help of AI. Earlier I just used to put a note in the todo-list and stash it for the next day.

This is a time management problem and a setting boundaries problem. When I leave work, if I have an idea (work related) I jot it into a notebook to review the next day. After I leave (no later than 1630 every day) I am not obligated to work, so I don't. I exercise, read, study, spend time with my wife, play with the cats, whatever I feel like doing. If they want me to work 24x7, they can increase my pay by 20x because they'll only get a year of use out of me and I can retire with that income in a year or so without issue. They pay for 8 hours, they get 8 hours.

> 2. The outputs with ChatGPT are so fast, that my "review load" is too high. At times it feels like we are working for ChatGPT and not the other way around.

Then slow down. See my response to (1). Your time management skills are in desperate need of development. Ask less of ChatGPT. Only ask enough to complete an objective, no more. Don't ask it for information faster than you can process it. And if you feel the need to ask it a million and one questions, delegate processing its responses to others (bring back your juniors).

> 3. ChatGPT has the habit of throwing new knowledge back at you. Google does that too, but this feels 10x of Google. Sometimes it is overwhelming. Good thing is we learn a lot, bad thing is that if often slows down our decision making.

> 4. I tried to put a schedule to use it - but when everybody has access to this tech, I have a genuine fear of missing out.

FOMO is real, but like most fears it's a waste. There is no existential crisis. You are not being chased by a bear; you appear to be a professional, so you have a steady income, you know where your next meal is coming from, and you have shelter. Your fear is unwarranted, even if normal. Seek out counseling or therapy to learn how to manage fear and anxiety more effectively.


> Humans work 9-5 (or some schedule), but ChatGPT is available always and works instantly.

So don't use it outside of work hours.

If you feel compelled to solve work problems outside of work hours, that isn't a ChatGPT issue. It's just vanilla workaholism.


Exactly. At most, make a note of it, tackle it the next day.

Work to live, don't live to work and all of that.


What are you working on that's so urgent?

The answer is probably that it's not.


I have been staying mostly away from heavy use of ChatGPT so far. I have experimented and seen the potential. I also try to keep up on the tech news. Mostly been too busy with life and work to dabble seriously here so far.

But I am still experiencing the stress nevertheless.

The kind of burnout I am experiencing is probably the opposite of yours, in a way. I have been interested in data and AI for a while, since it is related to my area of work as well. I have been meaning to do a lot of hands-on learning and experimentation in this GenAI space ever since it blew up a year ago.

(But I, by nature, usually procrastinate on things and ideas, waiting for an ideal time -- where I am free of immediate priorities and distractions -- that rarely ever comes.)

The constant onslaught of new developments and new tools and frameworks and models keeps me feeling like I am stuck in quicksand with lead boots while all these developments are whizzing past me like a train I missed. This has been a cause of stress and mild anxiety for many months now, akin to burnout or helplessness.

(A part of me secretly wishes that the current flurry is just the initial mad rush with lot of trial and error -- and the real valuable learnings and fulfillment will come to those who start a little later -- once the dust settles and others burn themselves out. One can dream.)


"new developments and new tools and frameworks and models keeps me feeling like I am stuck in quicksand with lead boots while all these developments are whizzing past me"

Exactly. Even as somebody with a background in NLP, some things I developed over the course of two or three months in summer 2022 were promptly replicated by winter and nicely packaged into accessible, open-source modules that anybody could run in a few lines of code, aside from some bespoke bits like processing (QA on documents, etc.). After a few such experiences, you become wary of working on the edge, since progress is so fast you could waste lots of time on convergent evolution. It really is hard to predict which parts won't succumb to that in the current racing stage.


I am in a similar position to you, and I feel like I am avoiding burnout by shifting the productivity ChatGPT gives me toward my own projects. I mainly work for clients as an independent contractor (web dev); initially they got A LOT more output from me in the hours I worked. I stopped doing that and actually started to reduce my hours while giving them the same expected output. 8 billed hours used to be 4-6 hours of sitting behind my desk; now 8 billed hours are 2-4 hours. I would have expected to burn out by now if all I did was client work, even if they paid me by output.

Now I have a mountain of free time, and all of it goes into working alongside ChatGPT on my own startup. It's a non-AI B2B app for which, before ChatGPT, I would have needed at least 2 other people working with me. Instead it's just me, 8-12 hours per day, next to my client work. It is absolutely NOT work-life balance, but the potential pay-off, which is slowly starting to be realized (the 2nd B2B user signed on just yesterday), offsets any fear of burnout at the moment.

I often feel that people who say ChatGPT can't help them be productive, or that it makes them "10%-20% more productive", are too far ahead of their own progression curve, or just don't know how to prompt well. For me it's easily a 2-5x productivity boost. I stopped talking with some friends who were constantly sending me ChatGPT memes or tricks to get the AI to see weird things; it's been ridiculous.

> Personally, in my early 40s, I feel my brain is back in 20s.

I am nearing my 40s now, and it feels the same.


I'm a mid-senior level engineer, and I experienced the following beginning in December, when I first start using ChatGPT:

December - January: I need to explore this brand-new tool called ChatGPT. This could be as big of a game changer as Google must have been (I was not in the workforce back then).

January - March: This thing is a TOTAL game changer. I can do work at a Senior+ level, when I was barely scraping by as a mid just a few months ago. I'm going to learn as much as I can as quickly as possible, using this new tool.

March - May: Uh oh. The team has new expectations of me. I'm sure I can deliver!

May - July: I don't think I can deliver. I'm working constantly and feel burnt out.

====

Since July, I've changed projects to something less stressful (I work in consulting), and put a hard stop to computer-related activities after work hours. I'm slowly regaining my interest in programming, after a few months of not even wanting to look at code.

In retrospect, I burned myself out because of the tremendous opportunity sitting right there with ChatGPT, and also the fear that all my peers would use the tool as religiously as I would, and that I might be out of a job if I didn't work at a breakneck pace to acquaint myself with it. I wrote articles and spoke on podcasts about this new tool, just to present myself as a (false) expert in this thing.

The lesson I learned is that it's okay to be excited about new technology, but I should pace myself. Most of my peers will take a lot longer to get up to speed, so as long as I stay abreast of trends I'll be just fine.


I've been exposed more to pipeline automation, like Argo CI/CD, than to GPT-4, and I can see that taking away a lot of jobs. If customers are connected to the code repository, then developers just need to push commits and Argo will take care of the rest, including getting the latest code to the customer.

*This may be a bit of an oversimplification, but Argo showed me that the whole pipeline can be automated.


> I no longer feel the need to hire juniors. This is a short-term positive and maybe a long-term negative…. A lot of stuff I used to delegate to fellow humans are now being delegated to ChatGPT. And I can get the results immediately and at any time I want.

In college, one of the classes I took was a basic digital circuit design class. How to build a CPU out of logic gates and things of that nature. A large part of the class was about minimizing a circuit using Karnaugh maps. The professor pointed out, however, that no one in industry actually does this anymore since the introduction of ESPRESSO in the 80’s. Before ESPRESSO, any chip designer would have some junior engineers slaving over Karnaugh maps all day to optimize their designs, but these engineers were replaced overnight by an algorithm.

Did this actually reduce the number of EE’s working in chip design? Maybe it did; I wouldn’t know. And obviously, GPT-4 is a much more general system than ESPRESSO. But it struck me as an interesting parallel.


Is GPT-4 significantly better than 3.5? Because I am seeing it make basic, common-sense errors that a person would never make. Things like "only give me results that fit these criteria" or "remove any results that are about X." It repeatedly makes the same mistakes over and over, even after I point them out. This is in addition to just getting basic facts wrong.

And that's not even mentioning the constant milquetoast disclaimer it has to give at the end of each answer, which, again, it refuses to stop including, even when I explicitly tell it to stop.

At this point I wouldn't trust it to make me a cup of coffee, so I am a bit baffled by these threads. I have primarily found it useful as a "text calculator" to reformat text, add commas to CSVs, etc., and even then it takes a lot of trial and error.


> Is ChatGPT 4 significantly better than 3.5?

Yes. You should try it. It is genuinely useful.


Thanks, I’ll give it a try.


I am in my late 30s ;) I understand you. We are... singularity. This good feeling when ChatGPT does what I want. I am working on my pet project and struggling to convince my friends and coworkers how cool my project is, but ChatGPT understands what I am doing and is ready to help. It reminds me of my addiction to online video games: World of Tanks, Team Fortress 2, PUBG, Dead by Daylight... It is a shot of dopamine when I win and anger when I lose. Burnout after some time... can't play anymore, but I have to... Oh, I have wasted (or invested) a few full years (like when you add all hours on Steam and divide by hours in a year) in video games (instead of sleep). At least now I can feel when burnout is coming. I have learned when to stop. No magic. Just time.


Maybe try going without ChatGPT for these tasks now, e.g. doing them yourself first?

My guess is that the marginal difference between your abilities and ChatGPT's is diminishing.

My experience with ChatGPT is it’s really good at generating non-controversial answers to well known topics.

Those are only so valuable.

Not as valuable as your creative abilities.


So you're basically eliminating a bunch of jobs and are now starting a pity party on here that YOU are burned out?

As someone unemployed for a year, I feel so sorry for you. I hope you accept my tears for your burnout.


The more I read about (and from) people being productive using LLMs, the more I think most people are just lazy slackers. Any time I try to use an LLM to solve my problems or deal with my tasks, it shows at best mediocre, half-baked, half-assed results.

The only thing ChatGPT is good for, in my case, is replacing Google for knowledge access. But I have to validate its answers in case they are hallucinated or just plain wrong.


I have had a similar issue all my life, because when I think about something I just start creating it.

There is a simple solution but it requires willpower and a good dose of optimistic nihilism.

Nothing we do is that important. Focus on living life in a way that pleases your imperfect self, not on chasing the next million-dollar trend. If I counted all the chances where I missed out on millions, I could write a book about failure. I'm still able to live a good life, so why should I care whether I have 10M in the bank? You don't need anything; you just need to be content with what you have.

Even if AI plunges the developed world into chaos, you'll still have the chance to escape on some remote mountain and farm goats. It doesn't sound too bad.

Even if we get killed by an AI-piloted robot army, did you really have a chance to make a difference? Enjoy not having problems once you no longer exist.

Once you're past your FOMO, controlling your screen time should be enough. It's very important especially at night so you don't wreck your sleep.


Jesus. This is a kind of terrifying post. Feels like a pretty grim prediction of the world a couple of years from now as AI usage increases.


> Personally, in my early 40s, I feel my brain is back in 20s.

Software like GPT should be a tool, not a crutch. There is a lot that people can do to maintain their mental acuity over the decades, and I know first-hand the difference between doing it and not doing it. Those who do nothing, and especially those that increasingly lean on crutches, soon learn just how much worse it can get. Even a genetically lucky brain can be squandered.

Watch the Huberman Lab podcast and ask yourself if you'd like the kind of mental acuity that 48yo Dr Andrew Huberman effortlessly demonstrates every single time. His worst off days are better than most people in their prime. That's what is possible when people actually research and apply tools to improve their own health and abilities directly. There are a lot of rabbit holes this can go down, but just acknowledging it as an axis worth working on is the most important first step.


This is like that episode of Billions where they think they find the Limitless drug but it just makes them say nonsense.


Sounds like run-of-the-mill obsessive behaviors to me. My advice to you (and myself) is always enforced moderation. Whether food, alcohol, social media, etc.

I personally put limits on myself... no eating after ~9 PM, no internet after 11 PM via router block, etc. The latter I can't undo until the next morning. Works well enough.


> Calling myself a prompt-engineer sounds weird.

I'll be honest: someone calling themselves this sounds to me like someone with no self-respect.

> The only difference is that I can start trusting a human to improve, but I cannot expect ChatGPT to do so. Not that it is incapable, but because it is restricted by OpenAI.

Sounds like a good reason to hire juniors.


There is a lot of, let's say, whining in different communities that ChatGPT has deteriorated. This may or may not be the case; I haven't noticed, but it could be. However, Copilot is really improving quite rapidly for us; at this point I expect it to write quite large swaths of boring code for most of the team. And it does, often with an element of "perceived magic," as in: I start typing something I have only in my head, and it just gives me the code I was thinking of writing. Of course it gets that from context or clues, but even my colleagues with 3-4 years of experience don't quite do that, and definitely not as fast.

It's great what small teams can do now because of this, and the people who don't use / don't want to use / cannot use it will fall behind quickly.


Put the burnout aside; it's not the time for such emotions. You are feeding data into GPT-4 and making it better, but you are also losing everything else in the process: your business. OpenAI now has access to how your business works, what you need, etc. It's only a matter of time before OpenAI competes against you, and they'll win.

The only solution is to rely on open-source models. The way to develop this market is to use it and pour money into it. I think every person, company, corporation, or even government should chip in to this effort. If you pay $10 to OpenAI, you should pay $50 to a bunch of open-source alternatives of your choosing.

This is the only way to make the world much less worse than what it is about to become.


>I tried to put a schedule to use it - but when everybody has access to this tech, I have a genuine fear of missing out.

The biggest fear should be missing out on life. Not some novel tech.


I am a lawyer who uses ChatGPT to assist in drafting documents. My clients are familiar with it too, but at the end of the day clients are lazy and would rather rely on the lawyer to draft something as simple as a cover letter or an HR form. For those clients who pre-generate contracts and legal documents through ChatGPT, the results are astoundingly bad, such that I still have to overhaul their outputs.

Laziness and human incompetence -- these are the reasons why I exist.


My “burnout” is more like demotivation. I see a program that can do a lot or most of what I can do, and I’m like “what’s the point?” I could spend hours doing what ChatGPT can churn out in seconds… but typing out what I want ChatGPT to write, while technically more efficient than writing the code myself, is dull.

I’m exploring alternative careers and kind of want programming to just be a hobby.


I have literally no idea what you're doing with ChatGPT. Like, what can it do right now except generate spam?


> ChatGPT has the habit of throwing new knowledge back at you. Google does that too, but this feels 10x of Google. Sometimes it is overwhelming. Good thing is we learn a lot, bad thing is that if often slows down our decision making.

I agree with this one. I find that spurts of work with ChatGPT can quickly exhaust my brain.


An interesting video on the GDC YouTube channel [1] about generative AI: the speaker argues that generative AI doesn't raise the bar; rather, it raises the floor.

[1] https://www.youtube.com/watch?v=fOqLuWml0UY


I need you to share your typical day in excruciating detail, please.


Our productivity is increased by ChatGPT, but our workload increases as well. We have to work harder, like our peers and competitors do.


Do we know of a company that makes money by _using_ LLMs for _something useful_, rather than by selling shovels during the gold rush?


> ChatGPT makes us very productive. Personally, in my early 40s, I feel my brain is back in 20s.

Is it supposed to be a good thing?


Ask HN:


Maybe buy a juicer as well? I hear that is a pretty amazing technology, you just set it and forget it.



