
If you are a Senior Developer who is comfortable giving a Junior tips and then guiding them to a fix (or just stepping in for a brief moment and writing in where they missed something), this is for you. I'm hearing from Senior devs all over, though, that Junior developers are just garbage at it. They produce slow, insecure, or just outright awful code with it, and then they PR code they don't even understand.

For me the sweet spot is boilerplate (give me a blueprint of a class based on a description), or translating JSON into a class or some other format. Also useful: "what's wrong with this code? How would a Staff Level Engineer write it?" I've found bugs before hitting debug by asking what's wrong with the code I just pounded out on my keyboard by hand.
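To make the JSON-to-class case concrete, here's a rough sketch of the kind of boilerplate I mean (the User shape and its fields are invented for the example, not from any real payload):

  import json
  from dataclasses import dataclass

  # Hypothetical payload shape, purely for illustration.
  @dataclass
  class User:
      id: int
      name: str
      email: str

      @classmethod
      def from_json(cls, raw: str) -> "User":
          data = json.loads(raw)
          return cls(id=data["id"], name=data["name"], email=data["email"])

  user = User.from_json('{"id": 1, "name": "Ada", "email": "ada@example.com"}')

Tedious to type, trivial to describe. That's exactly the gap an LLM fills well.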

Yes, I can confirm that as a senior developer who has had to spend huge amounts of time reviewing junior code from off-shore contractors, with very detailed and explicit instructions, dabbling in agentic LLM coding tools like Claude Code has felt like a gift from heaven.

I also have concerns about said junior developers wielding such tools, because yes, without being able to supply the right kind of context and being able to understand the difference between a good solution and a bad solution, they will produce tons of awful, but technically working code.


Totally agree with the off-shore component of this. I'm already going to have to break a task down into clear detail and resolve any anticipated blocker myself upfront to avoid multi-timezone multi-day back and forth.

Now that I'm practiced at that, the off-shored part is no longer valuable.


The unemployment in India is going to be catastrophic. Geopolitical.

Many companies that see themselves as non-technical at the core prefer building solutions with an army of intermediate developers that are hot swappable. Having highly skilled developers is a risk for them.

Unlikely. Microsoft had layoffs everywhere except India; there they keep hiring more. As long as they can keep upskilling themselves while still being much cheaper than US workers, they won't fear unemployment.

Just yesterday I saw on X a video of a Miami hotel where the check-in procedure was via a video call to a receptionist in India.


Six months from now, that singular worker, if they are still employed, will manage a high number of receptionist avatars. And then they themselves will be replaced. It will still lead to a massive collapse in the labor market, and with all of that excess labor, those who keep their jobs will be overworked while still seeing flat to decreasing wages.

You overestimate how well chatbots can replace people.

Most people underestimate how strongly capital wants to displace labor, even if the outcomes are demonstrably worse. Especially in a captured scenario like hotel reception: you have already booked, you aren't going anywhere else.

If that were true, automatic coffee vending machines would have made baristas unemployed a long time ago.

You know senior developers can also be off-shored, right?

Blowing away the junior -> senior pipeline would, on average, hit every country the same.

Though it raises an interesting point: if a country like India or China did make the investment in hiring, paying, and mentoring junior people but e.g. the US didn't, then you could see a massive shift in the global center of gravity around software expertise in 10 years (plus or minus).

Someone is going to be the best at planning for and investing in the future on this, and someone is going to maximally wishful-think / short-term-think it, and seductive-but-not-really-there vibe coding is probably going to be a major pivot point.


This is such an important point. Not sure about India, which is still very much market-forces driven, but China can just force its employers to do whatever is of strategic importance. That's long gone in the US. Market forces here will only ever optimize for short-term gain, shooting ourselves in the chest.

This.

I've gotten myself into a PILE of trouble when trying to use LLMs with languages/technologies I am unfamiliar with (React, don't judge me).

But with something that I am familiar with (say Go, or Python), LLMs have improved my velocity massively, with the caveat that I have had to explicitly tell the LLM when it is producing something that I know I don't want (me arguing with an LLM was an experience too!).


Ah mate, I can't relate more to the offshore component. I had a very sad experience where I recently had to let go of an offshore team because they were providing devs that were essentially 'junior with Copilot' but labelled as 'senior'.

Time and time again I would find telltale signs of LLM output dumped into PRs and then claimed as their own. Not a problem in itself, but the code didn't do what the detailed ticket asked and introduced other bugs as a result.

It ultimately became a choice between 'go through the hassle of writing a detailed brief just for it to be put into Copilot verbatim, then go through the hassle of reviewing the result and explaining the issues back to the offshore dev' or 'brief Claude directly'.

I hate to say it but from a business perspective the latter won outright. It tears me up as it goes against my morality.


Then why did you do it?

Why does it go against your morality? Sounds like a totally rational business decision, only affecting a sub-par partner

I know what you mean it just feels a bit inhumane to me. Sort of like defining a value for a living being and then determining that they fell beneath said value.

Yeah well, at some point someone higher up without scruples about such moral issues will make that decision for you...

> I'm hearing from Senior devs all over, though, that Junior developers are just garbage at it. They produce slow, insecure, or just outright awful code with it, and then they PR code they don't even understand.

If this is the case then we had better have fully AI-generated code within the next 10 years, since those "juniors" will remain atrophied juniors forever and the old timers will be checking in with the big clock in the sky. IF we, as a field, believe that this cannot possibly happen, then we are making a huge mistake leaning on a tool that requires "deep [orthogonal] experience" to operate properly.


You can't atrophy if you never grew in the first place. The juniors will be stunted. It's the seniors who will become atrophied.

As for whether it's a mistake, isn't that just the way of things these days? The current world is about extracting as much as you can while you're still here. Look around. Nobody is building for the future. There are a few niche groups that talk about it, but nobody is really doing it. It's just take, take, take.

This just seems more of the same, but we're speeding up. We started by extracting fossil fuels deposited over millions of years, then extracting resources and technology from civilisations deposited over millennia, then from the Victorians deposited only a century or two ago, and now it's software deposited over only mere decades. Someone is going to be left holding the bag, we just hope it's not us. Meanwhile most of the population aren't even thinking about it, and most of the fraction that do think are dreaming that technology is going to save us before it's payback time.


Sad, but a very true observation.

IT education and computer science (at least part of it) will need a stronger focus on software engineering and software architecture skills to teach developers how to be in control of an AI dev tool.

The fastest way is via struggle. Learn to do it yourself first. Understand WHY it does not work. What's good code? What's bad code? What are conventions?

There are no shortcuts - you are not an accountant just because you have a calculator.


With that mindset you don't have to go to school at all; you could learn everything through struggle... Ideally it's a bit of both: you need theory and experience to succeed.

And talent.

Brains are not computers, and we don't learn by being given abstract rules. We also don't learn nearly as well from classroom teaching as we do from doing things IRL for a real purpose; the brain always knows the difference, and knows that the (real, non-artificially created) stakes are low in a teaching environment.

That's also the huge difference between AI and brains: AI does not work on the real world but on our communication (and even that is limited to text, missing all the nuance that face-to-face communication includes). The brain works based on sensor data from the real world. The communication method, language, is a very limited add-on on top of how the brain really works. We don't think in language; even some abstract language-based thinking, e.g. when doing formal math, requires a lot of concentration and effort and still uses a lot of "under the hood" intuition.

That is why, even with years of learning the same curriculum, we still need to make a significant effort for every single concrete example to "get everyone on the same page", creating compatible internal models under the hood. Everybody's internal model of even simple things is slightly different, depending on what brain they brought to learning and what exactly they learned, where even things like social classroom interactions went into how the connections were formed. Only on top of a huge amount of effort can we then use language to communicate in the abstract, and even then, when we leave the central corridor of ideas, people will start arguing forever about definitions. Even when the written text is the same, the internal model is different for every person.

As someone who took neuroscience, I found this surprisingly well written:

"The brain doesn't like to abstract unless you make it"

http://howthebrainworks.science/how_the_brain_works_/the_bra...

> This resource, prepared by members of the University of London Centre for Educational Neuroscience (CEN), gives a brief overview of how the brain works for a general audience. It is based on the most recent research. It aims to give a gist of the brain’s principles of function, covering the brain’s evolutionary origin, how it develops, and how it copes in the modern world.

The best way to learn is to do things IRL that matter. School is a compromise and not really all that great. People motivated by actual need often can learn things that take years in school with middling results significantly faster and with better and deeper results.


Yeah. The only, and I mean only, non-social/networking advantages of universities stem from forced learning/reasoning about complex theoretical concepts that form the requisite base knowledge for learning the practical requirements of your field on the job.

Trade schools and certificate programs are designed to churn out people with journeyman-level skills in some field. They repeatedly drill you on the practical day-in-day-out requirements, tasks, troubleshooting tools and techniques, etc. that you need to walk up to a job site and be useful. The fields generally have a predictable enough set of technical problems to deal with that a deep theoretical exploration is unnecessary. This is just as true for electricians and auto mechanics as it is for people doing limited but logistically complex technical work, like orchestrating a big fleet of windows workstations with all the Microsoft enterprise tools.

In software development and lots of other fields that require grappling with complex theoretical stuff, you really need both the practical and the theoretical background to be productive. That would be a ridiculous undertaking for a school, and it’s why we have internships/externships/jr positions.

Between these tools letting the seniors in a department do all of the work, so companies don't have to invest in interns/juniors and there's no reliable entry point into the field, and the even bigger disconnect between what schools offer and the skills graduates need to compete, the industry has some rough days ahead, and a whole lot of people trying to get a foothold right now are screwed. I'm kind of surprised how little so many people in tech seem to care about the impending rough road for entry-level folks. I guess it's a combination of how little most higher-level developers have to interact with them, and the fact that everybody was tripping over themselves to hire developers when a lot of today's seniors joined the industry.


It's not a particularly moral way to think, but if you're currently mid level or senior, the junior dev pipeline being cut off will be beneficial to you personally in a few years' time.

Potentially very beneficial, if it turns out software engineers are still needed but nobody has been training them for half a decade.


It's clear that those who get to keep their jobs are harmed less, to some extent (though when you've got a glut of talent and few jobs, the only winners are employers, because salaries eventually tank). But frankly, that kind of intense greed and self-absorption used to be anathema to the American software industry. Now it looks a lot more like a bunch of private equity bros than a bunch of people who stood to make good money selling creative solutions to the world's problems. Even worse, the developers who built this business still think they're part of the in-club, too special and talented to get tossed out like a bag of moldy peaches. They're wrong, and it's sad to watch.

And that is the best thing about AI, it allows you to do and try so much more in the limited time you have. If you have an idea, build it with AI, test it, see where it breaks. AI is going to be a big boost for education, because it allows for so much more experimentation and hands-on.

By using AI, you learn how to use AI, not necessarily how to build architecturally sound and maintainable software, so being able to do much more in a limited amount of time will not necessarily make you a more knowledgeable programmer, or at least that knowledge will most likely only be surface-level pattern recognition. It still needs to be combined with hands-on building your own thing, to truly understand the nuts and bolts of such projects.

If you end up with a working project where you understand all the moving parts, I think AI is great for learning, and the ultimate proof of whether the learning was successful is whether you can actually build (and ship) things.

Human teachers are good to have as well, but I remember they were of limited use to me when I was learning programming without AI. So many concepts they tried to teach me without having understood them themselves first. AI would likely have helped me get better answers than "because that is how you do it" when asking why something is done a certain way.

So obviously I would have preferred competent teachers all along, and now competent teachers with unlimited time instead of faulty AIs for the students; but in reality human time is limited and humans are flawed as well. So I don't see the doomsday expectations for the new generation of programmers. The ultimate goal, building something that works to spec, did not change, and horrible unmaintainable code was also shipped 20 years ago.


I don't agree. To me, switching from hand-coded source code to AI-coded source code is like going from a hand saw to an electric saw for your woodworking projects. In the end you still have to know woodworking, but you experiment much more, so you learn more.

Or maybe it's more like going from analog photography to digital photography. Whatever it is, you get more programming done.

Just like when you go from assembly to C to a memory-managed language like Java. I did some 6502 and 68000 assembly over 35 years ago; now nobody knows assembly.


> to me

Key words there. To you, it's an electric saw because you already know how to program, and that's the other person's point; it doesn't necessarily empower people to build software. You? Yes. But generally, when you hand the public an electric saw and say "have at it, build stuff", you end up with a lot of lost appendages.

Sadly, in this case the "lost appendages" are going to be man-decades of time spent undoing all the landmines vibecoders are going to plant around the digital commons. Which means AI even fails as a metaphorical "electric saw", because a good electric saw should strike fear into the user by promising mortal damage through misuse. AI has no such misuse deterrent, so people will freely misuse it until consequences swing back wildly, and the blast radius is community-scale.

> more like going from analog photography to digital photography. Whatever it is, you get more programming done.

By volume, the primary outcome of digital photography has been a deluge of pointless photographs to the extent we've had to invent new words to categorize them. "selfies". "sexts". "foodstagramming". Sure, AI will increase the actual programming being done, the same way digital photography gave us more photography art. But much more than that, AI will bring the equivalent of "foodstagramming" but for programs. Kind of like how the Apple App Store brought us some good apps, but at the same time 9 bajillion travel guides and flashlight apps. When you lower the bar you also open the flood gates.


Your last point is also something that happened when the big game engines such as Unity became free to use. All of a sudden, Steam Greenlight was getting flooded with gems such as "potato peeling simulator" et al. I suppose it is just a natural side effect of making things more accessible.

Being able to do it quicker and cheaper will often ensure more people will learn the basics. Electrical tools open up woodworking to more people, same with digital photography, more people take the effort to learn the basics. There will also be many more people making rubbish, but is that really a problem?

With ai it’s cheap and fast for a professional to ask the AI: what does this rubbish software do, and can you create me a more robust version following these guidelines.


> With ai it’s cheap and fast for a professional to ask the AI: what does this rubbish software do, and can you create me a more robust version following these guidelines.

This falls apart today with sufficiently complex software and also seems to require source availability (or perfect specifications).

One of the things I keep an eye out for in terms of "have LLMs actually cracked large-product complexity yet" (vs human-overseen patches or greenfield demos) is exactly that sort of re-implementation-and-improvement you talk about. Like a greenfield Photoshop substitute.


> Sadly, in this case the "lost appendages" are going to be man-decades of time spent undoing all the landmines vibecoders are going to plant around the digital commons.

Aren't you being overly optimistic that these would even get traction?


Pessimistic, but yeah. It's just my whole life has been a string of the absolute worst ideas being implemented at scale, so I don't see why this would buck the trend.

> By using AI, you learn how to use AI, not necessarily how to build architecturally sound and maintainable software

> will not necessarily make you a more knowledgeable programmer

I think we'd better start separating "building software" from programming, because the act of programming is going to continue to get less and less valuable.

I would argue that programming was overvalued for a while even before AI, and the industry believes its own hype, with a healthy dose of elitism mixed in.

But now AI is removing the facade, and it's showing that the idea and the architecture are actually the important part, not the coding of it.


I find it super ironic that you talk about "the industry believing its own hype" and then continue with a love letter for AI.

Ok. But most developers aren't building AI tech. Instead, they're coding a SPA or CRUD app or something else that's been done 10000 times before, but just doing it slightly differently. That's exactly why LLMs are so good at this kind of (programming) work.

I would say most people are dealing with tickets and meetings about the tickets more than they are actually spending time in their editor. The work may be similar, but that 1 percent difference needs to be nailed down right, as that's where the business lifeline lies.

Also, not all dev jobs are web tech or AI tech.


Unfortunately, education everywhere is getting really hurt by access to AI, both from students who are enabled to not do their homework, and from teacher review/feedback being replaced by chatbots.

In Germany, software engineering is a trade: you go to trade school for three years while working at a company in parallel. I don't think that IT education and computer science in universities should have a stronger focus on SE, as universities are basically a trade school for being a researcher.

While that path exists, the vast majority of developers don't go through that path.

Yes, but it is more of a cultural thing than anything else. Studying computer science to be a software developer* is like studying mechanical engineering to be a machine operator.

* except if you are developing complicated algorithms or do numeric stuff. However, I believe that the majority of developers will never be in such a situation.


A software degree or a CS degree with a more applied focus will teach you way better than the trade schools will. It'd be nice if that weren't the case, but from all I've seen it is.

So you end up in that weird spot where it would work very well for someone with a strong focus on self-learning and a company investing in their side of the training, but at that point you could almost skip the formal part completely and just start directly, assuming you have some self-taught base. Or work part-time while studying on the side, and get the more useful degree that way. Plenty of places will hire promising first-year uni students.


Yeah I noticed the issue with more Junior developers right away. Some developers, Junior or not, have yet to be exposed to environments where their PRs are put under HEAVY scrutiny. They are used to loosey-goosey and unfortunately they are not prepared to put LLM changes under the level of scrutiny they require.

The worst is getting, even smallish, PRs with a bunch of changes that look extraneous or otherwise off. After asking questions the code changes without the questions being answered and likely with a new set of problems. I swear I've been prompting an LLM through an engineer/PR middleman :(


Our offshore devs keep doing this and it drives me nuts. No answers to my question, completely different code gets pushed.

  > No answers to my question, completely different code gets pushed.
At what point does that cost become higher than the benefit?

It does, but most execs don't care about the long term; they want that perf bonus before they leave in 3-4 years.

That is how you get Oracle source code. It broke my illusions after entering real life big company coding after university, many years ago. It also led to this gem of an HN comment: https://news.ycombinator.com/item?id=18442637

Yes, this!

A couple of weeks ago, I had a little downtime and thought about a new algorithm I wanted to implement. In my head it seemed simple enough that 1) I thought the solution was already known, and 2) it would be fairly easy to write. So I asked Claude to "write me a python function that does Foo". I spent a whole morning going back and forth, getting crap and nothing at all like what I wanted.

I don't know what inspired me, but I just started to pretend that I was talking to one of my junior engineers. I first asked for a much simpler function that was on the way to what I wanted (well, technically, it was the mathematical inverse of what I wanted), then I asked it to modify that to add one transform, then another, and then another. And finally, once the function was doing what I wanted, I asked it to write me the inverse function. And it got it right.

What was cool about it, is that it turned out to be more complex linear algebra and edge cases than I originally thought, and it would have been weeks for me to figure all of that out. But using it as a research tool and junior engineer in one was the key.
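For a sense of what that buildup looks like, here's a minimal sketch; the scale/rotate/translate steps are hypothetical stand-ins, not my actual function. If each prompt adds one transform as a homogeneous matrix, the final "now write me the inverse" ask is well-defined: it's just the matrix inverse of the composition.

  import numpy as np

  # Hypothetical stand-in transforms (my real function was different):
  # each step is a 3x3 homogeneous matrix, so composing them is matmul.
  def scale(sx, sy):
      return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

  def rotate(theta):
      c, s = np.cos(theta), np.sin(theta)
      return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

  def translate(tx, ty):
      return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

  # Built up one transform at a time, as in the prompts...
  forward = translate(5, -2) @ rotate(np.pi / 6) @ scale(2, 3)
  # ...then the final ask: the inverse of the whole pipeline.
  inverse = np.linalg.inv(forward)

  pt = np.array([1.0, 1.0, 1.0])  # a 2D point in homogeneous coords
  assert np.allclose(inverse @ (forward @ pt), pt)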

I think if we go down the "vibe coding" route, we will end up with hordes of juniors who don't understand anything, and the stuff they produce with AI will be garbage and brittle. But using AI as a tool is starting to feel more compelling to me.


The LLM will never admit it doesn't have a clue what's going on, but over time you develop a sense of when it's onto something and when it's trapped in a loop of plausible-sounding nonsense.

Edit: Also, it's funny how often you can get it to improve its output by just saying "this looks kind of bad for x reason, there must be a way to make it better"


I have experimented with instructing CC to doubt itself greatly and presume it is not validating anything properly.

It caused it to throw out good ideas for validation and working code.

I want to believe there is some sweet spot.

The constant "Aha!" type responses, followed by self-validating prose that the answer is at hand or within reach, can be intoxicating and cannot be trusted.

The product also seems to be in a constant flux of tuning, where some sessions result in great progress, and in others the AI seems as if it is deliberately trying to steer you into traffic.

Anthropic has alluded to this being the result of load. They mentioned in their memo about new limits for Max users that abuse of the subscription tiers resulted in subpar product experiences. It's possible they meant response times and the overloaded 500 responses or lower-than-normal TPS, but there are many anecdotal accounts of CC suddenly having a bad day from "longtime" users, including myself.

I don’t understand how load would impact the actual model’s performance.

It seems like only load based impacts on individual session context would result in degraded outputs. But I know nothing of serving LLM at scale.

Can anyone explain how high load might result in an unchanged product performing objectively worse?


> we will end up with hordes of juniors who don't understand anything and the stuff they produce

I’ve spent a lot of my career cleaning up this kind of nonsense so the future looks bright


An observation. If we stipulate that it is true that a 'senior developer' benefits from Claude Code but a junior developer does not, then I'm wondering if that creates a gap: you have a bunch of newly minted '10x' engineers doing the work that a bunch of junior devs used to help with, and now you're not training any new junior devs because they are unemployable. Is that correct?

It already was the case, wasn't it, that you could either get one senior dev to build your thing in a week, or give them a team of juniors and it would take the whole team 4 weeks and be worse.

Yet somehow companies continued to opt for the second approach. Something to do with status from headcount?


>> Something to do with status from headcount?

And usually projected as ensuring bus factor > 1


Yes, there are companies that opt for broken organizations for a variety of reasons. The observation, though, is this: does this lead to a world where the 'minimum' programmer is what we consider today to be a 'Senior Dev'? It echoes the transition of machinists to operators of CAD/CAM workstations running machining centers, rather than hands on the dials of a mill or lathe. It certainly seems like entering the field through a "coder camp" might no longer be practical.

It'll be interesting to see, in a decade when a whole cohort of juniors didn't get trained, whether LLMs will be able to do the whole job. I'm guessing a lot of companies are willing to bet on yes.

The issue is there's a kind of prisoner's dilemma going on: some people can probably see that there's a serious risk of still needing software engineers in 10 years' time and there not being enough, because nobody is training juniors in 2025.

However, noticing this doesn't help, because if you invest in training juniors in 2025 but nobody else does, someone else can just recruit them in 2030 and benefit from your investment.


Yes, exactly. If workers can just up and leave and treat the job transactionally, that creates a race to the bottom. Workers have to train themselves, then.

As long as LLMs aren't the 'Cold Fusion' of this cycle, sure. :-)

“Wasting” effort on juniors is where seniors come from. So that first approach is only valid at a sole proprietorship, at an early stage startup, or in an emergency.

n=1, but my experience is that the ratio of what I'd class "senior" devs (per the example given) to everyone else is comfortably 10:1.

Do you mean that for every 11 devs, 10 of them are "senior" as per the example? Or that only 1 is?

1 senior to 10 everyone else. Sorry that wasn't super clear.

I'm getting my money's worth having Claude write tools. We've reached the dream where I can vibe out some one-off software, and it's great; today I made two different (shitty but usable!) GUI programs in seconds that let me visually describe some test data. The alternative was probably half an hour of putting something together, if my first idea was good. Then I deleted them and moved on.

It still writes insane things all the time, but I find it really helpful for spitting out single-use stuff and for brainstorming. I try to get it to perform tasks I don't know how to accomplish (e.g. computer vision experiments), and it never really works out in the end, but I often learn something, and I'm still very happy with my subscription.


I've also found it good at catching mistakes and helping write commit messages.

"Review the top-most commit. Did I make any mistakes? Did I leave anything out of the commit message?"

Sometimes I let it write the message for me:

"Write a new commit message for the current commit."

I've had to tell it how to write commit messages though. It likes to offer subjective opinions, use superlatives, and guess at why something was done. I've had to tell it to cut that out: "Summarize what has changed. Be concise but thorough. Avoid adjectives and superlatives. Use imperative mood."


This is insane to me.

Review your own code. Understand why you made the changes. And then clearly describe why you made them. If you can't do that yourself, I think that's a huge gap in your own skills.

Making something else do it means you don't internalize the changes that you made.


Your comment is not a fair interpretation of what I wrote.

For the record, I write better and more detailed commit messages than almost anyone I know, across a decades-long career[^0,^1,^2,^3,^4,^5]. But I'm not immune to making mistakes, everyone can use an editor, and everyone runs out of mental energy sometimes. Unfortunately, I find it hard to get decent PR reviews from my colleagues at work.

So yeah, I've started using Claude Code to help review my own commits. That doesn't mean I don't understand my changes or that I don't know why I made them. And CC is good at banging out a first draft of a commit message. It's also good at catching tiny logic errors that slip through tests and human review. Surprisingly good. You should try it.

I have plenty of criticisms for CC too. I'm not sure it's actually saving me any time. I've spent the last two weeks working 10 hour days with it. For some things it shines. For other things, I would've been better off writing the code from scratch myself, something I've had to do maybe 40% of the time now.

[^0]: https://seclists.org/bugtraq/1998/Jul/172

[^1]: https://github.com/git/git/commit/441adf0ccf571a9fe15658fdfc...

[^2]: https://github.com/git/git/commit/cacfc09ba82bfc6b0e1c047247...

[^3]: https://github.com/fastlane/fastlane/pull/21644

[^4]: https://github.com/CocoaPods/Core/pull/741

[^5]: None of these are my best examples, just the ones I found quickly. Most of my commit messages are obviously locked away by my employer. Somewhere in the git history is a paragraphs-long commit message from Jeff King (peff) explaining a one-line diff. That's probably my favorite commit message of all time. But I also know that at work I've got a message somewhere explaining a single-character diff.


What I can recommend is to tell it that all documentation, readmes, and PR descriptions should be "tight, no purple prose, no emojis". That cuts everything down nicely to to-the-point docs without GPT-isms and without the emoji storm that makes it look like yet another frontend framework README.

How long are your commit messages if you still come out ahead after typing all of that prompt?

My commits' description part, if warranted, is about the reason for the changes, not the specifics of the solution. It's a little memo to the person reading the diff, not a long monograph. And the diff is usually small.

This is a good call! Do you have Claude use atomic commits, or do you manually copy/paste the output?

Save your summary instructions in a CLAUDE.md.


I'll tell it to commit or amend, depending on the situation. Then I'll open the message in my editor and revise what it wrote.

Can also confirm. Almost any output from Claude Code needs my careful input for corrections, which you can only spot and provide if you have experience. There is no way a junior is able to command these tools, because the main competency for using them correctly is the ability to guide and teach others in software development, which by definition is only possible if you have senior experience in this field. The sycophancy of these models will outright damage the skill progression of juniors, but on the other hand there is no way to not use them. So we are in a state where the future seems really uncertain for most of us.

I find the "killer app" right now is anything where you need to integrate information you don't already have in your brain: a new language or framework, a third-party API, etc. Something straightforward but foreign, and well documented. You'll save so much time, because Claude has already read the docs.



