Hacker News | greybox's comments

Anyone who uses LLMs should suck on my ballsack


You're a principal engineer who doesn't see the value in training juniors ...


I did not say that I don't see the value in training juniors. I said that I don't have a need for them anymore. I can teach Claude in 1 API call what takes a day to walk a junior through.

Furthermore, I think we are going to find less and less work for juniors to do, because seniors are blasting through code at a faster and faster pace now.

I'm not the only one saying that the entry-level market is already getting trashed...
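To make the "teach Claude in one API call" claim above concrete: in practice it usually means packing your team's conventions into the system prompt of a single request. The sketch below only builds the request payload (no network call); the payload shape follows the Anthropic Messages API, but the model id and the conventions text are placeholders I invented for illustration.

```python
# Hypothetical team conventions -- the kind of thing you'd spend a day
# walking a junior through, condensed into a system prompt.
TEAM_CONVENTIONS = """\
- All database access goes through the repository layer.
- Errors are returned as values, never raised across module boundaries.
"""

def build_request(task: str) -> dict:
    """Assemble a Messages-API-shaped payload that 'onboards' the model
    via the system prompt, then hands it a concrete task."""
    return {
        "model": "claude-sonnet-latest",  # placeholder model id
        "max_tokens": 4096,
        "system": "You are a senior developer on our team.\n" + TEAM_CONVENTIONS,
        "messages": [{"role": "user", "content": task}],
    }

req = build_request("Add pagination to the /users endpoint.")
```

Whether one prompt really substitutes for mentoring is exactly what's in dispute in this thread; the sketch just shows the mechanism being described.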


I don't think that's what OP is saying at all.

There's a reality to contend with here. We all know that software developers have been coming out of school with decidedly substandard skills (and I am being very kind). In that context, the value they might add to an organization has almost always been negative. Meaning that, without substantial training and coaching --which costs time, money, and market opportunity-- they can be detrimental to a business.

Before LLMs, you had no options available. With the advent of capable AI coding tools, the contrast between hiring a person who needs hand-holding and significant training and just using AI is stark, and will be nothing less than massive with the passage of time.

Simply put, software development teams who do not embrace a workflow that integrates AI will not be able to compete with those who do. This is a business forcing function. It has nothing to do with not being able to or not wanting to train newcomers (or not seeing value in their training).

People wanting to enter the software development field in the future (which is here now) will likely have to demonstrate a solid software development baseline and equally solid AI co-working capabilities. In other words, everyone will need to be a 5x or 10x developer. AI alone cannot make you that today. You have to know what you are doing.

I mean, I have seen fresh university CS graduates who cannot design a class hierarchy if their life depended on it. One candidate told me that the only data structure he learned in school was linked lists (don't know how that's possible). Pointers? In a world dominated by Python and the like, newbies have no clue what's going on in the machine. Etc.

My conclusion is that schools are finally going to be forced to do a better job. It is amazing to see just how many CS programs are horrible. Sure, the modules/classes they take have the correct titles. What and how they teach is a different matter.

Here's an example:

I'll omit the school name because I just don't want to be the source of (well-deserved, I might add) hatred. When I interviewed someone who graduated from this school, I came to learn that a massive portion of their curriculum is taught using JavaScript and the p5.js library. This guy had ZERO Linux skills --never saw it in school. His OOP class devoted the entire semester to learning the JUCE library... and nobody walked out of that class knowing how to design object hierarchies, inheritance, polymorphism, etc.

Again, in the context of what education produces as computer scientists, yes, without a doubt, AI will replace them in a microsecond. No doubt about it at all.

Going back to the business argument. There is a parallel:

Companies A, B and C were manufacturing products in, say, Europe. Company A, a long time ago, decides they are brilliant and moves production to China. They can lower their list price, make more money and grab market share from their competitors.

Company B, a year later, having lost 25% of their market share to company A due to pricing pressure, decides to move production to China. To gain market share, they undercut Company A. They have no choice on the matter; they are not competitive.

A year later A and B, having engaged in a price war for market share, are now selling their products at half the original list price (before A went to China). They are also making far less money per unit sold.

Company C now has a decision to make. They lost a significant portion of market share to A and B. Either they exit the market and close the company or follow suit and move production to China.

At this point the only company one could suggest acted based on greed was A during the initial outsourcing push. All decisions after that moment in time were about market survival in an environment caused by the original move.

Company C decides to move production to China. And, of course, wanting to regain market share, they drop their prices. Now A, B and C are in a price war until some form of equilibrium is reached. The market price for the products they sell is now one quarter what it was before A moved to China. They are making money, but it is a lot tighter than it used to be. All three organizations had serious reorganizations and reductions in the labor force.

The AI transition will follow exactly this mechanism. Some companies will be first movers and reap short-term benefits of using AI to various extents. Others will be forced into adoption just to remain competitive. At the limit, companies will integrate AI into every segment of the organization. It will be a do or die scenario.

Universities will have to graduate candidates who will be able to add value in this reality.

Job seekers will have to be excellent candidates in this context, not the status quo ante context.


This sounds like “You are a manager who doesn’t see the value in training typists” or “You are a refrigerator seller who doesn’t see the value in training icemen.”


This has got to be a massive factor.

What shocked me about the US when I went was how much Pepto-Bismol people chugged down. There was not one meal in my one-week stay there that I could digest without issue.


Annual sales of Pepto-Bismol look to be well under $0.50 per person, so while the American diet and food quality are appallingly worse than Europe's, I suspect your one week of stomach upset is not a great source from which to extrapolate.


Fair enough, maybe it was just the amount and variety of Pepto-Bismol products that I noticed were for sale everywhere. For example, just in the hotel where I was staying, they had a bunch of chewable Pepto-Bismol gummy bears plus the ordinary bottles for sale.


> maybe it was just the amount and variety of Pepto-Bismol products that I noticed were for sale everywhere.

Just wait until you see the breakfast cereal aisle


Or the chips/pretzels aisle which is often different from the cookies aisle which is often different from the candy and chocolate aisle…


Eating food and drinking water in a strange place can upset your stomach even if the people who live there all the time are fine. This is a very well-known phenomenon.


This is why I cook at home a vast majority of the time. It also depends where in the country you are. You could easily eat healthy in LA, NYC, SF, Chicago, etc. but if you’re outside the major cities you can find it a lot harder.


This is cool, I always like to see local-first services on mobile.

All of these features are great, and it's really cool that I can have them run locally on my phone, but what I feel is really missing in the mobile app market right now is AI cross-app automation.

I'm not sure it's possible with iOS right now, though.

For instance: I went to France earlier this year, and before I went I saw a French vocabulary sheet that was just an image. I wanted to get the text from the image, format it into a CSV with English in column 1 and French in column 2, and then export it to Anki, which is also on my phone. AnkiDroid has this feature, and Gemini can definitely do the OCR, translation, and text formatting; however, I had to do all the awkward app switching and input myself! This is the bit I wanted automated.
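The middle step of that workflow (the formatting, after OCR and translation have produced word pairs) is mechanical enough to sketch. This is a minimal Python illustration, assuming the OCR/translation step has already yielded (English, French) pairs; the vocabulary here is made up, and AnkiDroid's plain-text importer accepts this kind of two-column CSV, one note per row.

```python
import csv
import io

def build_anki_csv(pairs):
    """Format (English, French) vocabulary pairs as two-column CSV text,
    ready to be imported into Anki/AnkiDroid as one note per row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(pairs)
    return buf.getvalue()

# Stand-in for the OCR + translation output described above:
vocab = [("the bread", "le pain"), ("the train station", "la gare")]
csv_text = build_anki_csv(vocab)
```

The missing piece the comment is really asking for is the OS-level glue that moves data between the camera, the model, and Anki without manual app switching; no amount of scripting on the user side currently replaces that on iOS.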


To my knowledge, that's actually possible on iOS, but it raises privacy issues around sharing data between different apps, plus the clipboard issue. Apple has been pretty restrictive about inter-app communication for good reasons, but it does limit these kinds of automated workflows.

As you mentioned with that situation, I think Apple and Google are better positioned to optimize the OS if we only do it on the local device. They have the deep system-level access that would make this kind of seamless automation work without the privacy and security trade-offs that third-party apps face.


All it can do is produce text; if you hook it up to the launch button, that's on you.


Modern "coding assistant" AIs already get to write code that would be deployed to prod.

This will only become more common as AIs become more capable of handling complex tasks autonomously.

If your game plan for AI safety was "lock the AI into a box and never ever give it any way to do anything dangerous", then I'm afraid that your plan has already failed completely and utterly.


If you use it for a critical system and something goes wrong, you're still responsible for the consequences.

Much like if I let my cat walk on my keyboard and it brings a server down.


And?

"Sure, we have a rogue AI that managed to steal millions from the company, backdoor all of our infrastructure, escape into who-knows-what compute cluster when it got caught, and is now waging guerilla warfare against our company over our so-called mistreatment of tiger shrimps. But hey, at least we know the name of the guy who gave that AI a prompt that lead to all of this!"


It seems like the answer is to not use it then.

That would be bad for all those investors though. It's your choice I guess.

Look, if you're evil number 57, you'd better not use the random number generator.


Good luck convincing everyone "to not use it" then.


It's not my job to convince anyone; all I have to do is be the only person who does their job reliably, and then watch the dollars roll in.


I'm very happy to finally see this happen. It's so dangerous to centralize our digital services in the United States.


But are they really not using Azure Europe?


Azure Europe is one Truth Social post removed from a shutdown.


That's a dramatic oversimplification. Azure Europe runs in EU datacenters under EU laws. Microsoft's EU Data Boundary limits access (even for US staff). It's not as fragile as one political post bringing the whole thing down.


Of course it is.

The IT of the ICJ on MS 365 got shut off... after a Truth Social post.

EU law means nothing when the employees are on the next plane to Washington, and when the funding, expertise, and infrastructure are tightly controlled by US entities, who react severely to the posts of the Commander in Chief. (As would I.)


Inaccurate.

The ICJ email issue was tied to a support contract suspension.

If the EU wants full independence, that’s a fair goal, but we should be clear about what's actually happened versus what feels like it could happen.


I sense we won't come to a consensus.

It's not what the EU wants at all. That assumption is the root of your arguments, but it is wrong.

The EU wasn't created to shaft America or anyone else.

In fact, the reason so many US services and companies were doing business in the EU was because the USA had a stellar reputation as an ally and as a society.

But at this point, after the ICJ, Greenland annexation, weapon kill switches, White House office ambushes, hostile tariffs, and all the other drama and threats and coercion, arrests of EU citizens, etc., it's a theme that the EU can no longer trust nor rely on the US. The US only cares for itself, not any friends and allies.

Put simply: the USA wouldn't host its federal websites on Alibaba Cloud. And the US isn't a trusted, reliable friend and ally anymore. As regrettable as that is, it means a pivot away from relying on anything US is necessary. And common sense. To anyone not drinking the Kool-Aid. ;)


I don't disagree that trust is at the core of this shift. Sovereignty efforts are a response to broader geopolitical dynamics, not just cloud tech choices. But framing it as the EU cutting ties out of betrayal or drama misses the point. It's about strategic independence. This is like how the US wouldn't outsource core infrastructure to Alibaba. That's not hostility. It's basic statecraft.


I appreciate your position, but I think it's mainly one in hindsight, 20/20 etc..

The reality is that the EU has had plans for strategic autonomy in case of necessity for a long time. But it hasn't enacted them, because the US was a trusted partner. And this has been the balance of things since the Second World War, so roughly 85 years.

The reality now is upside down: In most of the EU the US now has a reputation on par with Russia and China, and the theme is to enact strategic autonomy as soon as possible.

This is a tipping point, because up until now the US enjoyed a position of uncontested dominance, backed by a multiplier equal to its economic weight and global influence, etc. This is no longer self-evident, and at the end of the day this is due to decisions Americans made and which their children will also have to live with.

I genuinely wonder if this isn't the decade in which the US shot itself in the head and crippled itself for the next 85 years. For basically no reason, other than self-interested and self-enriching politics.

It's literally incredible how much hard work and effort was thrown out in the span of two years, which took hundreds of years to accumulate. And I don't think the second time round will be any easier or quicker.


He's talking about "LLM Utility companies going down and the world becoming dumber" as a sign of humanity's progress.

This, if anything, should be a huge red flag.


Replace with "Water Utility going down and the world becoming less sanitary", etc. Still a red flag?


You're making a leap of logic.

Before water sanitization technology we had no way of sanitizing water on a large scale.

Before LLMs, we could still write software. Arguably we were collectively better at it.


LLMs are general-purpose tools used for a great many tasks, most of them not related to writing code.


He lives in a GenAI bubble where everyone is self-congratulating about the usage of LLMs.

The reality is that there's not a single critical component anywhere that is built on LLMs. There's absolutely no reliance on models, and ChatGPT being down has absolutely no impact on anything besides teenagers not being able to cheat on their homework and LLM wrappers not being able to wrap.


Adults everywhere are using it to "cheat" at work, except there it's not cheating, it's celebrated and welcomed as a performance enhancement because results are the only thing that matter, and over time that will result in new expectations for productivity.

It's going to take a while for those new expectations to develop, and they won't develop evenly. Even today there's plenty of low-hanging fruit in the form of roles or businesses that aren't using what anyone here would identify as simple opportunities for automation, and the main benefit that accrues to the one guy in the office who knows how to cheat with Excel and VBA is that he gets to slack off most of the time. But there certainly are places where the people in charge expect more, and are quick to perceive when and how much that bar can be raised. They don't care if you're cheating, but you'll need to keep up with the people who are.


> The reality is that there's not a single critical component anywhere that is built on LLMs.

Remember that there are billion-dollar use cases where being correct is not important. For example: shopping recommendations, advertising, search results, image captioning, etc. All of these use cases have humans consuming the output, and LLMs can play a useful role as productivity boosters.


And none of those are crucial.

His point is that the world is RELIANT on GenAI. This isn't true.


The full quote from 7:40 in the video: "I think it's kind of fascinating to me that when the state-of-the-art LLMs go down, it's actually kind of like an intelligence brownout in the world. It's kind of like when the voltage is unreliable in the grid, and the planet just gets dumber. The more reliance we have on these models, which already is really dramatic and I think will continue to grow."

I don't think his point was that LLMs are as crucial as the power grid, or even close. He's just saying that he finds the comparison interesting, for whatever reason. If you find it stupid instead, that's okay.


I'm just saying that the statement "when the state-of-the-art LLMs go down, it's actually kind of like an intelligence brownout in the world" is entirely false.


All analogies are false, but some are useful.


Even an LLM could tell you that that's an unknowable thing, perhaps you should rely on them more.


Has a critical service that you use meaningfully changed to seemingly integrate non-deterministic "intelligence" into one of its critical paths in the past three years? I'd bet good money that the answer for literally everyone is no.

My company uses GenAI a lot in a lot of projects. Would it have some impact if all models suddenly stopped working? Sure. But the oncalls wouldn't even get paged.


Tesla FSD, Waymo are good examples.


Why was this flagged?


For simple, tedious, or rote tasks, I have templates bound to hotkeys in my IDE. They even come with configurable variable sections that you can fill in afterwards, or base on some highlighted code before hitting the hotkey. Also, it's free.
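For readers who haven't used them, the "configurable variable sections" being described work roughly like template substitution. Here's a minimal Python analogue using `string.Template`; the C-style loop snippet itself is invented for illustration, not taken from any particular IDE.

```python
from string import Template

# A rough analogue of an IDE live template: boilerplate with
# fillable variable sections ($var, $count, $body).
loop_snippet = Template(
    "for (size_t $var = 0; $var < $count; ++$var) {\n"
    "    $body\n"
    "}"
)

# The IDE would prompt for these values after the hotkey fires;
# here we fill them in directly.
expanded = loop_snippet.substitute(var="i", count="n", body="process(items[i]);")
```

Real IDE templates (e.g. live templates, snippets) add tab-stop navigation and context awareness on top of this, but the core expansion step is this simple and fully deterministic, which is the commenter's point.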


I trust ChatGPT and Gemini a lot less than Stack Overflow. On Stack Overflow I can see the context that the answer to the original question was given in. AI does not do this. I've asked ChatGPT questions about CMake, for instance, that it got subtly wrong; if I had not noticed this it would have cost me a lot of time.

