Hacker News

I think this hits at the heart of why you and so many people on HN hate AI.

You see yourselves as the disenfranchised proletariat of tech, crusading righteously against AI companies and myopic, trend-chasing managers, resentful of their apparent success at replacing your hard-earned skill with an API call.

It’s an emotional argument, born of tribalism. I’d find the many claims on this site that AI is all a big scam easier to believe if it weren’t so obvious that this resentment underlies your motivated reasoning. It is a big mirage of angst that causes people on here to clamor with perfunctory praise around every blog post claiming that AI companies are unprofitable, AI is useless, etc.

Think about why you believe the things you believe. Are you motivated by reason, or resentment?



Find a way to make sure workers, instead of bosses, get the value of AI labor, and the workers will like it better. If the result is "you do the same work but managers want everything in 20% of the time", why would anyone be happy?


I agree that if there are productivity gains, everyone should benefit, but that will only happen under systems and incentive structures that allow it. A manager's job is to increase revenue and cut costs: that's how they get their job, how they keep their job, and how they are promoted. People very rarely receive benefits beyond what the incentive structures they exist in allow.


> I agree that if there are productivity gains that everyone should benefit

And if they don't, then you'd understand the anger surely. You can't say "well obviously everybody should benefit" and then also scold the people who are mad that everybody isn't benefiting.


i’m not scolding anyone who is mad that not everyone is benefitting


What are you doing then?


I mean you certainly fooled us all on that front


And people don’t like this. Something being logical doesn’t mean people have to accept it.

Also, AI has been basically useless every time I tried it, except for converting some struct definitions across languages or similar tasks; it seems very unlikely that it would boost productivity by more than 10%, let alone 400%.
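(To be concrete, the kind of mechanical conversion that did work looks like turning a C struct declaration into a Python ctypes mirror. The struct and field names here are hypothetical, purely an illustration:)

```python
import ctypes

# Original C declaration (hypothetical example):
#   struct Point { int32_t x; int32_t y; double weight; };
# The LLM's job is the purely mechanical field-by-field translation:
class Point(ctypes.Structure):
    _fields_ = [
        ("x", ctypes.c_int32),
        ("y", ctypes.c_int32),
        ("weight", ctypes.c_double),
    ]

p = Point(x=1, y=2, weight=0.5)
print(p.x, p.y, p.weight)  # 1 2 0.5
```

It's exactly the kind of task with a definite right answer, which is where these tools do best.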


What AI coding tools/models have you been using?


Let me guess ... they're holding it wrong, and the model they're using is older than 20 minutes.


You’re assuming how I would respond before I even respond. Please let inquiries happen naturally without polluting the thread with meritless cynicism.


With all due respect, with a response like "What AI coding tools/models have you been using?" to a complaint that AI tools just don't seem to be effective, what difference does a reply to that even make? If your experience makes you believe that certain tools are particularly good--or particularly bad--for the tasks at hand, you can just volunteer those specifics.

FWIW, my own experiences with AI have ranged from mediocre to downright abysmal. And, no, I don't know which models the tools were using. I'm rather annoyed that it seems impossible to express a negative opinion about the value of AI without a thoroughly documented experiment, one that inevitably invites the response that obviously some parameter was chosen incorrectly. Meanwhile, the people claiming how good it is get to be all offended when someone asks them to maybe show their work a little bit.


Some people complain about AI but are using the free version of ChatGPT. Others are using the best models without a middleman system but still see faults, and I think it’s valuable to ask which domains they see no value from AI in. Too many people say “I tried AI and it didn’t work at all” without clarifying what models, what tools, what they asked it to do, etc. Without that context it’s hard to gauge any value judgment about AI.

It’s like saying “I drove a car and it was horrible, cars suck” without clarifying what car, the age, the make, how much experience that person had driving, etc. Of course it’s more difficult to provide specifics than to just say it was good or bad, but there is little value in claims that AI is altogether bad when you don’t offer any details about what it is specifically bad at and how.


> It’s like saying “I drove a car and it was horrible, cars suck” without clarifying what car, the age, the make, how much experience that person had driving, etc.

That's an interesting comparison. That kind of statement can reasonably be inferred to come from someone just learning to drive who doesn't like the experience. And if I were a motorhead trying to convert that person, my first questions wouldn't be those ones, interrogating their exact scenario to invalidate their results. Instead I'd ask what aspect of driving they don't like, to see if I could work out a fix that would meaningfully change their experience (and, not being a motorhead, the only thing I can think of is automatic versus manual transmission).

> there is little value in claims that AI is altogether bad when you don’t offer any details about what it is specifically bad at and how.

Also, do remember that this holds true when you s/bad/good/g.


We're still in the early days of LLMs; ChatGPT launched only three years ago. The difference details make is that without them, we don't know if someone's opinion is still relevant, given how fast things have moved since the original GPT-3.5 release of ChatGPT. If someone half-assed an attempt to use the tools a year ago, hasn't touched them since, and is still going around commenting about the number of R's in strawberry, then we can ignore them and move on, because they're just loudmouths who need everyone else to know they don't like AI. If someone makes an honest attempt and there's some shortcoming, then that can be noted, and the next version coming out of the AI companies can be improved.

But if all we have to go on is "I used it and it sucked" or "I used it and it was great", like, okay, good for you?


> With all due respect, with a response like "What AI coding tools/models have you been using?" to a complaint that AI tools just don't seem to be effective, what difference does a reply to that even make?

"Damn, these relational databases really suck, I don't know why anyone would use them! Some of the data entered by my users had emojis in it and that totally broke it! Furthermore, I have some bits of data with about 100-200 columns and the database doesn't work well at all, that's horrible!"

In some cases knowing more details could help: in the database example, a person stuck on MySQL 5.5 could have had a pretty bad experience, in which case telling them to use something more recent, or PostgreSQL, would have been good advice.
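(Aside: the emoji complaint above is usually MySQL's legacy `utf8` charset, which stores at most 3 bytes per character, while emoji need 4; `utf8mb4` is the fix. A quick way to see the 4-byte problem, sketched in Python:)

```python
# Emoji live outside Unicode's Basic Multilingual Plane, so they take
# 4 bytes in UTF-8, one more than MySQL's legacy "utf8" charset
# (really utf8mb3) can store per character; utf8mb4 removes that limit.
emoji = "\U0001F600"                 # U+1F600, a grinning-face emoji
print(len(emoji.encode("utf-8")))    # 4
print(len("e".encode("utf-8")))      # 1, plain ASCII fits either way
```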

In other cases, they're literally just holding it wrong, for example trying to use a RDBMS for something where a column store would be a bit better.

Replace the DB example with AI and the same principles are at play. Hearing people blame all of the tools, when some are clearly better or worse than others, or make broad statements that can't really be proven or disproven with the information given, is just as annoying as people always asking for more details. I honestly believe that all of these AI discussions should be had with as much data present as possible, both the bad and good experiences.

> If your experience makes you believe that certain tools are particularly good--or particularly bad--for the tasks at hand, you can just volunteer those specifics.

My personal experience:

  * most self-hosted models kind of suck, use cloud ones unless you can get really beefy hardware (e.g. waste a lot of money on them)
  * most free models also aren't very good, nor have that much context space
  * some paid models also suck, the likes of Mistral (like what they're doing, just not very good at it), or most mini/flash models
  * around Gemini 2.5 Pro and Claude Sonnet 4 they start getting somewhat decent, GPT 5 feels a bit slow and like it "thinks" too much
  * regardless of what you do, you still have to babysit them a lot of the time, they might take some of the cognitive load off, but won't make you 10x faster usually, the gains might definitely be there from reduced development friction (esp. when starting new work items)
  * regardless of what you do, they will still screw up quite a bit, much like a lot of human devs do out there - having a loop of tests will be pretty much mandatory, e.g. scripts that run the test suite and also the compilation
  * agentic tools like RooCode feel like they make them less useless, as do good descriptions of what you want to do - references to existing files and patterns etc., normally throwing some developer documentation and ADRs at them should be enough but most places straight up don't have any of that, so feeding in a bunch of code is a must
  * expect usage of around 100-200 USD per month for API calls if the rate limits of regular subscriptions are too limiting
Are they worth it? Depends. The more boilerplate and boring bullshit code you have to write, the better they'll do. Go off the beaten path (e.g. not your typical CRUD webapp) and they'll make a mess more often. That said, I still find them useful for the reduced boilerplate, reduced cognitive load, as well as them being able to ingest and process information more quickly than I can - since they have more working memory and the ability to spot patterns when working on a change that impacts 20-30 files. That said, the SOTA models are... kinda okay in general.
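(To make the "loop of tests" bullet above concrete, here is a minimal sketch; the build and test commands are placeholders, substitute your project's own:)

```python
import subprocess
import sys

def verify(commands):
    """Run each command in order; stop at the first failure and return
    (ok, output) so the failure text can be fed back to the model."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, result.stdout + result.stderr
    return True, ""

# Placeholder commands; a real project might use ["make"] and ["pytest", "-q"].
ok, output = verify([[sys.executable, "-c", "print('build ok')"]])
print(ok)  # True
```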


People are also tired of rolling their eyes


Start a worker-owned tech co-op? Not much point though, since people are going to pay AI to write their code instead, and so the market for consultants will dry up. Probably lots of market space for fixing up broken AI code though :)


Don't you think consultants get hired to fix up code? If so, why would their market dry up? If anything, I would expect it to explode


That was what I said in my last sentence. The market that will dry up is writing code in the first place.


[flagged]


Angry entitled people who are raving about off topic stuff should quiet down! -- Sincerely, other angry entitled people of opposite alignment who prefer to control topics under discussion.

I wondered whether you had a track record of promoting "creative solutions to problems through code" and saw your last submission was an attempt to drum up outrage about google trials, suggesting someone "should be investigated for conflicts of interest, and perhaps disbarred". Yes, purely items of technical interest for hackers, very un-politicized.


Look this isn't a forum for general advocacy of Marxist political thought, it just isn't. That's off topic.

Whereas it IS a forum for discussing the biggest tech court case of the century.

The site was not established to give equal time to all political ideologies in all threads, which is what you seem to be implying.

This is all in the Hacker News guidelines. Let me paste the relevant part for you since you don't seem to know about it:

Hacker News Guidelines

What to Submit

On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.


About those rules. Any high-profile court case is by definition crime/politics, and all over TV news. Have you recently mentioned the name of any famous person(s) in your comments, offering opinions and critiques perhaps? But that's another way of saying "celebrities". So I'd say you're unambiguously failing on 4 of these criteria. In terms of forming teams and bitching about the refs decisions, not much difference between court-cases and sports either, neither are big on inspiring curiosity. But hey.. rules are for other people right boss?

And stop telling people to "look". You look. Because listen, I know that phrases like this one are well-loved by a certain type of person. Shows who's the adult in the room, and also frightens subordinates into silence, right? But understand me now when I say that it's much too transparent when used too often. Realize that there are other adults in the room, and when you toss out too many imperatives too fast then it's easy to see how much you want to control people as well as the topics under discussion.


You are just throwing ad hominems around, distorting facts in order to attack me, and contributing nothing that other people would want to read at this point. The fact of the matter is that the Google case is relevant to this forum and random political baiting is not. I hope it is actioned and your slop is dealt with as it detracts from the quality of this community.


Hackers find communism interesting :)


> The original premise of the site was for startup founders and hackers to discuss startup founder and hacker things.

Good that people finally open their eyes.

> But, also I am not one of them, I am a capitalist, I own a small business

And that's false consciousness at its finest. You are closer to these workers than to those who really own capital.


Not really. I grew up below the poverty line. Everyone I knew was working class. Some of these people are still very dear to me.

However, I did something which virtually none of them did. I wanted to get out. So I identified a skillset that the world desperately wanted, and I spent thousands of hours learning how to build and sell with that skillset. Frequently I was criticized or ridiculed or simply ignored, but it worked and I made money. Then I used that money to amass assets.

This doesn't mean I think Elon Musk is my friend or a good guy or something or even that I think the system is just, I don't. But I correctly identified the ladder out of the working class trap. I have Marxist friends who didn't. They're still poor. They still won't listen. Their lives still suck.

The biggest thing I don't like about these Marxist politics warriors is that they actually seem resigned to a future where huge corporations control our destiny, as long as an even huger government extracts some value for the little people. They seem to think that will work but I saw in my own life that concentration of wealth and power always fails the little guy. My philosophy is that it's better to bust all the big guys down (don't mistake this for an endorsement of unregulated capitalism, it isn't) and give everybody the ability to amass their own wealth by creating businesses. I think this idea is way more hackerish than Marxism, because it's all about people building and creating without needing someone else's permission.

If you are a worker, you should want to become an owner. You should not want to appeal to a higher authority for a distribution, because the hand that feeds you will always control you. You will not be free. You should strive to own and control your own slice of the pie because that is the only thing which will make you free. The more workers we can convert into owners, the better, and I'm not talking some fantastical idea of collective ownership here (at least in today's system, it basically doesn't exist).


> The biggest thing I don't like about these Marxist politics warriors is that they actually seem resigned to a future where huge corporations control our destiny, as long as an even huger government extracts some value for the little people.

Legit question: what kind of Marxist wants large corporations controlling everything?


Ironically, this is actually pretty close to what Marx writes in Capital. Small owner-operated businesses for artisans are a model he talks about, as is owning your own tools and means of creating value and so on.

The state owning things on your behalf is not very true to the spirit of it at all, I would say.


> Not really. I grew up below the poverty line. Everyone I knew was working class. Some of these people are still very dear to me.

It wouldn't be false consciousness if you were from a wealthy family.

Yes, working people should climb the ladder, but without government intervention and collars on wealthy necks there won't be any ladder for them. There would be no small business owners like you. This is why I think you are very wrong when you state you are not one of them. In one sense that is true, but in another you have more interests in common with those without any capital than with the truly wealthy. But the truly wealthy definitely want you to think otherwise.

And I have the impression your view of Marxism was forged on Reddit posts, not Marxist literature.


Small business owners for some reason always think they are in the same class as the other business owners.


I own my company so have no fear of losing my job - indeed I'd love to offload all the development I do, so I have no resentment against AI.

But I also really care about the quality of our code, and so far my experiments with AI have been disappointing. The empirical results described in this article ring true to me.

AI definitely has some utility, just as the last "game changer" - blockchain - does. But both technologies have been massively oversold, and there will be many, many tears before bedtime.


> Are you motivated by reason, or resentment?

I think most people are motivated by values. Reason and emotion are merely tools one can use in service of those.

My experience is that people who hew too strongly to the former tend to be more oblivious than most to what's going on in their own psychology.


"...hate AI..."

Bad framing and worse argument. It's emotional.

Every engineer here is evaluating what AI can supposedly do, as pronounced by CEOs and managers (not experts in software dev), versus reality. Follow the money.


> Follow the money.

Yeah, it's frustrating to see someone opine "critics are motivated by resentment rather than facts" as if it were street-smart, savvy psychoanalysis... while completely ignoring how many influential voices boosting the concept have bajillions of dollars of motive to speak as credulously and optimistically as possible.


It is not tribalism. I am self-aware enough to recognize my self-interest, and it is in conflict with the interests of the Sam Altmans of this world and modern slave-masters, sorry, managers.

But I am not claiming that AI is useless. It is useful, but I would rather destroy every data center than enjoy the strengthening of techno-feudalism.


This commits the logical fallacy of assuming that reasoning born out of resentment is always wrong, of course. It is possible for someone to be exactly as you describe and also correct. I imagine this armchair psychoanalysis is way off, though.


Love a bit of source analysis.

I'd widen the frame a bit. People scared of losing their jobs might underestimate the usefulness of AI. Makes sense to me, it's the comforting belief. Worth keeping in mind while reading articles sceptical of AI.

But there's another side to this conversation: the people whose writing is pro AI. What's motivating them? What's worth keeping in mind while reading that writing?


I know it's probably childish and irrational and a symptom of my inferior intellect, but I have to ask, where's the proof that any of this shit works as well as AI stans claim it does?

Please, enlighten me with your gigantic hyper-rational brain.


If you believe AI is overvalued and is a bubble waiting to burst then you are free to short NVDA.

AI stans don’t become AI stans for no reason. They see the many enormous technological leaps and also see where progress is going. The many PhDs currently making millions at labs also have the right idea.

Just look at ChatGPT’s growth alone. No product in history compares, and it’s not an accident.


No product except tulips :^)


“Markets can remain irrational longer than you can remain solvent.” - Keynes


Did you read TFA, which shows that developers are slower with AI and think they're faster?

The two types of responses to AI I see are your very defensive type, and people saying "I don't get it".


Bruce Tognazzini wrote that people always claim the keyboard is faster than the mouse, but when researchers actually measured it, the mouse turned out to be faster. His explanation was that mousing is a low-cognition activity compared to keying, so subjective perceptions are skewed.


This is highly misleading: https://danluu.com/keyboard-v-mouse/


Tognazzini wrote a magazine column with all the downsides: overly funny, non-academic, etc. I think Tog meant something like selecting commands from a menu vs using a command line across a range of applications. Anyway, studies like that must be somewhere in Proceedings of CHI, I guess. (Just checked bibliography in "Tog on interface", but nothing seemed to match. Found a comparison of different types of menus, but that's different. But also relevant: I guess most people would say using pop-up menu right at the mouse cursor will be faster than a fixed one at the top of the screen, yet the experiment shows the opposite.)

Mousing implies things are visible and you merely point to them. Keyboard implies things are non-visible and you recall commands from memory. These two must have a fundamental difference. Many animals use tools: inanimate objects lying around that can be employed for some gain. Yet no animal makes a tool. Making a tool is different from using it because to make a tool one must foresee the need for it. And this implies a mental model of the world and the future, i.e. a very big change compared to simply using a suitable object on the spot. (The simplest "making" could be just carrying an object when there is no immediate need for it, e.g. over a sufficiently long distance. That looks very simple, and I myself do not know if any animals exhibit such behavior; it seems to be on the fence. It would be telling if they don't.)

I think the difference between mousing and keying is about as big as of using a tool and making a tool. Of course, if we use the same app all day long, then its keys become motor movements, but this skill remains confined to the app.


The article is one person recording their own use of AI, finding no statistical significance, but claiming that since the measured ratio of AI-assisted to unassisted speed on various coding tasks resembled the METR study's, AI has no value. People have already talked about issues with the METR study, but importantly, both that study and this blog post query a small number of people using AI tools for the first time, working in a code base they already have experience with and a deep understanding of.

Their next claim is that, because there hasn't been exponential growth in App Store releases, domain-name registrations, or Steam games, AI, beyond producing shoddy code, has led to no increase in the amount of software at all, or at least none that could be called remarkable or even notable in proportion to the claims made by those at AI companies.

I think this ignores the obvious signs of growth in companies that provide software engineering and adjacent services via AI. These companies' revenues aren't emerging from nothing. People aren't paying them billions unless there is value in the product.

These trends include:

1. The rapid revenue growth of AI model companies (OpenAI, Anthropic, etc.)

2. The massive revenue growth of companies built on AI (Cursor, Replit, Lovable, etc.)

3. The massive valuations of these companies

Anecdotally, with AI I can make shovelware apps very easily, spin them up effortlessly, and fix issues I don't have the expertise or time to fix myself. I don't know why the author of TFA claims he can't make a bunch of one-off apps with the capabilities available today, when it's clear that many, many people can, have done so, have documented doing so, have made money selling those apps, etc.


> "These companies' revenues aren't emerging from nothing. People aren't paying them billions unless there is value in the product."

Oh, of course not. Just like people weren't paying vast sums of money for beanie babies and dotcoms in the late 1990s and mortgage CDOs in the late 2000s [EDIT] unless there was value in the product.


Those are fundamentally different. If people on this site really can't tell the difference then it makes sense why people on HN assume AI is a bubble.

People paid a lot for beanie babies and various speculative securities on the assumption that they could be sold for more in the future. They were assets people aimed to resell at a profit. They had no value by themselves.

The source of revenue for AI companies has inherent value but is not a resell-able asset. You can't resell API calls you buy from an AI company at some indefinite later date. There is no "market" for reselling anything you purchase from a company that offers use of a web app and API calls.


The central issue here is whether the money pouring into AI companies is producing anything other than more AI companies.

I think the article's premise is basically correct: if we had a 10x explosion of productivity, where is the evidence? I would think some is potentially hidden in corporate/internal apps, but despite everyone at my current employer using these tools, we don't seem to be going any faster.

I will admit that my initial thoughts on Copilot were that "yes this is faster" but that was back when I was only using it for rote / boilerplate work. I've not had a lot of success trying to get it to do higher level work and that's also the experience of my co-workers.

I can certainly see why a particular subset of programmers find the tools particularly compelling, if their job was producing boilerplate then AI is perfect.


Yeah AI code is ideal for boilerplate, converting between languages, basically anything where the success criteria are definite. I don’t think there is a 10x productivity upgrade across the board, but in limited domains, yes, AI can produce human level work 10x faster.

The fundamental difference of opinion here is that some people see current AI capabilities as a floor, while others see them as a ceiling. I'd agree with arguments that AI companies are overvalued if current models were as capable as AI will ever be, but clearly that is not the case; very likely they will keep getting better, as they have every few months over the past few years.


Which way is the rate of change going?


Dotcoms and CDOs absolutely had perceived intrinsic value


> The article is one person recording their own use of AI

It's not ONE person. I agree that it's not "every single human being" either, and more of a preliminary result, but I don't understand why you discount results you dislike. I thought you were completely rational?

https://www.theregister.com/2025/07/11/ai_code_tools_slow_do...


> The rapid growth of revenue of AI model companies, OpenAI, Anthropic, etc.

You can't use growth of AI companies as evidence to refute the article. The premise is that it's a bubble. The growth IS the bubble, according to the claim.

> I don't know why the author of TFA claims that he can't make a bunch of one-off apps

I agree... One-off apps seem like a place where AI can do OK. Not that I care about it. I want AI that can build and maintain my enterprise B2B app just as well as I can in a fraction of the time, and that's not what has been delivered.


Bubbles are born out of valuations, not revenue. Web3 was a bubble because the money it made came not from real productivity but from hype cycles, pyramid schemes, etc. AI companies are merely selling API calls; there is no financial scheming. It is very simply that the product is worth what it is being sold for.

> I want AI that can build and maintain my enterprise B2B app just as well as I can in a fraction of the time, and that's not what has been delivered.

AI isn't at that level yet, but it is making fast strides in subsets of it. I can't imagine that systems of models, and the models themselves, won't reach that level in a couple of years, given how bad AI coding tools were just a couple of years ago.


Does the revenue cover the costs?


I'm motivated by Claude Code producing useless garbage every time i ask it to do anything, and Google giving me AI summaries about things that don't exist


> their apparent success

Yeah so the thing is the "success" is only "apparent". Having actually tried to use this garbage to do work, as someone who has been deeply interested in ML for decades, I've found the tools to be approximately useless. The "apparent success" is not due to any utility, it's due entirely to marketing.

I don't fear I'm missing out on anything. I've tried it, it didn't work. So why are my bosses a half dozen rungs up on the corporate ladder losing their entire minds over it? It's insanity. Delusional.


I don’t agree. HN is full of technical people, and technical people see LLMs for what they truly are: pattern matching text machines. We just don’t buy into the AGI hype because we’ve seen nothing to support it.

I’m not concerned for my job, in fact I’d be very happy if real AGI would be achieved. It would probably be the crowning tech achievement of the human race so far. Not only would I not have to work anymore, the majority of the world wouldn’t have to. We’d suddenly be living in a completely different world.

But I don’t believe that’s where we’re headed. I don’t believe LLMs in their current state can get us there. This is exactly like the web3 hype when the blockchain was the new hip tech on the block. We invent something moderately useful, with niche applications and grifters find a way to sell it to non technical people for major profit. It’s a bubble and anyone who spends enough time in the space knows that.


>This is exactly like the web3 hype when the blockchain was the new hip tech on the block. We invent something moderately useful, with niche applications and grifters find a way to sell it to non technical people for major profit.

LLMs are not anything like Web3, not "exactly like". Web3 is in no way whatsoever "something moderately useful", and if you ever thought it was, you were fooled by the same grifters when they were yapping about Web3, who have now switched to yapping about LLMs.

The fact that those exact same grifters who fooled you about Web3 have moved onto AI has nothing to do with how useful what they're yapping about actually is. Do you actually think those same people wouldn't be yapping about AI if there was something to it? Yappers gonna yap.

But Web3 is 100% useless bullshit, and AI isn't: they're not "exactly alike".

Please don't make false equivalences between them like claiming they're "exactly like" each other, or parrot the grifters by calling Web3 "moderately useful".


Calling LLMs "pattern matching text machines" is a catchy, thought-terminating cliche, which amounts to calling a human brain a "blob of fats, salts, and chemicals". It technically makes sense, but it misses the forest for the trees, and ignores the fact that this mere pattern matching text machine is doing things people said were impossible a few years ago. The simplicity and seeming mundanity of a technology has no bearing on its potential or emergent properties. A single termite, observed by itself, could never reveal what it could build when assembled with its brethren.

I agree that there are lots of limitations to current LLMs, but it seems somewhat naive to ignore the rapid pace of improvement over the last 5 years and the emergent properties of AI at scale, especially in doing things claimed to be impossible only years prior (remember when people said LLMs could never do math, or that image models could never get hands or text right?).

Nobody understands the limitations of current LLMs with greater clarity or specificity than the people working in labs right now to make them better. The AGI prognostications aren't suppositions pulled out of the realm of wishful thinking; they exist because of fundamental revelations that have occurred as AI has scaled up over the past decade.

I know I claimed that HN's hatred of AI was an emotional one, but there is an element of reasoning too that leads them down the wrong path. By seeing more flaws than the average person in these AI systems, and seeing the tactics companies use to make their AI offerings seem more impressive (currently) than they are, you extrapolate that sense of "figuring things out" into a confident model of how AI is and must really be. In doing so, you pattern-match AI hype to web3 hype and assume that, since the hype is similar in certain ways, it must also be a bubble/scam just waiting to pop, with all the lies revealed. This is the same pattern-matching trap people accuse AI of falling into: an LLM will claim to have solved a problem correctly while the flaws are there for anyone to see.


No, it's really not; it's exactly what they are. Multi-dimensional pattern matching machines, built on massive databases assembled from resources like Stack Overflow and Chegg (every cheater's go-to for assignment answers, massive copyright theft, etc.). If that wasn't the case, there wouldn't be jobs right now writing answers to feed into the databases.

And that's actually quite useful, given that most of this material is paywalled or blocked from search engines. It's less useful when you look at code examples that mix different versions of Python and have comments referring to figures on the previous page. I'm afraid it becomes very obvious, when you look under the hood at the training sets themselves, just how this is all being achieved.


Look into every human’s brain and you’d see the same thing. How many humans can come up with novel, useful patents? How many novel useful patents themselves are just variations of existing tech?

All intelligence is pattern matching, just at different scales. AI is doing the same thing human brains do.


> Look into every human’s brain and you’d see the same thing.

Hard not to respond to that sarcastically. If you take the time to learn anything about neuroscience you'll realise what a profoundly ignorant statement it is.


If that is the case, where are the LLM-controlled robots, where the LLM is simply given access to a bunch of sensors and servos and learns to control them on its own? And why are jailbreaks a thing?


Seeing as your LLMs need the novel output of human brains to even exist or expand capabilities, quite a lot.

But even if it's not a lot, it's more than the number of LLMs that can invent new meaning, which is a grand total of 0.


If tomorrow, all human beings ceased to exist, barring any in-progress operations, LLMs would go silent, and the machinery they run on would eventually stop functioning.

If tomorrow, all LLMs ceased to exist, humans would carry on just fine, and likely build LLMs all over again, next time even better.


> It's an emotional argument, born of tribalism. I’d find it easier to believe many claims on this site that AI is all a big scam and such if it weren’t so obvious that this underlies your very motivated reasoning.

Damn, when did it become wrong for me to advocate in my best interests while my boss is trying to do the same by shoving broken and useless AI tools up my ass?



