Hacker News new | past | comments | ask | show | jobs | submit | paulsutter's comments login

The best I can offer skeptics is this: the more you work with the tools, the more productive you become. Because yes, the tools are imperfect.

If you've had a dog you know that "dog training" classes are actually owner training.

Same with AI tools. I see big gains for people who spend the time to train themselves to work within the limitations. When the next generation of tools comes out, they can adapt quickly.

If this sounds tedious, that's because it is tedious. I spent many long weekends wrestling with tools silently wrecking my entire codebase, etc. And that's what I had to do to get the productivity improvements I have now.


"Expert in (now-)ancient arts draws strange conclusion using questionable logic" is the most generous description I can muster.

Quoting Chomsky:

> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.

> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable... The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.

Response from o3:

LLMs do surface real linguistic structure:

• Hidden syntax: Attention heads in GPT-style models line up with dependency trees and phrase boundaries—even though no parser labels were ever provided. Researchers have used these heads to recover grammars for dozens of languages.

• Typology signals: In multilingual models, languages that share word-order or morphology cluster together in embedding space, letting linguists spot family relationships and outliers automatically.

• Limits shown by contrast tests: When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

• Psycholinguistic fit: The probability spikes LLMs assign to next-words predict human reading-time slow-downs (garden-paths, agreement attraction, etc.) almost as well as classic hand-built models.

These empirical hooks are already informing syntax, acquisition, and typology research—hardly “nothing to say about language.”
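The contrast test in the third bullet can be sketched at toy scale (my own illustration, substituting a Laplace-smoothed bigram model for a real LLM; the corpus and names are made up): a model trained on normal word order assigns sharply higher perplexity to mirror-ordered input.

```python
import math
from collections import Counter

def train_bigram(corpus: str):
    words = corpus.split()
    return Counter(zip(words, words[1:])), Counter(words)

def perplexity(sentence: str, bigrams, unigrams, vocab_size: int) -> float:
    # Laplace-smoothed bigram perplexity: unseen transitions get a small
    # nonzero probability instead of zeroing out the whole product.
    words = sentence.split()
    log_prob = sum(
        math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size))
        for prev, cur in zip(words, words[1:])
    )
    return math.exp(-log_prob / (len(words) - 1))

corpus = "the dog runs . the cat runs . the dog sleeps ."
bigrams, unigrams = train_bigram(corpus)
V = len(unigrams)

natural = "the dog runs"
mirrored = " ".join(reversed(natural.split()))  # "impossible" word order

ppl_natural = perplexity(natural, bigrams, unigrams, V)    # ≈ 3.46
ppl_mirrored = perplexity(mirrored, bigrams, unigrams, V)  # ≈ 8.0
```

The real experiments do this with transformer LMs and far larger corpora, but the shape of the measurement is the same: perplexity on the transformed language versus the natural one.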


> LLMs do surface real linguistic structure...

It's completely irrelevant, because the point he's making is that LLMs operate differently from the human language faculty, as evidenced by the fact that they can learn language structures that humans cannot. Put another way: I'm sure you can point out an infinitude of similarities between the human language faculty and LLMs, but it's the critical differences that make LLMs not useful models of human language ability.

> When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

This is confused. You can pre-train an LLM on English or on an impossible language and it does equally well on either. Humans, on the other hand, can't do that; ergo LLMs aren't useful models of human language, because they lack this critical distinctive feature.


Is that true? This paper claims it is not.

https://arxiv.org/abs/2401.06416


Yes, it's true. You can read my response to one of the authors (@canjobear) describing the problem with that paper in the comment linked below. To summarize: to show what they want to show, they would have to take the simple, interesting languages based on linear order that Moro showed humans cannot learn and demonstrate that LLMs also can't learn them, and they don't do that.

The reason the Moro languages are of interest is that they are computationally simple, so it's a puzzle why humans can't learn them (and no surprise that LLMs can). The authors of the paper miss the point and show irrelevant things, e.g. that there exist complicated languages that neither humans nor LLMs can learn.

https://news.ycombinator.com/item?id=42290482


> You can pre-train an LLM on English or an impossible language and they do equally well

It's impressive that LLMs can learn languages that humans cannot. In what frame is this a negative?

Separately, "impossible language" is a pretty clear misnomer. If an LLM can learn it, it's possible.


The latter. Moro showed that you can construct simple language rules, in particular linear rules (e.g. "the third word of every sentence modifies the noun"), that humans have a hard time learning: in MRI scans they use different parts of their brain, and they take longer to process them than control languages. These rules differ from conventional human language structure, which is hierarchical, i.e. words are interpreted according to their position in a parse tree, not their linear order.

That's what "impossible language" means in this context, not something like computationally impossible or random.
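A toy sketch of what such rules look like in code (my own illustration; the mirror transform and positional marker are hypothetical examples, not Moro's actual stimuli). The point is that each rule is a pure function of word position, with no reference to a parse tree:

```python
def mirror_order(sentence: str) -> str:
    # One "impossible" transform: reverse the linear order of the words.
    return " ".join(reversed(sentence.split()))

def apply_linear_rule(sentence: str, marker: str = "NEG") -> str:
    # Toy linear rule: a marker always lands after the third word,
    # purely by position, regardless of constituent structure.
    words = sentence.split()
    return " ".join(words[:3] + [marker] + words[3:])

print(mirror_order("the dog chased the cat"))       # cat the chased dog the
print(apply_linear_rule("the dog chased the cat"))  # the dog chased NEG the cat
```

These transforms are trivially computable, which is exactly the puzzle: computational simplicity doesn't predict human learnability.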


Ok then... what makes that a negative? You're describing a human limitation and a strength of LLMs.


It's not a negative, it's just not what humans do, which is Chomsky's (a person studying what humans do) point.

As I said in another comment this whole dispute would be put to bed if people understood that they don't care about what humans do (and that Chomsky does).


Suggestion for you then, in your first response you would have been clearer to say "The reason Chomsky seems like such a retard here, is because he clings to irrelevant nonsense"

It's completely unremarkable that humans are unable to learn certain languages, and soon it will be unremarkable when humans have no cognitive edge over machines.

Response: Science? "Ancient Linguistics" would more accurately describe Chomsky's field of study and its utility


> Suggestion for you then, in your first response you would have been clearer to say "The reason Chomsky seems like such a retard here, is because he clings to irrelevant nonsense"

If science is irrelevant to you it's you who should have recognized this before spouting off.


[flagged]


What's dangerous about him?


We have specific industrial tasks to train for, and we're taking a closer look at this as an alternative to the hard-to-reach bigcorps that have their eyes too far down the road. We want to start now and push the current tech as far as it can go.

Looking forward to helping however we can


Awesome! Would love to chat, here's my email: ben@kscale.dev


I don’t see any indication that it’s English-first?


Why is this a worry? Sounds wonderful


I'm a bit worried about the social impacts.

When a sector collapses and becomes irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back into training and find a different line of work.

It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.


> They will have to go back to training

Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus have no established training. You just jump in and figure things out along the way like everyone else.

The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.


What if no jobs, or fewer jobs than before, rush in to fill the void this time? You only need so many prompt engineers when each one can replace hundreds of traditional workers.


> What if no jobs, or fewer jobs than before, rush in to fill the void this time?

That means either:

1. The capitalists failed to redeploy capital after the collapse.

2. We entered into some kind of post-capitalism future.

To explore further, which one are you imagining?


The capitalists are failing to redeploy capital today. That's why they have been dumping it into assets for years: they have too much capital and dwindling things they can do with it. AI will skyrocket their capital reserves. There has been only a poor mechanism for equalizing this since the Nixon years.


> They have too much capital and dwindling things they can do with it.

Yes, we've had full employment for a long, long time. But the idea here is that AI will free up labor that is currently occupied doing something else. If you are trying to say it will fail to do that, that may be true, but if so this discussion is moot.


As others in this thread have pointed out, this is basically what happened in the relatively short period of 1995 to 2015 with the rise of the global internet, wireless telecommunications, and software platforms.

Many, many industries and jobs transformed or were relegated to much smaller niches.

Overall it was great.


Man 1995, what a world that was. Seemed like a lot less stress.


Good thing that we have AI tools that are tireless teachers


I spend much more time coding now that I can code 5x faster

Demand for software has high elasticity


Work just to be a part of it. This is the most consequential time in history.

It's the best time ever to build. Don't work on anything that could have been done two years ago.

Learn the current tools - so that you can adapt to the new tools that much faster as they come out.


We just moved to San Francisco with toddlers and it’s great, as long as you have a parking space (car seats preclude most other modes of transportation).


Is it possible to get a 3 bedroom place that isn’t terrible on one FAANG salary?


I bet these models could create a python program that does this


Maybe eventually, but I bet it’s not going to work with less than 30 minutes of effort on your part.

If “it might take an hour of my time” to get the correct answer, then the bar for trying a shortcut that might not work is low.


Google Pagerank

Could use follows, retweets, etc instead of page links
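A minimal power-iteration sketch of that idea, with follow edges standing in for hyperlinks (the graph, names, and function are my own hypothetical example, not Google's implementation):

```python
def pagerank(edges, damping=0.85, iters=100):
    """Power-iteration PageRank where a 'follow' plays the role of a hyperlink.

    edges: dict mapping each user to the set of users they follow.
    """
    nodes = set(edges) | {v for targets in edges.values() for v in targets}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            targets = edges.get(u, set())
            if targets:
                # Each user passes a damped share of rank to everyone they follow.
                share = damping * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:
                # Dangling user (follows nobody): spread their rank uniformly.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Toy follow graph: alice and bob both follow carol; carol follows alice.
follows = {"alice": {"carol"}, "bob": {"carol"}, "carol": {"alice"}}
ranks = pagerank(follows)
```

Here carol, followed by both other users, ends up with the highest rank; retweets could be added as extra weighted edges in the same structure.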

