I cannot imagine a scenario where it doesn’t drop off.

The massive recent improvements in GPT’s performance are a result of giving the model enormous numbers of parameters and a wealth of training data. That’s it.

Surely this paradigm cannot scale up indefinitely. Moore’s law is moribund.

Are we going to build a supercomputer that encircles the globe just for the purpose of trying to make the biggest LLM we can?

Also, even if we could scale indefinitely, there is no reason to suspect that an LLM with hundreds of quadrillions of parameters will somehow magically, spontaneously become an AGI.

It’s tempting to think that the line will go up forever, but that just doesn’t square with reality.

Anyone whose response to this is something like “well, maybe all the brain is doing is just the same thing that LLMs do” is fundamentally underestimating the complexity of the human brain by many orders of magnitude.

It is the most complex system in the known universe and we do not understand it at all.

I am generally optimistic about AI, but to me it is the absolute height of delusional hubris to think that superintelligence is likely to somehow “fall out” of a language model, just as soon as we make one large enough and give it enough training data.

To come to such a conclusion reveals a failure to grasp the magnitude of the problem.




> Surely this paradigm cannot scale up indefinitely.

It doesn't have to. It just has to scale up long enough to start causing real societal problems, if it isn't already there.

Also, it's kind of annoying to see you dismiss this criticism as 'alarmist' in thread after thread, when, looking back no more than a year, the state we are in today was said to be at least several years away by the same people who keep harping on the fact that this isn't AGI yet. The point is: it doesn't have to be AGI to do massive damage, and from that perspective it might as well be. I don't particularly care whether I get bitten by the cat or by the dog; I care about being bitten.


I didn’t dismiss anything. I didn’t say anything was alarmist. Please don’t put words in my mouth.

You didn’t say anything about societal problems. You wondered if the growth will ever stop, and I tried my best in good faith to give the reasons why I believe that it will.

If the question is “when will the models be powerful enough to cause societal problems”, then that is a completely different question and I think the answer to it is clearly “they already are”. (But not because they are superintelligent or anything close to AGI.)


> I didn’t dismiss anything. I didn’t say anything was alarmist. Please don’t put words in my mouth.

https://news.ycombinator.com/item?id=35276186

"We’re still decades from AGI in my opinion, and the Chicken Little types ought to pace themselves, is all I’m saying. "


Oh—

I see now you were making reference to a comment I made in reply to someone else.

Yes, that was something that I said and I stand by it.

I do not see any reason to be concerned about AI as an existential threat at the present moment.

I have explained in my previous comment why I feel this is the case; if you feel that this view is recklessly dismissive and wish to change my mind, then I invite you to do the same.

Edit: I’m sorry for any excessive crispiness or combativeness in my tone; I can see now from your post history that you are likely arguing in good faith.

I have grown weary of arguing against concern trolls lately on this topic, so I may have misjudged your initial comment based on its brevity. Sorry about that.


First off: labeling those you don't agree with as concern trolls is pretty rude, but since HN etiquette asks us to take the most charitable reading of a comment, I take it you meant that about somewhere other than HN. The number of concern trolls here is vanishingly low; most people on HN who are concerned about something are concerned for good reasons, even if those reasons are not readily apparent to you without further engagement.

As for my own concerns: we have a bit of a problem with this AI thing, and whether or not it is AGI is immaterial: I judge a technology by the effect it is having. We have not yet made a dent in dealing with the weaponization of social media, and we are only beginning to come to terms with the mobile revolution and the internet we now take for granted. Given that it took us a good 30 years to get to this point, and that the current crop of AI tools has been on the scene for a little over two years, it looks as though there is still a very long way to go before we have internalized the changes this technology brings.

And it isn't exactly standing still either; it's a fast-moving target that redefines what it is and isn't, and what it can and cannot do, in the space of months. We are now well into what I would lightly characterize as an AI arms race, and during arms races the rate of change can go through the roof compared to what it was before. You only have to look at nature to see many such examples.

And already ChatGPT and similar tools by other vendors are changing the landscape in visible ways. It doesn't have to be an existential threat to be capable of profound and possibly negative social impact. And whether it is AGI or not is also not all that important.

Those cautioning some pacing of the release of these tools are not doing so because they are concern trolls, but because they look a little further than just 'hey, cool new tech' to the effect this can have on our societies, some of which are already precariously balanced and have a whole pile of other stuff to deal with. Not least the fall-out of COVID (which we definitely have not yet dealt with), an energy crisis and a war. And that's before we get into climate change.

Releasing a tool that could easily be weaponized by either side (or both) in such an environment could well have repercussions that we might be able to foresee, and foreseeing them would help us decide whether or not release is going to be beneficial. Like all tools this one is dual use: it may help or it may well hinder. Initially social media was a nice way to re-acquaint with family and friends, some of whom may have been lost or out of touch for ages. These days it is a weapon for mass manipulation on a scale that we have not seen before.

Something similar - or far worse - could easily happen with these new AI tools, and personally I would like to have the previous crisis settled before taking on the next. There is a limit to how much of this stuff we can deal with at the same time, and - again, just speaking for myself here, though others may feel similarly - I am rapidly approaching the limit of how much of all this I can still comprehend, internalize and deal with while staying on top of it all. It is, in a single word, overwhelming, and those who want to pretend it is all inconsequential are - in my opinion, once more - not thinking about it hard enough.


Firstly, LLMs are an embarrassingly parallel problem, so yes, you can actually get quite far simply by throwing more hardware at it. The catch is that the cost is not linear - e.g. you need 4x more VRAM for 2x the input / context window size. But if doing that unlocks more useful emergent properties, it may well be a worthwhile trade-off - and it doesn't have to spontaneously become an AGI for that to happen. So I think we'll be playing this game for quite a long time.
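To make the 4x-for-2x point concrete, here is a rough back-of-the-envelope sketch. The specific numbers (one batch, 32 heads, fp16 scores, a naive attention implementation that materializes the full n x n score matrix) are my own illustrative assumptions, not anything the commenter specified:

    # Back-of-the-envelope estimate of attention-score memory vs. context length.
    # Assumes a naive (non-flash) implementation that materializes the full
    # n x n score matrix for every head.

    BYTES_PER_ELEMENT = 2   # fp16
    NUM_HEADS = 32
    BATCH = 1

    def attention_score_bytes(context_len: int) -> int:
        """Memory for the n x n attention-score matrices across all heads."""
        return BATCH * NUM_HEADS * context_len * context_len * BYTES_PER_ELEMENT

    for n in (2_048, 4_096, 8_192):
        gib = attention_score_bytes(n) / 2**30
        print(f"context {n:>6}: ~{gib:5.2f} GiB of attention scores")
    # context  2_048: ~0.25 GiB, 4_096: ~1.00 GiB, 8_192: ~4.00 GiB
    # i.e. doubling the context length quadruples this term: O(n^2).

Memory-efficient kernels avoid materializing the full score matrix, but the compute for plain self-attention still grows quadratically with context length, which is the trade-off being described.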


> massive recent improvements in GPT’s performance are a result of giving the model enormous numbers of parameters and a wealth of training data. That’s it.

extremely dismissive of the labor that went into converting AGI from "impossible" to "expensive"



