Ask HN: Are you tired of reading ChatGPT headlines?
364 points by 65 on Feb 4, 2023 | 284 comments
I am. Every day, there are countless new articles about ChatGPT posted on here. Maybe I'm the only one who thinks it's overrated.

Most of the prompt answers are smart sounding bullshit. Maybe that's why the headlines never stop - the people who like to make smart sounding bullshit are the ones who love ChatGPT.




No - to many people ChatGPT is science fiction come true. And for older folks in our (technical) world, it is, too.

I understand it's 'just' a language model, but it doesn't matter - that's not how it's perceived, and it is actually rather impressive anyway.

What you're seeing is just a manifestation of a rather major (sociological) event, and while I understand that the amount of hype is a bit over the top, to a large extent it makes sense.


> I understand it's 'just' a language model, but it doesn't matter - that's not how it's perceived

For me, this is what gets exhausting. It is an impressive language model, and it is going to change a lot of things, but it isn't the Apocalypse and it isn't the Messiah. On HN I expect a certain level of technical understanding that is weirdly absent on this topic.

An example: every time the question of accuracy comes up people seriously suggest that you just need to prompt it for its sources. Or we just need to improve the training data. There's no recognition that there are tasks that a language model is just fundamentally unsuitable for.


When all you have is a hammer, all problems look like nails?

I'm genuinely impressed by ChatGPT, and have been thinking about the many times in the past when having such a tool at hand would have been massively helpful. Natural Language Processing is a damn hard problem, and ChatGPT seems to be a huge advancement in that regard.

But I actually laugh at all the people who think that this will replace humans in any meaningful capacity. If your job is only giving known answers to known problems, then you have something to fear. Otherwise this will only be a powerful productivity tool.

A Language Model will replace software developers much like Excel replaced accountants.


I worked for a while in a fundamental neuroscience research environment. Basically there was one supplier of VERY expensive equipment in the field. But research groups that shelled out lots of money were then restricted to only the research subjects the machine could handle. This actually changed the focus from fundamental exploratory research to directed research. I think ChatGPT is the same. It will limit what most people believe AI is and what it is used for. (On a basic level, isn't it just data mining on a grand scale?) The fundamental problem of "truth" is not considered important in the hype. If it can't deliver anything that you know absolutely to be "truth" without having to verify it, then it is just a shiny new toy. I think the headlines and hype are generated to gloss over the shortcomings of this field in general. (Is it a sign of the times? What's that other media-generated hype that makes money... blockchain?)


This is actually an interesting reply, and something I did not consider.

To me, the most impressive part of ChatGPT was not that it could give mostly correct answers to known problems. In a sense, internet search could do it already (just in a much more cumbersome way), with similar degrees of correctness.

The most impressive part for me was actually how seamlessly it parses and produces fluent natural language. Text generated by it reads like something a human would type.

So far I haven't tried to fool it by purposefully asking something ambiguous (a characteristic of natural languages), or asking about something that has an ambiguous answer to see how it handles it, but I'm impressed.

But I never considered that people might restrict AI research to language models due to the rampant success of this avenue of research. I hope this is not the outcome, but I wouldn't be surprised (i.e. the success of ChatGPT works as a black hole for investment in the area, with everyone racing to cash in on it).


"According to the Planet Money podcast, in the US alone, there are 400,000 fewer accounting clerks today than in 1980, the first full year that VisiCalc went on sale.

But Planet Money also found that there were 600,000 more jobs for regular accountants. After all, crunching numbers had become cheaper, more versatile, and more powerful, so demand went up.

The point is not really whether 600,000 is more than 400,000: sometimes automation creates jobs and sometimes it destroys them."

https://www.bbc.com/news/business-47802280


And I still need to hire an accountant to do my taxes every year.

Just throwing numbers on a spreadsheet didn't do the trick.


This is a problem manufactured by the companies that create tax software and lobby the American government to increase the complexity of the tax code. I have never needed to hire an accountant to do my personal taxes in any country other than the USA.


> The point is not really whether 600,000 is more than 400,000

Sounds to me like the current jobs are more skilled than the former ‘accounting clerks’, which just sound like data entry people.

Back in the day my mom used to do the books for the grocery store she worked for and all she really did was tally up all the data from the multiple cash registers and send it off to the corporate accountants. Not a whole lot of skill needed aside from attention to detail to ensure the totals were correct.


> There's no recognition that there are tasks that a language model is just fundamentally unsuitable for.

I’m really not sure where you get that from. I think the best we can say is that they are currently unsuitable. Yes, today the applications are limited, but you can’t blame people for projecting this incredible progress into the future a bit and seeing glimpses of how much this could change work and productivity. Insider reports are surfacing that both Google and Microsoft will be adding LLM outputs to their search engine result pages. Seems like a big deal to me considering transformers are only about 4 years old.


> Insider reports are surfacing that both Google and Microsoft will be adding LLM outputs to their search engine result pages.

Of course they are—they'd be incredibly stupid not to ride the hype wave while it lasts. A business choosing to take advantage of current trends is not evidence that those trends are founded in anything substantial.


> *isn't the Apocalypse and it isn't the Messiah*

It is a wakeup call though. Considering how quickly this was developed, the likelihood of either of these manifesting within our lifetime became much more plausible.


"An example: every time the question of accuracy comes up people seriously suggest that you just need to prompt it for its sources."

This is not possible. Of course, someone providing GPT as a "service" could provide bogus "sources" for particular output and people might take them at face value.


I agree, and one more thing is that it _is_ useful.

If somebody thinks it's just a bullshit generator and 100 million people using it after 2 months are wrong, the problem is with the person who didn't put in the effort to learn to use it effectively.


> If somebody thinks it's just a bullshit generator and 100 million people using it after 2 months are wrong, the problem is with the person who didn't put in the effort to learn to use it effectively.

Alternative explanation: Most people are happy to generate bullshit, and love that they now have a way to do it with zero effort. Bullshit copy, bullshit art, bullshit code — the sky's the limit!


Why is the code bullshit? It routinely helps me with repetitive stuff.


If your code is repetitive, you should deal with that by abstracting it, not by using a probabilistic black box to generate more.
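
To make that concrete, here is a minimal TypeScript sketch (the helper, types, and endpoint are all made up for illustration):

    // Hypothetical example: fold repeated fetch/parse/error boilerplate
    // into a single typed helper instead of copy-pasting it per endpoint.
    async function getJson<T>(url: string): Promise<T> {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);
      return (await res.json()) as T;
    }

    interface User { id: number; name: string }

    async function main() {
      // Each call site shrinks to one line instead of a repeated block.
      const user = await getJson<User>("/api/users/1"); // made-up endpoint
      console.log(user.name);
    }
    main().catch(console.error);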


That's sometimes easy, sometimes not so easy due to the boilerplate requirements of the language/framework/library one uses. Other times it introduces a trade-off between repetition and code complexity/readability, so it's not the silver bullet your comment suggests it to be.


My humble prediction is that this will happen regardless of AI quality improvements. People imagine socialist scifi scenarios with crystal balls on green lawns telling them how to build spaceships. But reality will organically grow from where we are now:

Attention and profiling is money and power.

AI will steal attention by spamming identities, voting for/against products and opinions, discriminating in ways hard to reveal, at a scale never seen before. It will read all your HN comments and clip a tag onto your ear. The AI war will not be a T-1000 walking on skulls; it will happen in our feeds, and platforms/governments will have no idea who's artificial or Russian this time.

Honestly it’s pretty concerning when I think about what can be done to the internet and societies by bad actors automating and weaponizing existing ChatGPT and pic/video tech alone.


Which, if my experience with users is anything to go by, will be 99% of the users. Even worse is people asking it about subjects they have no domain expertise in and having to wonder (or worse, just assume) it's correct. At the very least, it should give some easily understood indication of 'confidence'.

edit: I get very strong flashbacks to when Wikipedia was new and people had to learn the hard way it wasn't always correct/up-to-date/etc.


The thing is, it's always confident. It gave you the highest confidence answer, out of many high confidence answers. Low confidence would imply the model has not seen the particular words before. A Markov chain does not suffer from imposter syndrome.
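
To make the "always confident" point concrete, a minimal TypeScript sketch with made-up numbers:

    // Minimal sketch with hypothetical logits: the output layer always
    // normalizes raw scores into a probability distribution via softmax,
    // so some token always "wins"; no matter how poor the input was.
    function softmax(logits: number[]): number[] {
      const max = Math.max(...logits); // subtract the max for numerical stability
      const exps = logits.map((x) => Math.exp(x - max));
      const sum = exps.reduce((a, b) => a + b, 0);
      return exps.map((e) => e / sum);
    }

    const logits = [2.1, 0.3, -1.5, 0.9]; // made-up scores for four candidate tokens
    const probs = softmax(logits);
    const best = probs.indexOf(Math.max(...probs));
    console.log(probs, `model "confidently" picks token ${best}`);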


Heh - I'd say the thing is to get the general public to realize that, and not assume "computers are always right"? (E.g., I don't think the problem is with ChatGPT being 'wrong' - I think the problems start when people assume it's right...)


Do you consider yourself in the 99%, or are you as confident as ChatGPT that you're among the 1% smartest people in the world?


Depends on the domain I'm asking it about? I have things I'm good at, and things I suck at.


> What you're seeing is just a manifestation of a rather major (sociological) event, and while I understand that the amount of hype is a bit over the top, to a large extent it makes sense.

This hyperbole is exactly why people like OP are getting sick of the headlines. It may be a major event, sure, but only hindsight will be able to say that for certain. It may be another novel gimmick, and this pronouncement will seem ridiculous in a year or two if that's the case. It's just too early to tell.


I agree. I showed ChatGPT to my 70-year-old mom, who was a Biology professor at a university all her life.

We asked some questions about her field, and even asked it to draw a cell in SVG. My mom confirmed that all of its output was correct.

Mom was completely amazed at what was happening. She couldn't believe it was answering like that, and so fast. For people 60+, ChatGPT is Star Trek science fiction.

These are the folks who had to write dead-tree letters to ask for a paper from the other side of the world, and shuffle through stacks of physical bibliographic cards. They also spent hours looking through tomes of Encyclopedia Britannica in search of info.

Us younger folks have just been slowly boiled like the frog with all this new technology. But it is indeed amazing.


And it's the first iteration of an impactful model. Wait till the second and third generations come out. I've promised myself I have to keep on top of what it's doing.


I am definitely tired of the topic, but I’ll be perfectly honest as to the reason why: I am genuinely afraid of its impact on my career. My career has been sort of a combination of tech and education, and this (admittedly impressive) creation threatens both. And more and more of these things will be coming online over the coming months and years.

So, every time I see it mentioned I feel depressed and sick on the inside.

I will say, however, that so far it has actually enhanced my productivity in my current projects, and that’s fine. I just don’t think that its impact will remain so limited for long.

Of all the tech that’s been invented, this is the one I fear the most in terms of its negative impact on jobs. Hope I’m wrong!


Don’t worry, the models do not show any signs of creativity or understanding. Your intellect still has value, and imo this technology is not the invention that will eventually devalue it. It’s kinda like self driving cars. 5-10 years ago we assumed we were just a few years away from never having to drive ourselves again. Turns out that’s way further out than we had imagined.


As one software engineer, I can do things that 10 years ago needed a whole team.

If ChatGPT can do the things you would have given to a junior or an assistant, and it's now faster to just do them yourself, it will have an impact, and it might raise the bar.

I would neither under- nor overestimate what it can replace in a few years.

And surprisingly, a lot of time and energy goes into teaching people old things, just because they too need to learn them.

It just might be easier to teach it once to ChatGPT, or whatever comes next, than to people.


What we have been developing are a lot of solid building blocks. Protocols, infra, frameworks. Not just some kind of heuristic engine.


I haven't.

Most of us haven't.

We are reusing, combining, etc.


ChatGPT can’t do the things I coach my junior teammates to do, though.


Depends on what (I didn't say it will replace everything tomorrow) and let's see in a few years.

And at least in my team it definitely feels like I teach every new hire/intern/young person the same old things.

I could also just start teaching an AI those things.

And this multiplies. If every one of us only has to teach the same AI everything once, then this is much more effective.


We haven't really tried coaching it yet.

I don't think it has enough context to be a real coder yet, but it has some ability.

Having the token limit unlocked in the future might change a lot.


Self driving cars work right now. The problem is the cost of failure. If your self driving car works 99% of the time but it kills someone every hundredth trip on average, that's unusable. If your LLM works 99 out of 100 times, that's extremely useful.


Saying “work” is kind of a stretch. Until I see a self-driving car navigating the streets of Bengaluru at peak traffic, I would say it’s still far, far away.

All self-driving companies have this bias. If it works in the US and Europe, it works everywhere. It’s like the same old saying of “it works on my computer. Looks good. Let’s put it in production”. We all know how the story ends ;)


“It’s useless until it works in every single scenario”. Sure that might be fair for self driving cars, but again, doesn’t apply much to LLMs. I don’t care if my chatbot can’t give me accurate answers for medical or physics questions. If it works for the stuff I want, at least most of the time, it’s very useful.


Not Bengaluru but would you take Shanghai for $500? https://youtu.be/PVMCjvsP6O8


Based on the standard of driving in most places I don’t think many drivers can safely navigate that kind of thing either.


> Self driving cars work right now. The problem is the cost of failure.

Self-driving cars only work on some roads specifically adapted for self-driving cars, and even then they require a team of specialists constantly monitoring them (so it's not very self-driving, the driver just moved to another place).

If they require specially designed roads, they don't replace cars, they replace trams. And trams are much more efficient, so they don't really replace even those.

You can't put a self-driving car on a random road and expect it to work.


> Self-driving cars only work on some roads specifically adapted for self-driving cars

I don’t think that is actually a thing. The places self driving cars are being used right now do not have roads that were especially adapted to support self driving cars.


Not a single routing app that I've tried consistently manages to get you to the correct side of the building when you request a route to a location.

If the problem of figuring out where to drive isn't even solved yet, how can you claim that "self driving cars work right now". They don't, not for any useful definition of the term.


That’s not really a problem with the ‘self driving’ but with a lack of data.

Unless they send a human out to every building and mark out which door is the correct one, the algorithm is just guessing. I use a “professional” GPS for my job, and I don’t trust it at all to get me to the correct entrance; I have to study satellite images and type in the coordinates manually, and even then it’ll decide to reroute to the wrong place on occasion because it doesn’t know there’s a gate in the way of its optimal path.

Bit of a hassle, really. If you ever see a big truck stuck on some random road with nowhere to turn around it’s probably the GPS’s fault. One of the first things new drivers need to learn is the GPS will actively try to kill you and can’t be trusted.


I’m not making a direct comparison in terms of the technologies’ capabilities, but rather the way we perceive(d) them as being just a few years away from taking over.


I certainly hope self-driving cars are the accurate reference class for this, for several reasons.

Among them: I don't think I've ever demonstrated more creativity than ChatGPT, only equalled it; and while it sure does make mistakes, when I look back at my old code (or even blog posts) I realise I sure did a lot of that too. I'm pretty surprised it's even as capable as it is, given my understanding of how it works and what its "goals" are (the task of "predicting tokens" doesn't seem like it should be able to do this much).

My fear is that the reference class for this is Go, where someone writing an AI for the game thought it was a decade away from beating humans less than two years before AlphaGo beat Lee Sedol: https://www.wired.com/2014/05/the-world-of-computer-go/


I would say they show at least "signs" of creativity and understanding.


Entrails of birds (used in old civilizations) could show «"signs"» of future events, but those signs can happen to be restricted to the perception of the reader.


If it looks creative but is secretly formulaic, how is that going to avoid causing problems for the employment prospects in "creative" jobs?

It doesn't matter if the thing a submarine does is "swimming", after all.


> matter if the thing a submarine does is "swimming"

We are supposed to want the submarine to swim well.



Yes, of course it is Dijkstra. The image is part of the speech "The threats to computing science" in 1984 - https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E...

In context: Dijkstra noted that IT discipline had strong regional tints, enabled by some "«malleability»" (over which the particularizing forces are applied). This "malleability" follows very fuzzy definitions of purpose, which he exemplified through a von Neumann that resembled medieval scholastic philosophers and an Alan Turing that proposed (to the judgement of Dijkstra) "irrelevant" perspectives in the same direction, such as "whether submarines swim" (for "whether computers think").

Now: the context of this branch of the discussion is instead about those "signs" noted by OP, which I note are overwhelmed by evidence that those signs are doubtful. The point is not in the vague metaphors that Dijkstra found confusing, but in the opposite, flat matter that "whether it swims or it "submarines" [as a verb], it has to do it properly". Which is not in Dijkstra, because he was speaking of something else.


Further out than we imagined? There are several cities with operating robotaxis right now. You can buy a Tesla that can drive itself (with supervision) right now.

We’re certainly behind where we expected to be, but the future is indeed here!


Those robotaxi services are severely limited in scope, unprofitable, and still manage to screw things up so badly that even SF is getting sick of them:

https://www.theverge.com/2023/1/29/23576422/san-francisco-cr...


Do they screw up per-mile worse than a human driver?


From the article:

> Months later, a Cruise AV “ran over a fire hose that was in use at an active fire scene,” and another Cruise vehicle almost did the same at an active firefighting scene earlier this month. Firefighters say they could only stop the vehicle from running over the hose after “they shattered a front window” of the car.

I don't know about per-mile, but that's worse than a human for sure.


That's not worse than humans have done. Especially an elderly or drunk human.


I'm in tech and don't share your worries. If anything, since release it has proven that 1. it does not have the ability to reason logically or creatively; 2. it has a strong ability to render language in human-readable form.

If anything, I would say regular office workers are the most threatened, particularly if the job revolves around digesting and passing along information.


Sure. But will that be the case 5-10 years from now?

As a programmer, I gotta say if you're not at all concerned, then either you haven't been paying attention or you're in denial. Sooner or later it's coming.


Or you have been paying attention, understand the technology better than those who buy into the hype, and aren't worried because you know that incremental improvements to this tech cannot be a serious threat to your job security because the paradigm isn't capable of replacing you.

When my job is at risk it will be because we have AGI, not a better language model, and at that point everyone is at risk. Worrying about job security in the face of AGI is like worrying about what you'll do after your city gets nuked: it's unlikely to happen and there's nothing you can do if it does.


Why think only about "incremental" improvement? People aren't just making slight tweaks, new papers are published at a remarkable rate where people try significantly different architectures, training methods, etc, and that steady progress leads to ever more impressive results. How can you assume this direction of research will lead nowhere?

OK, ignore everyone who doesn't understand the technology. Of those who do, I'm utterly amazed how pessimistic many are that this "isn't capable" of leading to AGI. Probably not Transformers specifically, but LLMs show that intelligence is remarkably easy. You don't even need to put anything in the neural architecture designed to perform reasoning tasks; they can be learnt regardless, because Transformers are flexible enough to learn to emulate computation (Turing machines) with bounded space and time, going beyond the famous result that 2-layer MLPs are universal function approximators.


> Probably not Transformers specially, but LLMs show that intelligence is remarkably easy.

LLMs show that language is remarkably easy. Ever since GPT-3 was released, I've been convinced that language comprehension isn't nearly as big a component of general intelligence as people are making it out to be. This makes some intuitive sense: I recall a writer for a tabloid expressing that they simply turn off their brain and start spinning up paragraphs.

But so far, I haven't seen any of these models perform logical reasoning, beyond basic memorization and reasoning by analogy. They can tell you all day what their "reasoning process" is, but the actual content of any step is simply something that looks like it would fit in that step. Where do you derive this confidence that advanced logical reasoning is a natural capability of transformer models? (Being capable of emulating finite Turing machines is hardly impressive: any sufficiently large finite circuit can do that.)


>Ever since GPT-3 was released, I've been convinced that language comprehension isn't nearly as big a component of general intelligence as people are making it out to be

"X is the key to intelligence"

computers do X

"Well actually, X isn't that hard..."

rinse and repeat 100x

At some point you have to stop and reflect on whether your concept of intelligence is faulty. All the milestones that came and went (arithmetic, simulations, chess, image recognition, language, etc) are all facets of intelligence. It's not that we're discovering intelligence isn't this or that computational feat, but that intelligence is just made up of many computational feats. Eventually we will have them all covered, much sooner than the naysayers think.


> All the milestones that came and went (arithmetic, simulations, chess, image recognition, language, etc) are all facets of intelligence.

Why should I have to care about those weird milestones that some other randos came up with once upon a time? I've never espoused any of those myself, so how is this supposed to prove anything about my thought process?

> It's not that we're discovering intelligence isn't this or that computational feat, but that intelligence is just made up of many computational feats. Eventually we will have them all covered, much sooner than the naysayers think.

Well, it certainly appears to me like there's a big qualitative difference between the capabilities you mentioned (arithmetic and simulations are just applications of predefined algorithms; chess, image recognition, and language are memorization, association, and analogy on a massive scale) and the kind of ad-hoc multi-step logical reasoning that I'd expect from any AGI. You can argue that the difference is purely illusory, but I'll have a very hard time believing that until I see it with my own eyes.


>so how is this supposed to prove anything about my thought process?

Because it's the same thought process that animated theorists of the past. Unless you have some novel argument to demonstrate why language isn't a feature of intelligence despite wide acceptance pre-LLMs, the claim can be dismissed as an instance of this pernicious pattern. Just because computers can do it and it isn't incomprehensibly complex doesn't mean it's not a feature of intelligence.

>Well, it certainly appears to me like there's a big qualitative difference between the capabilities you mentioned... and the kind of ad-hoc multi-step logical reasoning that I'd expect from any AGI.

I don't know what "qualitative" means here, but I agree there is a difference in kind of computation. But I expect multistep reasoning to just be variations of the kinds of computations we already know how to do. Multistep reasoning is a kind of search problem over semantic space. LLM's handle mapping the semantic space, and our knowledge from solving games can inform a kind of heuristic search. Multistep reasoning will fall to a meta-computational search through semantic space. ChatGPT can already do passable multistep reasoning when guided by the user. An architecture with a meta-computational control mechanism can learn to do this through self-supervision. The current limitations of LLMs are not due to fundamental limits of Transformers, but rather are architectural, as in the kinds of information flow paths that are allowed. In fact, I will be so bold as to say that such a meta-computational architecture will be conscious.
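
As a toy illustration of that framing (everything below is hypothetical; the stubs stand in for real model calls, not any actual system), a beam search over candidate reasoning steps might look like:

    // Toy sketch: multistep reasoning framed as a heuristic beam search over
    // partial reasoning chains. The two stubs below stand in for real model
    // calls (an LLM proposing steps, a learned scorer ranking chains).
    type Chain = string[];

    const proposeSteps = (chain: Chain): string[] =>
      ["expand lemma", "apply rule"].map((s) => `${s} (depth ${chain.length})`);
    const score = (chain: Chain): number => chain.join(" ").length; // dummy heuristic

    function beamSearch(problem: string, width = 4, depth = 5): Chain {
      let beam: Chain[] = [[problem]];
      for (let d = 0; d < depth; d++) {
        const candidates = beam.flatMap((chain) =>
          proposeSteps(chain).map((step) => [...chain, step]),
        );
        // Keep only the `width` highest-scoring partial chains.
        beam = candidates.sort((a, b) => score(b) - score(a)).slice(0, width);
      }
      return beam[0]; // the best chain found under the heuristic
    }

    console.log(beamSearch("prove X"));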


I think that's more representative of tabloid writers than anything, haha. Understanding text is difficult, and scales with g. GPT-3 can make us believe that it can comprehend text that falls in the median of internet content, and I guess there would have to be some edge cases addressed by the devs, but it can't convince humans that it understands more difficult content, or even content that isn't in its db.


I totally agree with your comments on language. I was stretching it to cover "intelligence" too, what I should have said is "many components of intelligence". It really isn't one thing. But I think analogical reasoning is one of the most important, maybe the most important component! I'm not alone. [1]

> Where do you derive this confidence that advanced logical reasoning is a natural capability of transformer models?

("Advanced logical reasoning" is asking a lot, more than I wanted to claim.) I was going off papers like [2] which showed very high accuracy for multi-hop reasoning by fine tuning RoBERTa-large on a synthetic dataset, including for more hops than seen in training (although experiments "suggests that our results are not specific to RoBERTa or transformers, although transformers learn the tasks more easily"). While [3] found "that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty" but "transformer models’ limited scale-invariance suggests they are far from learning robust deductive reasoning algorithms". I think that low scalability is to be expected, transformers don't have a working memory on which they can iterate learnt algorithmic steps, only a fixed number of steps can be learnt (as I was saying).

Unfortunately, looking for other papers, I found [4] which pours a lot of cold water on [2], saying "a deeper analysis reveals that they appear to overfit to superficial patterns in the data rather than acquiring the logical principles governing the reasoning in these fragments". I suppose you were more correct. I still think there's more than just memorisation happening here, and it isn't necessarily dissimilar to intuitive (rapid) 'reasoning' in humans, but as with everything in LLMs, everything is muddied because capability seems to be a continuum.

[1] Hofstadter, 2001, Analogy as the core of cognition, http://worrydream.com/refs/Hofstadter%20-%20Analogy%20as%20t...

[2] AI2, 2020, RuleTaker: Transformers as Soft Reasoners over Language, https://allenai.org/data/ruletaker

[3] Richardson et al., 2021, Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability, https://arxiv.org/abs/2112.09054

[4] Schlegel et al., 2022, Can Transformers Reason in Fragments of Natural Language?, https://arxiv.org/abs/2211.05417


I never said incremental improvements to LLMs won't lead anywhere, I said they won't replace me. A sibling has already commented on why that would be, and I agree with them.

I just wanted to chime in and remind about the other part of my argument: my job is not threatened until we have AGI, and AGI would be so earth-shattering to the entire premise of our economy that there's literally no point in worrying about it as an individual. We can and should talk about society-level changes like UBI, but having individual anxiety about your own personal job is a strange response to the end of the entire global economic system.


> You don't even need to put anything in the neural architecture designed to perform reasoning tasks, but they can be learnt...

That sounds interesting. Can you provide a reference to this research?


See my reply to sibling: https://news.ycombinator.com/item?id=34672865

A more interesting example of transformers learning a process may be [1].

There's a large literature on applying language models to reasoning tasks, but not many on what's actually going on inside them. But see for example [2]. Also https://transformer-circuits.pub/ has a body of work on it, but still at a very early stage (see in particular "In-context Learning and Induction Heads").

[1] Extraction of organic chemistry grammar from unsupervised learning of chemical reactions https://www.science.org/doi/10.1126/sciadv.abe4166

[2] Analyzing the Structure of Attention in a Transformer Language Model https://arxiv.org/abs/1906.04284


Yep, I subscribe to this viewpoint. I am not as smart as the creators of ChatGPT or whatever is to follow, so maybe we’ll get lucky that AGI is pretty smart but can’t improve itself. But I think in the general case, if we create AIs that can replace programmers, economic concerns aren’t going to matter.


Yes. And we'll be in the true exponential times, because humans will be building wonderfully complex software in an instant, so we'll build huge, robust, open ecosystems in just a few moments.

The programming job market going away will be the last thing on my mind at that time, I think it would be one of the biggest shifts in human history.


While I share your take on LLMs, I do worry that it may take a couple of years for the people who pay software engineers to figure out that they aren't replaceable yet.


As a programmer with a year as a data scientist under my belt, and some level of understanding of the current machine learning systems, I'm not worried.

The newer research papers coming out are where I would focus if I'm really worried. ChatGPT is really the industrial-grade implementation of ideas that aren't exactly new. And the idea itself (LLM) does not contain the ability to generate novel logic or solve unlearned problems. In fact, I would go further and say that it does not see your prompt input as a logical problem, but rather as a collection of words, and its output involves a transformation of words based on its extensive training set. It contains logic that was in the training material but does not itself add anything to it, with no guarantee of correctness, only training weights.

There is no guarantee that it didn't also train on the question part of Stack Overflow... and you only get answers as good as its training material.

Where I work, we attempt to automate away repeatable problems with deterministic frameworks, which are imo far better. I would agree that ChatGPT will probably improve my writing if I were to use it on emails and documents, but only in language, not in ideas.


> The newer research papers coming out are where I would focus if I'm really worried.

Great reply, could you expand on this?


I wouldn't know until they come out. I don't think there is anything that smells like artificial general intelligence right now; pattern matching is much more dominant today.


> Sure. But will that be the case 5-10 years from now?

Not with language models. A language model can parse natural language, and with enough training data, give out what it thinks the answer is based on the data it was trained with. It is not General AI.

It cannot reason its way to a solution for a problem with an unknown answer. It won't be able to reflect logically on a context to foresee problems within that context. It cannot have a meaningful conversation. It won't be able to understand that one of the things it "knows" is incomplete, untrue, or just plain wrong, and fix itself.

It's a powerful tool, a game-changing tool. Perhaps as game-changing as the advent of computers, internet, or wireless communication. But it still won't replace humans.

General AI for now is science fiction. Perhaps this is unfortunate. I wouldn't mind an AI that can replace humans, even if I too am made obsolete with it.


> General AI for now is science fiction. Perhaps this is unfortunate. I wouldn't mind an AI that can replace humans, even if I too am made obsolete with it.

Maybe I’m optimistic, but I feel like we need AGI to reach the next level of development as a civilization. If software engineering jobs are the price to pay, so be it. World hunger, medical science, energy, space travel: if we can get all of these to take a ride on something resembling Moore’s Law, we are in for one hell of a fantastic future in our lifetimes.


I actually think AGI will bring about the collapse of civilization, and perhaps the end of humanity. I'm okay with it.

I also think it will solve things such as energy and space travel. World hunger, medical science (among other human problems) will become meaningless.

"Rejoice glory is ours / Our young men have not died in vain / Their graves need no flowers / The tapes have recorded their names"


When ChatGPT was first released, I read a memorable comment on HN (paraphrasing from memory): It looks like AI is poised to take over all the things that I enjoyed doing as a human such as art, music, storytelling, teaching, programming.

There only needs to be one intelligent species in any ecosystem. If AI becomes that species, humans will be relegated to the role of horses - only good for menial physical labor. That is, of course, only until the AI invents cars.


My guess is that this will become a problem roughly around the time when automated theorem proving becomes viable.

Until then, language models just regurgitate sentences. Anything that relies on the sentences being logically coherent cannot rely on these models. This includes engineering, finance, and journalism (ideally, although I am aware of CNET's experiment).

After then, we salaried humans might be in a pickle.

Edit: sentence.


Another thing to consider is that increased programmer productivity for simple code may result in just more code.

If any given music synthesizer company could produce its own DAW (digital audio workstation) by employing a single programmer or a contractor, the demand for programmers could increase.

Just speculating, it's hard to know.


>Sure. But will that be the case 5-10 years from now?

A big part of the failings of cryptocurrency and self-driving cars was that predictions of mass disruption were contingent upon the tech improving linearly from the current state. That didn't happen.


Shouldn't non-programming office workers be more concerned is what I'm thinking.


When a program is smart enough to write code at least as usable as mine, I'll happily hang up my keyboard and take up gardening.

Until that happens, what's the point of worrying? The "year of the Linux desktop" joke got retired anyway, so let's use self-driving cars and "AI programmers" :)


Education faces the biggest problems of all. And it's self-inflicted. A large portion of what's required of students is listening to, regurgitating, and producing half-coherent bullshit. And so teachers will have a hard time verifying homework now.

What should happen is that teachers and students are required to up their skills to the point that students are required to produce things better than AI. But the likely result will be lots of futile surveillance.


I see several possible solutions. One could be the teacher typing the same question into the chatbot, finding some subtle but wrong results, and then looking for the same errors in the students' texts.


It was impossible to verify homework before ChatGPT. People just cheated off of the smart kid and copied the work. Fundamentally the same problem as ChatGPT.


I'm using it to cut my engineering time down. If you're not simultaneously elated and threatened, you're sleeping.

The AI wave is going to be bigger than the internet in terms of societal impact.

We're watching humanity's swan song in real time.


I don't understand this perspective at all. The hardest parts of software are nailing down requirements, scheduling work, and fitting square pegs into round holes for legacy systems at a conceptual level.

No offense, but if writing the actual code is the part slowing you down there's something very wrong.


I'll bet you that everything you just outlined as being "hard" will see startups launched and successful within 5-10 years.

> The hardest parts of software are nailing down requirements, scheduling work, and fitting square pegs into round holes for legacy systems at a conceptual level.

Then why do we hire PMs and have engineering ICs? The only reason any of this is hard is because 1) there are a lot of stakeholders and moving pieces and 2) humans are subject to context switching productivity losses. Machines will absolutely be able to inject themselves into these business processes and chip away at our inefficiencies.

I bet you you're wrong. I quit my $400k+/yr TC job to focus on AI because I believe in this so strongly.

Let's check back in five years.


Don't bother betting other people. Your entire life savings should be invested in AI right now with your level of confidence.


That's more or less what I'm doing.

I'm hiring, btw.


How do you expect to make a difference without having a mega-server farm to train the Next Big Thing?

From what I’ve seen so far, the ‘impactful’ models take tons of cash to initially train, tons of cash to continue to train, and tons of cash to provide “free” access to the public to get the hype train chugging along.

Not criticizing, but this is what I think people should be worried about: the “future” locked behind a handful of super-rich corporate firewalls. Not even mentioning that they have real-time data feeds on virtually everything they can pump through their AI for whatever purposes they can dream up. And I’m not even paranoid…


> Machines will absolutely be able to inject themselves into these business processes and chip away at our inefficiencies.

Actually, this I agree with. Chat bots in their current form would help bridge technical knowledge gaps between stakeholders.

Whether it would significantly slim down the less technical staff seems unlikely. Getting rid of engineering is even less likely.

> there are a lot of stakeholders and moving pieces

Yes. There are so many extending all the way out to internet discussions like this one and beyond to consumers, investors, etc... I think all this discussion is necessary to actually making anything of value. Will consulting chat bots instead of hacker news be the future, or is that where we admit there's something pathological going on?

> humans are subject to context switching productivity losses

I don't think letting people figure out what they want is a "productivity loss".


> The AI wave is going to be bigger

This (in context) is not the «AI wave». Artificial Intelligence is that which automates solution finding normally tasked to a professional. It has to be qualitatively adequate and competitive (which you could take as the real meaning of the "Turing test" - otherwise, "being scammed by con artists" was already a known factor and perspective at the time of the formulation). If any output fitted the definition, a random number generator (or a constant output) would fit as AI.

Go back to Popper (this name just for example): there has to be a metric for "success".


> swan song

Swan song that started as the singers started regarding IQs of 0 as acceptable actors.


Most of the actions you and I do are "0 IQ". There's a little bit of intelligence in the in between.


When I wrote about «act[or|ion]s», I of course meant the relevant ones,

and in general, actions are based - prescriptively - on concretions of layers of developed intelligence (wisdom), intelligence which is constantly applied and exercised (just with different occasional effort, for resource management). If somebody applies it sparingly, that is their own bad practice, and our damage - if they are active.

So, again: the "sub-bohemian" idea that "anything goes" is the swan song. Wait till you actually need intelligent action to prevent critical losses.

--

I will rephrase concisely: you are supposed to exercise intelligence in any action. If you do not, that is your vice.


Don't fear it. You need to understand it and use it in your career. The biggest fault it has is that it's not reliable, and you can't tell if what you are getting is fact or fiction. So the biggest beneficiaries will be those who know how to get the BS out of the results; if you know your subject, it will improve your productivity by a lot. A novice will get lost and be less productive.

There will be attempts at improving reliability but at some point it becomes too expensive and ineffective to try to improve it. People that know how to use it will really thrive.


> There will be attempts at improving reliability but at some point it becomes too expensive and ineffective to try to improve it.

At some point it will be self improving. Either through humans telling it to quit spewing bullshit or by learning fact-checking is a good system of improvement.


I really think that’s behind a lot of the negativity I see on HN. This tech is threatening, to jobs as we know them and maybe eventually to our sense of self. I view it like the introduction of computers into the workplace. Many people were left behind and made obsolete, never having a chance to be as productive as younger colleagues. AI collaboration could be a huge force multiplier, but some people, maybe myself included, will never get fully on board and will be left in the dust.


This Gizmodo article is pretty insightful on the complexities and use cases: https://gizmodo.com/chatgpt-gizmodo-artificial-intelligence-...

The problems have been well documented, and the part where model makes up info and presents it as fact is a huge problem. Whether it’s solvable is up for debate until it is actually solved.

What this article has shown is that the automation part can work well in somewhat limited circumstances, but in the end it is a form of automation and perhaps a novel user interface option. It is not creativity; it is not something genuinely new. Automation is powerful and it enables productivity gains, which is usually profitable.


I'd say if you're already working as a software engineer, you're one of the "safe" cohorts that made it through before the doors closed.

Because, by the time AI is good enough to be replacing developer jobs (5-10 years), you will be an experienced engineer who would be negotiating requirements, inputting the prompts, reviewing the output, working on the architecture and process framework within which the AI operates, etc.


I'm finding it interesting in that, while I likely get little benefit from using it as a tool in many respects, I will likely have to spend time working with/mentoring junior folks who rely heavily on it over time, and I will need to figure out what it's covering for them and what it leaves out... and whether I'm giving prompts to GPT or talking to a person so they can come up with GPT prompts.


At the moment it's too error-prone to do anything without human interaction. Progress has been fast recently but will slow down because there's less new data to train on and scaling hardware will get too expensive.


It won't threaten your job, it will do the opposite. It will make you 10x more productive and thus more valuable.


Or…

It will make 10% 10x more valuable and 90% obsolete.


That's the thing: the amount of programming work that needs to be done or can be done is probably 100x the amount that is being done.


No. This is a paradigm changing technology - the kind that happens once every 10 years or so. ChatGPT is going to change the entire landscape of knowledge work over the next few years and I'd rather be up to date with the advances even if it does mean I'll have to click the "More" button at the bottom of the page to see non-ChatGPT HN posts.


Strongly disagree. I think its fame will die out once people get tired of reading articles written by a soulless machine, or of being tricked into something that turned out to be GPT-made. That's why, even though Picasso paintings are awesome, 99.9% of the population does not have a printout of the Mona Lisa hanging on their wall.

No technology that blew up the way "ChatGPT" did has ever made a real impact on society. That's why my take is that it's all a scam. YouTube "ChatGPT" and you will see how everyone and their dog are making $50,000 a month in passive income using GPT. Of course they will tell you all their secrets... as long as you watch the commercials attached to their videos.


Picasso didn’t paint the Mona Lisa… did you fact check w chatgpt


Incredibly short sighted perspective.

ChatGPT-backed tools will become the norm in the same way that every knowledge worker uses Google on a daily basis today. Note that I did not say that people will use it to replace search - search is but a small bit of how it can be used.


I can see it helping with troubleshooting and quickly getting some code example, documentation or whatever without googling, if I didn't have to wait a minute for each answer. But calling that a paradigm shift is quite an exaggeration. The only thing ChatGPT is really reliable at is correct use of language. Everything else is fancy Lorem Ipsum to show off those capabilities. The answers make sense a lot of the time but the quality is simply too inconsistent for me to have a use for it.

On one hand those are huge advancements but at the same time just another kind of software that doesn't work properly when I need it to.


As an example: Grammarly is a company that makes money off helping users with "correct use of language". They make millions per year, and this is just one use case of ChatGPT. Other use cases: Copilot, AI personal assistant/scheduler, research summarizer (ex: Ought). Each of these is a multi-million dollar business at worst.

Let's not forget that this is iteration 1 of the technology. It is only going to get better and better. The fact that it can already do all these independent tasks without having seen additional data for the task is a complete game changer.


I wouldn't call Copilot a business until there's reasonable certainty that using it can't get you sued. This is a topic in the courts right now, and the question isn't even whether Microsoft and others have violated countless copyrights - they have - but whether they can get away with it.

> The fact that it can already do all these independent tasks without having seen additional data for the task is a complete game changer

There are some impressive examples, yes, but ironically they aren't among the ones getting hyped. Right now AI is only usable for one kind of application: where a few incorrect results don't matter at all. That means for me projects like Copilot are out - if I want buggy code I can just write it myself, and at least I understand my own code. ChatGPT quickly falls apart when you actually start testing it for something productive. I don't doubt others have some limited uses for it.


Are you debating that ChatGPT allows things that previously were impossible or whether these things are useful?


Well, the soulless articles really aren't the use case I'm interested in, and the release of ChatGPT already changed the way I work as a software engineer. So, not all scam. I hope to still have a tool like this at hand in the future.


Found the chatbot


I hope you're joking.

GPT-3 has been available for more than a year and there has been no large scale impact at all.

The few AI generated articles are easy to spot and full of mistakes.

ChatGPT is great at one thing: rambling.


Not attacking you personally, but if there's anything around ChatGPT I'm tired of it's these kinds of dismissive comments. "Oh, this is not new at all, in fact it's merely a trivial language model, and similar to other text generation software!" posts the contrarian! And the goal post moving: "Sure, it can now do A, B, and C, but ChatGPT is not perfect and can't do X, Y, and Z, so obviously it's overhyped and not even slightly amazing!"

The output of this software would have been considered absolutely incredible to most of the Computer Science field 10 years ago, and would have been considered sorcery 20 years ago.

Yes, if you're one of the 0.003% of the world who are AI researchers, ChatGPT is probably somewhere on the spectrum between boring and kinda-impressive. To the rest of us, it's astounding.


I disagree there has been no large scale impact. We're starting to see it roll out, and there is a ton of potential here.

Msft literally just rolled out Teams premium. That's not nothing IMO. We're seeing it start to get integrated in actual products.


Hype isn't impact or potential.

Microsoft is jumping on the hype train to make a quick buck out of a cheap API, why shouldn't they? That doesn't mean it actually brings any value.

It's exactly the same as every video game company trying to shoe-horn NFTs into their games.

As a side note, it doesn't look like Microsoft is using GPT for its Teams premium, it's mostly about voice recognition and things like that than content generation...


Remember when Microsoft, IBM and Facebook jumped on the blockchain bandwagon?


I'm only sick of the op-ed authors who reveal halfway through their column that "everything above was written by ChatGPT". It's no longer an original angle and seems pretty lazy at this point.


It’s a good thing, I’m seeing fewer JS frameworks getting created now


Hilarious. Made me laugh. I remember there was a time when every other article was about VR on poptech websites.


Of course, it's tough to get people to adopt a new JS framework if ChatGPT can't emit the code for it. Stack Overflow is so old fashioned now.


I bet you can ask ChatGPT to code a JS framework


I asked for some names:

"Naming a framework can be a crucial aspect of its success. It's important to choose a name that is memorable, catchy, and represents the core values and goals of the framework.

Here are a few suggestions:

Prodigy: A framework that aims to make web development simple and efficient.

Velocity: A fast-performing framework that emphasizes speed and agility.

Eon: A modern and innovative framework that incorporates the latest web development trends and technologies.

Apex: A framework that emphasizes scalability and reliability, helping developers build robust and scalable applications.

Fusion: A framework that seamlessly integrates different technologies, tools, and libraries to create a unified and seamless development experience."

I'm looking forward to hiring a prodigy.js expert with 10 years of prodigy.js experience.


>I bet you can ask ChatGPT to code a JS framework

Tried it, it did okay on some high-level stuff but couldn't get to working code: https://imgur.com/a/HModV0f

Do you think it's fair to hold it to the task of instantly creating a new JavaScript framework from scratch and implementing it? This is a task that multiple experts planning and collaborating in different areas of software engineering could perform together with extreme difficulty over a long period.

On the other hand, I was surprised that it came up with working, "correct" code for a difficult task:

https://imgur.com/a/Y2kumZq

Pastebin for the code:

https://pastebin.com/5JDvXtWi

I checked it and it is correct for n up to hundreds of thousands. (I'm not sure if that is really how matrix powers are supposed to work, but overall, its function returns correct exact solutions for the problem and worked without any modification.)
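
Illustrative aside, since the pastebin isn't reproduced above: exact answers for huge n via matrix powers typically come from exponentiation by squaring, which needs only O(log n) multiplications. Fibonacci is the classic example, though the actual task in the conversation may have differed. A hedged TypeScript sketch:

    // Hedged illustration (the linked pastebin task isn't shown here):
    // 2x2 matrix exponentiation by squaring gives exact bigint results,
    // e.g. fib(n) read off powers of [[1,1],[1,0]].
    type Mat = [bigint, bigint, bigint, bigint]; // row-major 2x2

    const mul = (a: Mat, b: Mat): Mat => [
      a[0] * b[0] + a[1] * b[2], a[0] * b[1] + a[1] * b[3],
      a[2] * b[0] + a[3] * b[2], a[2] * b[1] + a[3] * b[3],
    ];

    function matPow(m: Mat, n: number): Mat {
      let result: Mat = [1n, 0n, 0n, 1n]; // identity matrix
      while (n > 0) {
        if (n & 1) result = mul(result, m);
        m = mul(m, m); // square at each step: O(log n) multiplications
        n >>= 1;
      }
      return result;
    }

    // [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    const fib = (n: number): bigint => matPow([1n, 1n, 1n, 0n], n)[1];
    console.log(fib(10)); // 55n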

I find both conversations I just linked pretty impressive in their own ways.


Of course! I'd be happy to help you write a new JavaScript framework. Before we start, I'd like to ask a few questions to understand your goals and requirements for this framework: ....



I envy that you haven't gotten into the tech Twitter circle jerk yet. It's soul-crushing JS all the way down. Have you seen the TypeScript return-type debate yet? Gosh, I blocked them already.


I don’t like seeing hype articles about ChatGPT because I believe it is built on illegally monetizing intellectual property of others.

Sort of in line with the usual bait-and-switch behavior certain companies are well known for, except at a grander scale: first compel people to share information openly and publicly, then use it against them.

We wouldn’t give Google a free pass if it stopped sending us traffic and started only showing aggregated information on its pages. Heck, there was an outcry when they merely started showing questions before search results, even though they actually credited the website. Somehow we are fine when OpenAI, the for-profit company with the antithetical name, and Microsoft, its largest partner that plays with billions, do the same.


I think IP law is overwhelmingly bullshit, and it's exhausting that people keep calling this problematic based on baseless accusations with no legal precedent.

On the off chance there is at some point some case law that builds around this - who cares. IP laws are garbage and problematic and in 99.9% of cases only serve to enrich millionaires and billionaires.


IMO recognition of intellectual property is precisely why Western society has excelled in innovation so far, and ironically, without that being the case, the very tech powering products like ChatGPT would not exist.


In 99.9999% of cases, the people doing the actual innovative work are not the people who hold the patents. These people would do this work, patent or not


You're mixing up patents and copyright. It's not required for you to patent everything you publish; you simply own it, and people are not allowed (by law) to just take it and use it in whatever way they want (e.g., alter it and pass it off as their own).

You may allow this by distributing it under a permissive license, or the platform you use to publish it may have terms that give it certain rights to your work, but by default you own the work and can fully control how it is allowed to be used.

That is the principle OpenAI is basically throwing out the window.

People who do the actual work do so because they take pride in it. The work is their child, and they feel valued because people know and appreciate them for doing it. If you believe that most people who do the actual original work do so without any pride or concern for recognition or control over their work, then you are living in a different world than I.


I think the ones who believe ChatGPT can write great prose haven't read actual great prose in years.


I'm also annoyed by the hype. For me, it's a combination of two things:

1. I'm just naturally averse to any sort of hype. Maybe something I should talk to a therapist about, because it's only rational to a degree. I've been building software for decades, and almost every hype that came up meant more boring/annoying work for me, usually around resolving misconceptions non-technical people picked up from the news, rewarded only by the grim satisfaction that people finally get it a few years later ("blockchain technology" is my favourite example here).

2. I can only see the bad LLMs can bring: A further decline of quality in a ton of areas, having to deal with worse writing and user interfaces in my private life, and worse code and tools in my professional life. It's just largely dystopian in my mind, I don't see any benefits. Code writing speed is hardly a bottleneck for the kind of work I do, I actually find it to be a small, particularly satisfying part of it. Is it more fun for a violinist to not touch the instrument and instead just tell a machine what to play? Certainly not for me. Generating elaborate wiki pages and SEO spam? Hell no, it's bad enough without LLM assistance.

The strategy I developed over the years is to learn enough about the hyped thing to feel like I don't miss out on information I need to have in case it comes up, and then to largely ignore it. Things are rarely eaten as hot as they are cooked. Sometimes, I start to get into the hyped thing a little later on the hype cycle, having waited for the early adopters to figure out what it's actually valuable for. At that point I usually am not appalled by it anymore. There's nothing wrong with _not_ being an early adopter of something. And it's also not as if you can do anything to stop it, whatever is gonna happen, happens. All you can control is how you react to that, in what you do, as well as emotionally.


The problem is that our CTO, with ChatGPT/GPT-3 in mind, is thinking of "shrinking" our team of 1 leader + 3 senior devs + 19 junior devs down to 1 leader/senior dev + 3 junior devs. From what I heard, the idea was suggested from above. They really "believe" that the bulk of the code can be generated by AI, and that the team only has to manage the implementation/integration/holistic view. They will make a plan to achieve this by Nov 2023.

I hear this trend is everywhere at multinational corps in our country.


You're either lying or have the dumbest CTO known to humanity.


Not lying, my company (ex-bank) has a new CTO with background in... agricultural engineering.


Apparently your CTO hasn't realized yet that using ChatGPT will render your company's code public domain, since in many jurisdictions anything not created by a human being cannot be protected by copyright law.


If this is genuine then your CTO has no business being a CTO.


I have seen 2-3 CTOs of this kind in my circle. Basically co-founders or friends of a rich CEO.


You got it! ;)


Over the years I have come to the conclusion that the T in CTO could stand for literally anything; it's just a coincidence that the role happens to lead the technical organization. It could literally be anyone with some master's degree in management or similar. The bigger the company, the more it's just an administrative job. Most of their work is delegated to some VP who, if you're lucky, will have some technical background.

The CTO is just a guy keeping the balance sheet in order and nodding at what Sales/CFO/CRO says.


He's been our CTO for 6 months and has only one goal: reduce costs. Since he came, his obsession has been that all IT teams under his coordination are "too fat". The bad part is that the hierarchy trusts him (for a reason). It's the Balkans here... anything goes.


You should split your team into three subteams for more efficient communication and replace the CTO with a chatbot.


The C in CTO is for ChatGPT!


I'm not surprised at all at this point, even if it's getting worse for reasons like that. Look at job posts today! Many CTOs are incompetent. I'll be an early adopter of some tool X and run into all its gotchas, and then I see these technical leads put X on their stack, which they will regret later.


Incompetent leadership aside, it nevertheless seems that a wave of layoffs in the IT sector will follow the ChatGPT "revolution", at least in this region. This tool has put cost-reduction ideas into many heads. I'm hearing too many rumors around for them not to be true.

Not only devs, but also designers, testers and data analysts.


It's easier for ChatGPT to replace MBAs than developers ;)


I think you have really come to the wrong conclusion. I don’t know how much impact this tech will ultimately have, but it is extremely capable and surprising. It might not be the same as a reliable expert or highly regarded textbook, but it is capable of giving a very specific answer to a question you have, like a middle of the road private tutor. If you do any kind of writing, it can slam through writing blocks. It can pair program like a beast. As long as YOU know what you are doing it can accelerate you.

Most people are very intimidated by a blank page or empty code editor. Most people need a helping hand and someone to help them navigate a problem. Most people have a lot of difficulty producing individual work on their own.

Think of this like flipping the other end of a ping pong table up so you can bounce the ball back. Sure there’s no real person there, your shots are just being reflected back, but without something to bounce against, there’s no way you can play solo. Maybe that’s not you, but it is a lot of people.

These recent AI developments are the discoveries of the decade. Let folks have their hype.


I majored in Cognitive Science back in the early 90's at UCSD. I got pretty disillusioned at the combination of barking up the wrong tree, model-wise (The Bitter Lesson* had not yet been learned) and the just-not-powerful-enough compute we had available to truly test more interesting methods. Switched to dev work to pay the bills, but always hoped a real breakthrough would happen in my lifetime.

The past six months have had me excited about tech for the first time in decades. I'm just happy to be here, experiencing it.

* http://www.incompleteideas.net/IncIdeas/BitterLesson.html


Have you ever in your life seen the release of a tool that can save about 30 minutes per day in most office jobs, and possibly more in the future? I haven't, and I never expected I would.


Google search?

Dropbox/google drive/etc?

Git maybe, although obviously not for all office jobs.

GPS navigation?

Keurigs and instant pots?

I strayed a bit from “office jobs” but yes I have lived through a number of things that save 30 minutes a day for a lot of people.


I'm not pro or anti this, just mildly interested. So, how can it save 30 mins per day from most office jobs?

I get how it could replace certain jobs almost entirely, if those jobs involve churning out a lot of bullshit text. For anything that actually matters, checking/fixing ChatGPT output is going to take roughly as long as doing the research/writing oneself to begin with.


> I'm not pro or anti this, just mildly interested. So, how can it save 30 mins per day from most office jobs?

It could help write emails faster, especially ones that require a lot of care. Or summarize meeting notes.
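
Even a thin wrapper over the API can do that today. A rough Python sketch against OpenAI's completions endpoint (the model name is an assumption and will surely change):

    # Minimal sketch: summarize meeting notes via the OpenAI API.
    # Assumes the openai package and an API key; model name as of early 2023.
    import openai

    openai.api_key = "sk-..."  # your key here

    def summarize(notes: str) -> str:
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt="Summarize these meeting notes as short bullet points:\n\n" + notes,
            max_tokens=256,
            temperature=0.2,  # keep it factual rather than creative
        )
        return response["choices"][0]["text"].strip()

    print(summarize("Alice: ship Friday. Bob: blocked on review. Carol: hiring update."))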


How so? What takes long is the time it takes to reflect on what you want to express (basically what you would prompt ChatGPT with), not the actual writing.


> I get how it could replace certain jobs almost entirely, if those jobs involve churning out a lot of bullshit text

Poster's name of 'sanitycheck' happily checks.


I'm old enough to remember when Zoom started saving me and all of my coworkers 60 to 90 minutes per day.


I get what you're saying, but who is saving time here? If I spend one minute having ChatGPT write a message to someone else (a real person), that someone else has to spend more time reading my email than I spent writing it. This adds a superfluous layer over what otherwise would have been direct and clear communication (a.k.a. the "meeting that should have been an email" scenario).


Is remote work a tool? It saved me waaaay more than 30 minutes a day. And those were unpaid hours too.


Spreadsheets and word processors saved a lot of time compared to their analogue versions.


Meh. It’s a normal part of the hype cycle around a piece of tech that is having its big moment on the stage. Some years back the HN front page was all Docker all the time.

I think it’s actually not a bad thing if there are ‘too many’ posts around a particular piece of tech. If something sticks for a while I usually take it as signal to investigate. Other things have a brief moment of hype but don’t actually stick around too long if you watch the ebb and flow.

Instead of getting upset, consider spending less time on here. :)


ChatGPT has way broader public appeal than Docker and that's why it's frustrating - a bunch of fluff articles cashing in.


Don't worry. The overload will disappear in a month, and there will be only a monthly article when someone discovers something new. It happens all the time. For example, after an Apple event, the front page is all about Apple for a week.

I prefer/recommend not using the "hide" button too much on the front page, but in some cases you can use it for a week if the ChatGPT posts get too annoying.


Meh, new MacBooks got announced a few weeks ago and that was buried pretty quickly, even though they're probably the best computers you can buy.


The M2 MacBooks were a boring update, though. The front page around the M1 release was a lot different.


On the one hand yes. Enough already.

But on the other hand, no. This is actually a significant technological development that has real implications, positive and negative, for our entire society. It's not some BS nonsense like blockchain. This is almost as big as the rise of smartphones. We need to be talking about it.


That's right, blockchain is there to unchain everyone from the banking systems, openAI unchains us all from the need of people altogether. Can't wait to get food drone-delivered before I even know I'm getting hungry.


In short order I expect it will be clear this technology is much bigger than smartphones.


No, you are not the only one. And in fact I had the same thought that bullshitters might particularly appreciate ChatGPT since it does what they do for them. Quite useful in their opinion.


When given a hammer, everything looks like a nail. ChatGPT is a really convincing hammer that is still pretty unreliable.

But I'm worried about the point where a future ChatGPT is no longer that unreliable, when millions of people given that hammer will feel justified in ranting about it and injecting it into every random forum thread and comment section forever, just because they can.

Yes, I'm tired of it, because it's not an endgame tool but many treat it like it is. And I have a feeling I'm going to get even more tired of it the more it improves - that is, the more people it's able to convince it's no longer bullshit.


It’s way better than the constant headlines about Twitter drama.


Okay folks, buckle up, because I've got a doozy of a story to share with you all in regards to the ChatGPT headlines. So, I was at a conference the other day, and one of the speakers was demonstrating a language model like ChatGPT. And of course, being the AI enthusiast that I am, I was eagerly paying attention. But then, the speaker asked the model to generate a response to the prompt "Why did the chicken cross the road?" And you won't believe what the model came up with: "To get to the other side, where the grass is greener and the AI is more advanced." I mean, talk about a smart-aleck response!

But here's the thing, that little anecdote perfectly exemplifies the excitement and novelty surrounding language models like ChatGPT. Sure, some of the answers generated might seem like "smart sounding bullshit," but that's part of the charm! We're still discovering the capabilities of these models, and it's exhilarating to imagine what else they might be able to do in the future.

So let's not get tired of the ChatGPT headlines. Instead, let's continue to engage in discussions and explore the potential of AI technology. The future is full of opportunities, and language models like ChatGPT are going to play a major role in shaping that future. Who knows, maybe one day they'll even be able to write their own hilarious chicken crossing the road jokes!


"you won't believe what the model came up with: "To get to the other side, where the grass is greener and the AI is more advanced."

I would be impressed if the LLM were a fresh, never-before-seen blob. These models have been in development and massaged by humans for years. It wouldn't surprise me if one of the developers added that line to the system as a test case or an easter egg.

You have to keep in mind that these apps have zero intelligence. They are not smart. This is the hype cycle of these products. The AI companies need money, customers and developers so they are trying to get everyone excited about the product.

OpenAI's CEO is talking about how General AI will change everything but I'm sure he knows that LLMs will not get us there. On a scale of 1 to 10 we are at 1, if that. They are impressive but they have no cognition.


It's definitely starting to feel a bit like crypto. The hustlers take less and less time to hop onto new trends these days.


There are definitely hustlers on any new social phenomenon, but the difference between crypto and LLMs is one of them actually serves a useful purpose. I've written regular expressions, found hard to spot bugs, and summarized meeting notes using GPT in a fraction of the time it would normally take.

That's genuine utility value. The only utility I've ever had using crypto is buying mushrooms over the internet. Now with them decriminalized in my state and easy 1-2-3 spore kits available, I don't even need it for that any more.


I'm pretty tired of seeing the ChatGPT headlines - right up there with Harry and Meghan, and Kim and KKKanye.

There is nothing intelligent about ChatGPT; it's just a statistical trick with no understanding and no unique ideas. It's this year's self-driving car.

Speaking of which, I've been waiting for my self-driving car since 1982. At this rate, better make it a hearse.


The self-driving AI hearse: you feed a model all of your driving routes over your entire lifetime, then put your body into an autonomous hearse that drives your corpse to where it probably wants to go.


I'm sick of hearing about "original ideas" as if the typical HN commenter has very many at all. You don't need to have "original ideas" in order to create immense value. Everyone will sit here making a few hundred grand a year and tell themselves it's because of their original ideas. It honestly probably isn't. New automated systems that create immense value are worth a bit of fanfare. Self-driving has a uniquely harsh failure mode, which means a worthwhile implementation has to be rock solid. And frankly we're all sick of hearing Elon Musk pretend otherwise. Outside of self-driving, "less than perfect AI" has wayyyy more opportunity to provide net value. Not being able to see this difference is comparing surface-level properties of two things and drawing a conclusion: the sort of stuff this current generation of AI can do fairly well.


It's so exhausting and overdiscussed that I made an HN reader app for myself just to add topic filters

(kind of a plug; I haven't updated the repo in a while and just run it locally).

https://github.com/rsimmons1/FlutterHnReader
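
The filtering itself is the trivial part; conceptually it's just this (a rough Python sketch of the idea against the public HN API, not the actual Dart code from the repo):

    # Concept sketch: front page minus kill words, via the HN Firebase API.
    import requests

    KILL_WORDS = {"chatgpt", "gpt-3", "openai"}  # whatever you're tired of

    def filtered_titles(limit=30):
        ids = requests.get(
            "https://hacker-news.firebaseio.com/v0/topstories.json").json()
        for story_id in ids[:limit]:
            item = requests.get(
                "https://hacker-news.firebaseio.com/v0/item/%d.json" % story_id).json()
            title = (item or {}).get("title", "")
            if title and not any(w in title.lower() for w in KILL_WORDS):
                yield title

    for t in filtered_titles():
        print(t)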


I am tired of the articles because they are like 90% redundant.

Plus, on a different note, I fear this mega hype will eventually reach the people who make decisions in companies, most of whom, in my experience, do not possess proper technical knowledge, and trigger them to buy shitty AI-generated/based products out of an obsession with cutting costs, scaling the company, and replacing people.

The way I see it, medium/big companies already do, or will, personalize this AI for their own domains and package it for end users, who will then have to interact not with a human but with a machine that can't yet reason very well.

How often do we complain that "algorithms" deleted an account and all that? I imagine ChatGPT will make all this, and much worse, available on a much larger scale.

This is also why I find these articles pretty annoying.


I asked ChatGPT to comment on your post, this is what came out:

As an AI language model, I don't have feelings or personal opinions. My purpose is to assist and provide information. If you're feeling overwhelmed by ChatGPT headlines, perhaps take a break and come back later.


Yes and no. Some are still amusing in a perverted sort of way.

Like CNET generating their articles with ChatGPT... from the few times I've read CNET lately, I bet that didn't change their article quality one bit. They were blogspam, they stayed blogspam.


I agree, but I'm intrigued by the _patterns_ of headlines.

At first, they were all in the format of "I asked ChatGPT to write a blog post, here it is" and "I used ChatGPT to talk to my mom so I don't have to"

Then it quickly moved onto people pushing out prototypes "Use ChatGPT to generate pick-up lines" and "ChatGPT as a therapist"

And now, more recently, it's all about taking down ChatGPT. ChatGPT is flawed, ChatGPT can be detected, ChatGPT is a dirty liar, or some other library is better than ChatGPT.


Yes, I agree with you. I think this is WAY overrated. I also think this is as good as ChatGPT will get; a lot of people seem to think it will get better and better.


>I also think this is as good as ChatGPT will get

You mean that language models will never get better than ChatGPT? Or that ChatGPT itself won't be improved?

Because I guess the latter could end up being true if OpenAI decides to call whatever user-facing tool GPT-4 powers something other than ChatGPT.


The post titles are a bit repetitive, but I find the conversation with HN folks dissecting the applicability and limitations of LLMs pretty interesting.


In January of 1806, in the article "Essay on False Genius" in the European Magazine and London Review, there was a statement that there is one fool born every minute. Funny how 200-some years later that has not changed much. Artificial intelligence is still artificial, no matter how intelligent it seems. A good predictor, yes; 100% accurate, no. It could be a billion-dollar effort or a late-night experiment trying to pass itself off as the REAL thing to replace the human brain. I will choose to see it as a scene from a circus, telling us of the greatness we are about to witness, and shake my head at the failures that have not lived up to their grandeur. Patience, my friend; someday we will exist as a utopian society, someday.


A big part of the reason why these posts float around on the front page is because they get engagement. A whole lot of the engagement is exactly the sort of engagement your post is getting. You’ve invited another ChatGPT conversation, exactly the same as all the others. Your post is the same as myriad first-level comments on all of the posts you’re referring to.

I’m sick to death of seeing the concentric circles of HN readers parroting the same critiques of ChatGPT over and over again. “It’s a language model!” they’ll say, and with each iteration there’s a higher chance that the person that writes that has no idea what the implications are. The comments usually end with a hand-wavey explanation as to how ChatGPT having flaws means that it won’t be remotely disruptive. There’s usually some weird elitist / classist vibe to the comments (effectively: “none of the jobs that actually matter will be impacted”). Then there’s some other hand-wavey blah-blah said about creativity. As if the typical HN user, myself included, isn’t a rank and file software developer / manager at Tech Company 1527.

The funniest thing about it is that these comments are all written presumably at least in part because the writer thinks that they’ll provide value. You wouldn’t even need GPT-3 to construct one of these comments. Previous-generation models could do it.

I work in the education sector. I’ve no doubt that ChatGPT et al are going to have a long-term impact on the product that I work on. There are always kids that want to find ways to cheat. Before I get lectured by someone with absolutely no relevant experience: ChatGPT can write wholly believable essays in various genres.

We’ve all been worn down by AI hype for years and years now. We’ve especially been worn down by all of the Elon-bait around FSD. To let that evolve into “be unproductively critical of every AI advancement” is…not original thought.

I’m sick of the recycled BS ChatGPT think-pieces yep. But they’re no different to the comments that put them in the front page, including my comment. They’re also no different to a bunch of BS thinkpiece articles on HN.


Yes. Some have been hyped so much that they make absurd claims about AGI having been established because of it.

It's certainly entertaining, though Midjourney is more exciting imo, but there are far more challenging problems in computer science, and the sciences in general, than NLP.


Large language models in their current form are limited, but within those limits lie a thousand use cases. This is unlike blockchain. Expect to hear more and more about specific use cases after the people pointing out the obvious limits have run out of steam. (Oh, it can't construct a novel proof? Why would you even think it could?) For example, I think many books will be written with the help of large language models, especially memoirs. Speeches will be written with LLMs. Comedians will flesh out jokes by giving LLMs the setup and the punchline and ask for the rest to be filled in. Memos and documents that aim to be spare, information dense, and accurate don't benefit much from LLMs.


I'm not, but I'm also not terminally online. I've only recently had time to start playing around with ChatGPT myself, so I'm still enjoying the novelty of it and reading about others' use cases and experiences.


I want to know more, actually. I want good use cases for it. For me it's like being given Google in 1995. How can I use it? What's it good for, what's it bad for, and how do I formulate good queries?


ChatGPT might be the next tech gold rush so a lot of people who come to Hacker News have a vested interest in hyping it up.

“ChatGPT but for whatever” will probably be the most effective way to separate venture capitalists from their money in the near future, much like “crypto for whatever” and “Uber but for whatever” and “an app but for whatever” in the past.


Yeah, we're in the hype stage right now with it: the OpenAIs/DALL-Es/Midjourneys/etc. There are also a lot of unsavory types (i.e. on 4chan) who want it to be something because they want to see creative types (e.g. artists) suffer and lose gainful employment. There are so many different sides to it, but ultimately I think in a year or so the hype will settle, headlines will fade, and it will begin to settle into the niches it excels at while ultimately not changing a whole lot in the bigger picture.


Not really. The more attention it gets, the more incentive to make it better.

Unrelated, the irony of this post made me chuckle. By posting this, you're adding to the ChatGPT "headlines".


I'm not tired of ChatGPT headlines, but I'm definitely tired of the mass of people that think it is way more than what it actually is (spitting out text based on statistics).

People see it as the second coming of Christ, as the Google killer, as a disruptor of educational systems, etc., when all it is is something that's good at making bullshit look legit.

AI has always followed a cycle of hype and then total lack of interest once people realize the promises are empty. ChatGPT isn't changing that.


ChatGPT is like Google/StackOverflow/Wikipedia on steroids, and it will only get better over time. There might also be specialized models for particular jobs. This will save people time on a lot of routine work and allow individuals to create apps that would have taken groups of people to make in the past...

Personally, I like it and see it as a big win, given that tech, at least from my average-joe perspective, seemed to have stagnated for years...


It's the current trend. It will vanish with time, like AI-generated pictures, Musk and Twitter, Ukraine, covid...

Yes, trends are usually not very "deep" or intellectually enriching.


Yes.

Thanks to Firefox + uBlock Origin, _the name of the CEO of Twitter and SpaceX_ has completely vanished from my everyday browsing.

I can do the same for ChatGPT. Or any other thing or topic that switches from initial hotness to constant noise.
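
For anyone who wants to replicate this on HN specifically, a procedural cosmetic filter along these lines in uBlock Origin's "My filters" does the trick (a sketch; the class names match HN's current markup and may change):

    ! Hide HN story rows whose title matches a pattern
    news.ycombinator.com##.athing:has(.titleline:has-text(/chatgpt/i))

(The points/comments subtext is a separate table row, so hiding that too takes another rule.)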

edit: it is so effective that it even affects my own commenting here. I had to replace his actual name with "the name of the CEO of Twitter and SpaceX" to be able to post my reply.


If we remove ChatGPT, "<existing software> written in Rust", and "X fired Y% of their workforce", there wouldn't be much left.


Yes, I'm tired. It's basically become a buzzword for something whose capabilities sound more and more overblown.

Just waiting on that bubble to burst.


No. This stuff is cool. If our species spent more time building, and _caring_ (most importantly), about stuff like this, then we'd be on our way to the utopia we dream of.

Some people may be annoyed by it I suppose, same as people get annoyed when they see new JS frameworks/libs.


If there's something HN loves more than anything else, it's buzzwords. Right now it's ChatGPT, but it was Deep Learning, Machine Learning, Blockchain, etc. ChatGPT is almost a welcome reprieve from "How to use Machine Learning to wipe your ass better" and "Why you should integrate the blockchain when selling hookers in Second Life."


I think ChatGPT is controversial because, like other past controversial technologies, we don't know whether it will be good or bad for society on the whole.

If you don't notice how much money-momentum is behind it, you're burying your head in the sand. I think it's an interesting time because we are seeing how the market squeezes value out of emerging technology.


Nope. I'm interested to see the ways in which this will change our labor market, consumption patterns, and society. I've heard from people who are using it in their jobs and I'm curious how I could use it too. I'm also interested to know how it will affect future generations, and what can be done to prepare students for an AI-enabled economy.


I've reduced my Twitter time because of it. These posts have also been all over HN recently, so I seem to spend less time reading HN as well.


I am tired of them and consider them infomercials.


Were you not here when it first launched? I feel it has calmed down now and is at acceptable levels.

It's an amazing tool nonetheless.


Reminds me of 3D printers, bitcoin, and other fads, though this one seems more promising for practical use; we will see a few years from now... Judging by perplexity.ai's results, it's pretty much useless to me for specific questions.

Anyone remember that voice conversation app everyone was pushing and I can't even remember its name?

edit: Clubhouse


Yes.

Yes, ChatGPT is largely overrated. Yet it is still a modern marvel. It WILL change the world.

But most of the stories about it are still garbage and I'm very tired of endless breathless and misleading headlines.

Anyway, Chinese balloons are infinitely more important.


Yes. We all know the power of ChatGPT, or even GPT-3 itself, how impressive it is, and so on. But seriously, it's something we all anticipated would happen, and science keeps getting exponentially better. It's the era of AI, I get it, but how many times can you repeat the same headline?


I ignore them all. The bad thing is that I've also started seeing them in non-tech focused sub-reddits.


I love ChatGPT. It does a lot of “dreaming” but it is generally helpful. It is a bit like having a friendly know-it-all who has read all of StackOverflow but can’t remember everything in detail (yet supremely confident nonetheless).

What problem domain do you use it in that you find underwhelming?


I signed up for GPT, asked two questions, and that was the end of it.

The answers were "...it is not scientifically proven" BS.

Something I have already proven with my own life, GPT is not willing to answer and risk its good, politically correct name for.

Google is here to stay until people are ready to give up spoon-fed BS.


If by Google you mean Brave Search, Yandex, Kagi and others then yes, Google results are extremely censored.


well, what did you ask?


Definitely something racist or otherwise offensive.


Yes I am.

Though I'm not sure it's quite as bad as the countless Wordle headlines from a couple years ago.


I feel like it will always get traction, as it preys on devs' underlying fear about job security.

I have genuinely found it incredible in certain situations. I don't feel like I'm getting as much value from it as others might be, so for the moment I like the discussion.


Well, it also preys on the fear of writing emails and docs, and might help devs do more dev stuff if they want, so there's that


Absolutely not. This is a pivotal moment in culture and I'm coming here partly to be kept up to the minute on it. As technologists we owe it to ourselves to be aware. So much is changing every day. I feel like I'm not consuming enough.


I like seeing how people use it creatively, seeing its limitations, and seeing what will come next.


I treat it the same way as "XXX written in ZZZ" posts. Sometimes I read one if XXX looks like it may be interesting, but mostly I just skip them. Hence I'm not really tired, as I can always read something else. Posts on all kinds of things are plentiful here.


Well, a few years ago it was 80% blockchain headlines; at least GPT has practical uses. If it bothers you, set yourself up with a custom RSS reader that has kill words. That's what I did when I was tired of crypto headlines.
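
If you want to roll your own, the kill-word part is a few lines, e.g. with Python's feedparser and the unofficial hnrss.org feed (both assumptions about your setup):

    # Sketch: kill-word filtering over an RSS feed.
    import feedparser

    KILL_WORDS = ("chatgpt", "openai", "gpt-4")

    feed = feedparser.parse("https://hnrss.org/frontpage")
    for entry in feed.entries:
        # Keep only entries whose titles avoid all kill words.
        if not any(w in entry.title.lower() for w in KILL_WORDS):
            print(entry.title, "->", entry.link)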


Absolutely. We’re simply in the “post links for karma” phase, but it’s still annoying.


.. nope. block)element("*gpt" replace"//_____//"


>the people who like to make smart sounding bullshit

that's most of the ML/AI community


Not even a little bit. We've only scratched the surface of what tech like this can do.

There are some tropes that need to go away, though. Most of them some form of lame middlebrow dismissal.

- "It doesn't actually understand": nobody outside of a small niche cares how the sausage is made, you are not making a substantive observation that hasn't already been made by 100,000 other people.

- "Most of the answers are bullshit": factually incorrect and the growing sidebar of my chat history with it can prove as much.

- "Okay if not bullshit then so riddled with errors as to not save any time or be useful" see above.

I would suggest that people who have such a hard time with content they are not personally interested in to make use of the hide button rather than spewing hate all over every related thread.


INT. CONFERENCE HALL - DAY

Paul Graham (PG) walks onto the stage, the audience applauds.

PG: Thank you, thank you. Good afternoon everyone, it's great to be here today to talk about AI. But today, I want to focus on a specific type of AI that has taken the world by storm - ChatGPT.

Cut to:

INT. NEWSROOM - DAY

A JOURNALIST is typing away on his computer. He looks up, frustrated.

JOURNALIST: (to himself) Another day, another set of ChatGPT headlines. I mean, what's not to love?

Cut to:

INT. CONFERENCE HALL - DAY

PG: Now, I know what you're all thinking. "Oh no, not another ChatGPT talk." But I want to give you a different perspective.

Cut to:

INT. DINNER PARTY - NIGHT

PG, surrounded by friends and colleagues, is holding court.

PG: You know what, let me tell you a story. I was at a dinner party the other night, and someone brought up ChatGPT.

FRIEND: (excitedly) Oh, I love ChatGPT!

PG: (smiling) Well, I decided to have a little fun and ask the model some absurd questions. And you know what it responded with?

FRIEND: What?

PG: (grinning) "Why did the tomato turn red?" "Because it saw the salad dressing!"

Everyone at the table erupts in laughter.

Cut to:

INT. CALL CENTER - DAY

PG is observing a customer service representative at work.

PG: (to himself) And then there was the time I watched a machine powered by ChatGPT resolve an issue faster and more efficiently than any human representative.

CUSTOMER SERVICE REPRESENTATIVE: (smiling) Well, it looks like ChatGPT has done it again.

PG: (nodding) Indeed.

Cut to:

INT. CONFERENCE HALL - DAY

PG: So, instead of complaining about the constant stream of ChatGPT headlines, why not embrace the excitement and keep pushing the boundaries of what's possible with AI technology? The future is a blank canvas, and it's up to us to paint it with the limitless potential of AI. And who knows, maybe one day, these models will be able to write the next great American novel, or maybe even perform open-heart surgery. The possibilities are truly endless, even if it means cracking a joke or two along the way.

The audience erupts in applause, and PG takes a bow. The scene ends.


No not really. It's fun to learn the limitations, and figure out how to exploit its good qualities. It can't be funny, but it can describe concepts in different levels of detail


Seeing all the headlines of little MVPs that people have hacked together once a new tech arrives is what hacker news is all about

So no, more crazy MVPs and open source libraries please


I am very interested in learning the various open source packages useful to build apps around GPTx, so instead of being annoyed I quietly bookmark these.


In a way, we've been reading ChatGPT headlines for years.

Every moderately large news organization has been using A/B testing for over a decade to choose how to present a headline, with sometimes tens or hundreds of possibilities being quickly swapped out and tested as people click, training "models" for presenting the catchiest headlines.

Now we just have a "confidence percentage" before actually publishing.
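
The mechanics are simple enough to fit in a toy sketch, essentially an epsilon-greedy bandit over headline variants (illustrative only; real newsroom systems are fancier):

    # Toy epsilon-greedy bandit over headline variants.
    import random

    class HeadlinePicker:
        def __init__(self, variants, epsilon=0.1):
            self.stats = {v: [0, 0] for v in variants}  # variant -> [clicks, views]
            self.epsilon = epsilon

        def pick(self):
            if random.random() < self.epsilon:  # occasionally explore
                return random.choice(list(self.stats))
            # Otherwise exploit the best click-through rate so far.
            return max(self.stats,
                       key=lambda v: self.stats[v][0] / max(1, self.stats[v][1]))

        def record(self, variant, clicked):
            self.stats[variant][1] += 1
            if clicked:
                self.stats[variant][0] += 1

    picker = HeadlinePicker(["ChatGPT explained", "You won't believe what ChatGPT did"])
    shown = picker.pick()
    picker.record(shown, clicked=random.random() < 0.3)  # simulated reader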


I am fine with the posts, but if I see one more "Let's see what ChatGPT has to say about it" comment, I am gonna get this account banned.


I've been ignoring all news about this thing, here and everywhere, almost since the hype started. It just seems so boring.


I'm not tired of the headlines. But I do find them somewhat depressing in that some people don't seem to realize that it is an incredible BS generator, and instead treat it like an incredible knowledge engine.

It literally does not "know" or "understand" what is true or false. It is just capable of imitating true-sounding things (which often happen to be true). That's a massive difference, and it's dangerous that people take what it says as true.


I dislike the low effort content.

"I asked ChatGPT this question and now its my medium column"

etc etc

It just seems at the moment that most of the content is like that.


We are currently experiencing the greatest revolution in technology, perhaps since computers, perhaps ever. People are not going to stop talking about it. As the technology becomes more sophisticated, it's going to subsume everything.

We have created intelligence. We aren't alone in the universe anymore. At this stage, it's still an "incredibly well-read moron", but it's still intelligence.


Don't worry. By the end of this year we are either in the next AI winter or we're past the singularity.


Yes. It is already one solution in search of a problem, like VR and augmented reality. An eternal fad.


I believe that OpenAI has the potential to become the next Google if management doesn't fuck it up


I'm more likely to believe that MS manages to make Bing a viable competitor via OpenAI's tech.


I dunno, the results from perplexity.ai are even more unusable than Google's, and that's saying something, so the technology still seems years away from providing decent results.


It's far easier for Google to copy ChatGPT than it would be for OpenAI to copy Google,

especially because it's based on Google tech.

And Google already has Gmail/GSuite, Android, Maps, and YouTube.


Yes, so, so much. I couldn't care less about it. I just want to read about something interesting.


Not really, the technology has some very interesting potential, especially in the helpdesk space.


Ask HN: Are you tired of headlines asking if you are tired of reading ${POPULAR_THING}


Aside: I dig your username! ;)


thanks! It's fun to try to pronounce. Do it enough and you will never forget the layers of the OSI model again!


It's burned into my brain because I recited it over and over learning it for uni exams years ago haha!


Not at all. Imagine being tired of "alien headlines" if aliens made contact.


I haven't seen a product more talked about in mass media than this one in the last decade. It's safe to assume that this is the defining event in the history of AI that changed it all, one of the breakthroughs on par with the invention of the internet as we know it.


Not as tired as I am about the constant complaining about it.


Better than Elon headlines. This too shall pass.


Nice try, OpenAI


Not really, no. It's an awesome tech.


Not at all, if it's an original take.


I am not


No.


Would you rather go back to Elon Musk?


Yes.


yes


I never understand the point of these kinds of submissions. HN isn't for you alone, and there's a whole voting system. All you're going to get here, at best, are a handful of likeminded people who agree with you, reinforcing your bubble even further.

Just vote and move on, hide the submissions if you have to.


I'm more tired of the headlines trying to downplay how amazing it is.


There are plenty of people who think "it" is overrated. Those people are mistaken, but by the time they catch up and say "oh, wow, this is a total disaster, how could I have known?" it will be too late. At least they were tired of the headlines.


I understand your opinion about ChatGPT and its impact on the headlines. It certainly appears to be a popular topic of discussion, but I think it is important to take time to evaluate the accuracy of the answers it provides and the impact it has.


written by chatgpt?


No, this is the biggest technological breakthrough of our lives, for better or for worse.


I have read this exact sentiment about blockchain and crypto so many times in my life.

I lived through the rise of the web, mobile internet, and the smartphone so I have my doubts about ChatGPT being the biggest technological breakthrough of my life.

Not to mention all of the things outside the realm of computer science competing for that title. Fusion power, solar power, alternatives to antibiotics, cures for diseases that kill millions, etc.


Yeah seriously, unless you're really young, the internet and the smartphone are absolutely seismic cultural shifts that changed everyone's lives. Compare 2023 to 1993.

LLMs have a long, long way to go to get anywhere near that. All the predictions about replacing jobs are making dubious assumptions about radical improvements over the current state of the art. Maybe that will happen, maybe it won't.


> LLMs have a long, long way to go to get anywhere near that.

The question in my mind is: is the curve just linear or is it getting closer to exponential at this point?


There isn't a curve at all. It's discrete steps. Which is why so many people freak out when there's a step change, because they extrapolate the existence of a curve where there isn't one.


Come on, a curve can be drawn over the steps. I'm a maths dumb dumb but..

https://en.wikipedia.org/wiki/Interpolation

Here is ChatGPT's response fwiw.

Prompt: is the curve of improvement in LLMs linear or more exponential?

ChatGPT: The improvement in language models has generally followed an exponential trend. With the advent of deep learning and the increasing availability of large amounts of data, the performance of language models has improved dramatically in recent years, leading to breakthroughs in various NLP tasks. However, it is important to note that the improvement curve of language models is influenced by multiple factors, including the amount of training data, the size and complexity of the model, and the computational resources available. The improvement curve of language models may also become more linear as the state of the art approaches certain limits.


Got caught by edit time limits, mid-edit.

> The improvement curve of language models may also become more linear as the state of the art approaches certain limits.

This part of ChatGPT's response is the crux of my original question.

It might be notable that I was able to focus and possibly learn from the response to my question via interaction with an LLM, and not via interaction with a human.


A smartphone without internet doesn't have much use, so what you really mean is "the internet and convenient mobile internet access", not the smartphone. We had phones with mobile internet before, but it was quite inconvenient before big capacitive touch screens; even resistive touch screens were a PITA. Imagine someone staring at a smartphone without internet.


Smartphones have come and will go, but AI will be with us forever. I don't see any fusion power being used, but I do see a chatbot that can do most of what I can.



