ChatGPT user sessions are down 29% since May (sparktoro.com)
80 points by nickwritesit on Aug 30, 2023 | 80 comments



This is not as concerning as it might seem.

First, massive hype drove tons of users to try this flashy new thing. The talk of the town. But the vast majority of them were not genuinely looking to integrate a new tool into their toolbox. They just wanted to know what all the fuss was about.

We see the same thing with Meta's Threads app. Massive hype created a huge spike at first, but the majority of those users (myself included in this case) had no real intention of making Threads a daily part of their life. They were just curious to see what the big deal was.

I'm willing to bet (but obviously don't have the data) that the number of users who use it multiple times per week (if not multiple times per day) is climbing, and climbing fast.

Second, I'm less interested in knowing ChatGPT's adoption rate than I am in generative AI's adoption rate overall. ChatGPT was the introduction for most people, but for those of us who chose to make it a core part of our toolbox we quickly discovered a wide array of other incredible generative AI apps.

Finally, I think anyone who is actively using generative AI for their work is never going to go back. It's too darn useful. It may take a while for the general public to wrap their heads around how to make this a part of their lives, but for those of us who have, the benefits are profound. We see the reality of it and don't need trend numbers to grok what's happening on the ground.


It's replaced StackOverflow for me.

Not having to deal with the "Why are you doing it this way?" attitude thrown at you while standing in front of an online audience has eased my anxiety about programming.

With GPT, having a non-judgmental "you could do it that way, but what about this way?" is really pleasant. Especially for someone who's not a dev/programmer.


It's replaced SO on both the low and high end of the complexity curve for me.

"Low" -> GPT presents much more succinct answers with less to wade through for things like "XYZ Error".

"High" -> I'm able to "prep" GPT by asking about certain topics and how to accomplish them and then have some back and forth to discern a plausible solution in a way that's much more contextual.

In both cases, the other big win is that I don't have to abstract my questions into some generic but representative case; I can just provide exactly the code or scenario I'm working with and get answers back. Huge productivity gains.


This is pretty much how I feel too.

It's very weird to think that I've chosen to replace actual humans with a bot because the bot is nicer.


I don't have any trouble at all wrapping my head around the idea of a chatbot. I just don't have any use for it. And this may also be true for a large part of the general public.


I use both GPT-based search (<5% of the time), which allows me to ask lazy questions, and summarisers. They save me a fair amount of scrolling through search engine results.

I would be very interested in an integration of Wolfram Alpha with an LLM; I heard Stephen Wolfram talk about this on Lex Fridman’s podcast. Essentially, I still open Python as my calculator and my search engine for currency conversion; quick access to data sets and ease of composition are not in my toolbelt yet.


> I would be very interested in an integration with Wolfram Alpha and an LLM

GPT-4 already has a Wolfram Alpha plugin.


ChatGPT used to be the only ball in the game of soccer; now we're playing dodgeball, and there are multiple balls. Personally, I use generative AI daily but very rarely ChatGPT directly.

For recent info/searches/up-to-date knowledge I'll use Phind, or sometimes Bing as an alternative.

I use Claude for writing; it's way more creative/expressive. I've almost written a full novel using it, and it actually doesn't suck like most AI-generated things. Albeit, I'm breaking things down into pages and being very specific on minor details, etc. It's just helping craft the visual imagery, like descriptive writing, and I'll add my own flair on some parts that I'm more passionate about.

I use Amazon CodeWhisperer for coding, and some other VS Code extensions that aren't OpenAI-related and do a pretty decent job.

I think the question should be: has the use of generative AI built on language models increased or decreased?

Since you mentioned Meta's Threads: this would be akin to asking whether social media posts across Threads and Twitter have averaged less than before Threads, about the same, or more.


Curious, did you try GitHub Copilot before switching to Amazon?

If you have, what are the differences?


Thanks for the heads up on Claude; I attempted something similar with ChatGPT in the early days but was limited to one paragraph at a time, and it kept injecting a lot of repetitive noise, like the character constantly looking forward to the challenge ahead.


Courses that can help people work with genAI are sorely needed. At the same time, hardware/software integration with these technologies is really lacking. There hasn't been an iPhone moment for generative AI yet.


Most web sites would kill to have a >10% retention rate. It sounds like both ChatGPT and Threads are well above that number.


I haven't found any good use cases for ChatGPT.

I tried using it as a replacement for a search engine, but having to double- and triple-check the generated output became tiresome.

It kind of works for writing simple functions, but anything beyond that I found it gets wrong. Again, double/triple checking became tiresome.

I wonder if any other professional fields have found a way to improve their workflows with it?


> I wonder if any other professional fields have found a way to improve their workflows with it?

It’s absolutely incredible for content creation in a few ways:

1. For content posts, a good prompt can turn a 2-hour writing session into a 15-minute editing session.

2. Creating texts in foreign languages. This can be content, ads… anything. It’s still better to run whatever you make by a native speaker, but they typically only need to edit rather than create. As such, the time/cost per unit of content is much lower.

3. Combine AIs to make multimedia content — for example, one AI for a script, another AI for voices, and another AI for images. We have only scratched the surface of what is possible here.

Fwiw, I just started a project using multiple AIs to develop content for foreign language learning. It’s incredibly powerful and ridiculously fast compared to previously existing methods.


> but having to double- and triple-check the generated output became tiresome.

There are things that you can verify instantly. Example: "in GNU units, how do you convert days to hours and minutes?" After three Google searches I could not find the syntax for using multiple output units. ChatGPT got it in one try.


Thank you for mentioning this example. I never knew units had that feature! (Also, to save the next person the few seconds to try out the most plausible symbols, it's "You want: hours;minutes" with semicolon.)
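
For anyone curious, the full interactive session then looks roughly like this (a sketch; exact output formatting may vary by GNU units version):

    $ units
    You have: 3 days
    You want: hours;minutes
            72 hours + 0 minutes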


I find the problem with the errors (in 3.5) is that they're not the kind of human-made error we're used to detecting, so the process of parsing the responses feels more arduous and sometimes confusing.


One good use case my friend shared was "create a list of the CEO/CFO/COB of the following companies in this format (CSV file format)" followed by a list of companies.

Someone was going to write a scraper to look all this information up from company sites or SEC filings, but ChatGPT-4 was able to do it in less than 5 minutes (of course, YMMV on veracity, but in this case an 80-90% solution was a much better start than spending a day scraping hundreds of sites).


Are you a paid subscriber? I.e., are you using the GPT-4 model?


GPT-4 gets stuff wrong in the same way; you just have to pay $20 a month for the privilege.


In my experience, going from 3.5 to 4 is going from generally useless novelty to indispensable team member. Yes, it gets things wrong; so do I! My experience of working with GPT-4 (except for a couple of weeks back when they made some change that made it SUPER dumb, from which they seem to have recovered now) is that it's a bit like having a mid-level staff member that understands instructions really well, works almost instantly, and sometimes makes mistakes.


One brilliant use for it is to rephrase your emails in corpo speak.

Write the short email: "where the fuck is X, you promised it would be here by Monday, I will complain to management if you don't deliver"

Then use ChatGPT to convert it into a polite reminder email that still hints you will escalate to management, without any potential HR issues.


I use it for everything: random questions/research, generating code, documentation, short stories for the kids/fun, etc. "Good enough" is quite often good enough.

It has only three current limits, or I'd use it even more:

- Length

- Limited ability to import data/documents (Data Analysis is restricted)

- Nothing newer than 2021.


Twice a month I'm writing some contrived recursive function or basic unit test suite and think, "Oh boy, actually something I can ask ChatGPT."

Otherwise, no.


Try phind.com; it's ChatGPT with better search.


School (secondary and university) is out for summer break...


This is the major explanation I've heard. With Fall semester starting, I wouldn't be surprised if it ramped back up considerably.


Exactly. Let's compare April with October and we'll have a meaningful statistic there.


A country-level breakdown of user sessions would easily show whether the decrease is due to school seasons ;)


I've heard this excuse before, but aren't many schools back already?


> I've heard this excuse before, but aren't many schools back already?

Today (August 30), yes, most are back.

But the stats are for July, when (other than summer sessions, which are usually optional or remedial) most schools are between academic years (even "year round" schools often have a somewhat longer break from late June through July than their other breaks, to accommodate a brief optional/remedial summer session). Most schools return for a new school year sometime in August or occasionally early September.


In North America (at least Canada; I assume the US too?) the school year starts next week, after Labor Day. While I know of staggered school returns in some parts of Europe, and some might be back in session already, universities often don't start lectures until October.


No? In the US, nearly all schools start in late August or early September.


Apparently (according to a 2023 survey [1]) the median is more like mid-August (Aug 14-18), and a couple even start in July :o

(Your point is still right though, <5% had started school when the ChatGPT data was collected)

[1] https://www.pewresearch.org/short-reads/2023/08/25/back-to-s...


The article does not say exactly what was measured, but suggests it measured web traffic (browser sessions, I assume).

It’s very bold to state “ChatGPT user sessions went down by X” when not taking into account:

- iOS app launch (May ‘23)

- Android app launch (July ‘23)

- GPT-3.5 Turbo and GPT-4 API general availability (July ‘23)

- the explosion of apps that users shift to after 3.5-turbo/4 API GA

I worked with ChatGPT’s web interface daily before the mobile app launched, and my usage is now at, say, 70% web / 30% app. Plus countless extra “sessions” integrated directly into my code editor (Cursor).


I think it remains to be seen just how profitable these tools will be. The open-source community's efforts have been amazing, and they will likely erode the possible profits of megacorp solutions... at least, I hope that turns out to be the case.


Someone from Meta alluded to it the other day, and apparently there will be some talk about exactly what their strategy is with all this at Meta Connect (in September), but it totally looks like some sort of defensive moat. Rather like how Google gave away Android as part of the effort to stop Windows Phone.

The community has absolutely run far further with all the various models than I think anyone thought reasonably possible, and at least I am phenomenally grateful that this entire episode has not pushed us over into absolute cloud dependency, even if it all started in deeply confused circumstances.


I think hardware companies will make a killing, and maybe cloud providers who can deploy the hardware at scale and lease availability. Pure research, however, seems likely to end up mostly in academia and open-source circles eventually.

Maybe there's money to be made in integrations, such as Visual Studio?


A stream of venture capital that keeps thinking this is profitable, funding companies that push the research envelope, go bankrupt, and see their work reach the public (intentionally or not), might be the best thing the open-source community could hope for.


It would be cosmic justice for companies that looted the open source community to build proprietary products off the backs of that labor to have to go open source themselves because nobody is willing to pay them for their software.


What I really like ChatGPT for is "reverse definition search". It's really good at reminding me of the name of some concept when I've forgotten it and don't remember it well enough to find anything on Google. To give a silly example:

-What's the name for that thing that is on the bottom of the room again?

-It sounds like you're referring to the "floor." The floor is the flat, usually horizontal surface that makes up the bottom of a room or any other space. Is there anything specific you'd like to know about it?


Doesn't seem too bad, imo.

The "wow" and "hype" factors are gone/dying down now, but I still think it's an incredibly useful tool.


Good point. If only 29% of ChatGPT use was due to hype, given the incredible level of hype around AI, that speaks to a pretty solid base.


Yep. We're probably seeing a reversion to the mean, and it doesn't look too bad for ChatGPT.

I think the main issue they'll have is competition with other models rather than AI tools going away. If they can lock users in and keep innovating, I think they'll be in a fine position.


Only 29% is pretty good, I would say. I personally would much prefer to run a dumber local LLM that hallucinates a bit and isn’t too woke.


Then use LLaMA?


FWIW, from my own experience, I used ChatGPT-4 way, way more at launch. But the thing is, the way I used it for programming, it taught me SO much over the course of a few months. I'm still using those patterns, practices, and insights in my day-to-day work; I just don't need to interrogate ChatGPT as much anymore.

I never really used it in a "here's something I want you to write, go do it" way. I used it as a tutor: I asked an insane number of questions, "pair-programmed" with it around a lot of concepts, took its code, rewrote it, fed it back, and asked more questions.

When I worked at Microsoft at 18, there was a legendary engineer who blocked off one or two 90-minute sessions per week for me to just ask questions and for us to pair program. That was the most productive (with regard to skill development/learning) period of my life. Nearly 15 years later, I think my April-through-June time with ChatGPT (and again, how I used it) leveled me up in a similar way.

I don't really have a strong point to make, just that, for me, it leveled up certain edge skills like crazy, and I don't really need it as much anymore (...until I need to learn a new concept/language/whatever). It's wishful thinking, but I'd like to imagine the decline is due to "AI"-to-human knowledge transfer.


Anecdotal, but I moved my "ChatGPT" usage to Bing Chat, now use Bard regularly, and signed up for Copilot, which I use daily. Basically, I have spread my usage around. My $0 paid to ChatGPT is now paid through Bing Chat (MSFT => OpenAI) and my $10 Copilot subscription (MSFT => OpenAI). So I guess they are ahead as far as I'm concerned.


Copilot models are not OpenAI, IIRC.


Github Copilot FAQ: "GitHub Copilot is powered by a generative AI model developed by GitHub, OpenAI, and Microsoft."


I am wrong. Please disregard.


I stopped using it because my bank kept putting a hold on my card every time ChatGPT tried to bill me each month. For whatever reason, my bank hates ChatGPT, and dealing with this every month just wasn't worth the hassle, apparently. (I say this because if it were worth the hassle to me, I would have found a workaround by now.)


Oh no, the Effective Altruists have infiltrated our financial infrastructure. Truly, they will stop at nothing.


For quick programming answers I used ChatGPT for a while, but honestly it's more in my flow to just use Google. That alone would probably cause my usage to decline.

But for anything Stack Overflow-like, where I'm really stuck and need to formulate a question, I just use ChatGPT now instead of dealing with a snarky mod...


I got bored with 3.5 and its bizarre range of errors, but I'm sure I'll dive in again when 4 is made public. I imagine interest will continue to wax and wane with different releases, especially ones with newsworthy improvements.


4 is public...


Do you have a link? On OpenAI it's $20/month.


public != free


It's become way less useful than when it first came out. Now I cringe at the thought of asking it any question, because it just says "well, as an LLM I can't answer that with any information you'd care about".


There was a lot of novelty use of ChatGPT when it was getting a lot of exposure. I think it introduced a lot of people to the technology; now the people who actually have a use for it are the ones still using it.


Best explanation yet.


This is a bullshit metric. ChatGPT isn't the technological advancement; the question should be: has the use of LLMs increased or decreased, and by how much, since May? People are testing/using other models, tools, etc. That doesn't mean the concept is flawed, or that it's going out of style. It just means people want the best results and feel that ChatGPT is maybe not as good as it was in March.

Phind and Claude 2, IMHO, are often much better than Bing/ChatGPT for most of my use cases.


Wouldn’t you expect this though? This is measuring direct user sessions. As people build on top of the API, it will displace direct usage with more customised usage. For instance, instead of going to ChatGPT and asking it to write something then copying and pasting it into a word processor, people use things like Notion that use the API behind the scenes.


This is a very good point. There are many other ways to use OpenAI's GPT models, including Bing Chat, Perplexity, Phind, and You.com, to name just a few. Some of these services focus on the coding use case and may be more fit for purpose there.


Curious to see how API usage compares. I imagine those are the most durable/high-value use cases for OpenAI as a business.


Anyone have data on that?


Wasn't it maybe OpenAI's switch to GPT-3.5 and locking GPT-4 behind the subscription? I remember not realizing for a while and thinking, damn, this thing is getting more and more stupid: I give it 3 prompts and it forgets 1, then I correct it on that 1 and it forgets the other 2.

Generally, people can tell the difference after a while and it loses its wow factor, methinks.


I'm very skeptical about the origins of this dataset. What is Datos and where do they get their data from?


Exactly. It's just a profile of people who would opt in to this kind of tracking, God knows through what kind of dodgy wrapper. It's like inferring statistics about frontend framework popularity from a question on the second step of a checkout form in some ad on SourceForge.

Chat is just the beginning; there are infinite streams of money to be generated. Probably the first interesting dozen will come out of things like githubnext.com.


$20/month is too much for a tool I don't use that heavily.

From the comments here I ended up finding MacGPT: https://www.macgpt.com

With it I can use the OpenAI API, which costs about half a cent per message, instead of the web UI.
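
For reference, a minimal sketch of what such an API call looks like (assuming the 2023-era 0.x `openai` Python package; the key and prompt here are placeholders, not anything MacGPT-specific):

    import openai  # pip install openai (the 0.x-era SDK)

    openai.api_key = "sk-..."  # placeholder; use your own API key

    # A single "message" to gpt-3.5-turbo. At 2023 pricing (roughly
    # $0.002 per 1K tokens), a short exchange like this costs a
    # fraction of a cent.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize this paragraph: ..."}],
    )
    print(resp.choices[0].message["content"])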


Seems natural as more people figure out what it’s good and not good at.

It’s not the magic oracle that AI influencers want it to be, but it’s also a useful, sometimes very useful, tool.


I don't think user sessions matter as much. The market value of ChatGPT is going to be a product of users × how often they use it × how much value it provides.

If they lose all the people trying to write poetry and retain all the expensive engineers leveraging it to save $100 an hour times hundreds of hours, then they'll be able to capture a slice of a fat market.


Sessions are down but they are apparently raking in the cash.

https://the-decoder.com/openai-shatters-revenue-expectations...


Raking in revenue, but they are burning cash like crazy.


Sessions are a real-time metric but revenue updates are quarterly, so revenue will lag behind user engagement metrics. That, and/or enterprise customers signing up in droves, which drives revenue up but doesn't guarantee an increase in user engagement.


Hm, this is at the bottom of the second page when a minute ago it was at the top of the first page? Doesn't seem particularly contentious or anything.


Enterprises have started providing their own private ChatGPT via the OpenAI service; it's 3.5 Turbo, but quite good.


Curious why this was downvoted; is it incorrect in some way? Anecdotally, my org is doing just this, though I'm not sure whether that does or does not count toward whatever metrics were reported in the OP (it mentions traffic to that site in particular).


No time to elaborate, but rework this. It's flawed in many ways.


And API usage?



