Hacker News
Ask HN: Has your technical blog seen a drop in traffic since ChatGPT released?
4 points by boristsr on March 25, 2023 | 3 comments
I’ve noticed a fairly clear drop in traffic to my most popular articles, which relate to Jenkins, since ChatGPT launched. It has made me question whether there is any point in writing a blog anymore if LLMs will just consume it and feed it to users in their own words. Is there a point in writing for robots?

I’m curious if others have noticed this, and if it’s a wider trend.

If this is a wider trend, how will this affect the proliferation of new tech information, guides and tutorials? Will this affect the future of LLMs if there is less source data to train on?




I’m torn on the “writing for robots” question. On the one hand, you’re writing a blog for the internet, and people/robots are free to do what they want with that info.

But on the other hand, I’ve written technical training documents on niche topics. A lot of work goes into them, so I sympathize with wanting some reward for the effort, and ChatGPT probably negates that reward. In the case of a blog, the reward is traffic, conversation, and notoriety in the community.

The best way to fix this (in my opinion) is that, if asked, ChatGPT should tell you where it got its information and give credit where it’s due.

That, or something like this in robots.txt:

  User-agent: openai
  Disallow: /


> I’m torn on the “writing for robots.” On one hand you’re writing a blog for the internet and people/robots are free to do what they want with that info.

Yep, this is true. I didn’t mean it in the sense of blocking robots, but in a practical sense: if traffic continues falling and the only traffic you see is robots, you are in effect writing for robots. And that’s when the rewards you outlined no longer apply.

I hope Bing and ChatGPT continue the work on citing sources. I believe that will go some way toward helping the situation.


I noticed it's affected my thinking about online community participation far more than my thinking about blogging.

In tech community settings, I've been told that some of my comments must be from an AI at least five to ten times by now, for various reasons, some more understandable than others (though editing a comment to sound less like an AI is generally off the table).

What's interesting to me is that this is now happening with comments that used to be what I call "helveticomments" (ctrl-f my profile). Specialized, specific, or novel information that isn't widely known now seems to draw additional scrutiny, as if it were created out of whole cloth.

On its own, that kind of curiosity is not a bad thing at all. But I've noticed that at that point it's curiosity about something other than the topic at hand. Et voilà: such circumstances beautifully redirect thinking from focused and open to diffuse and negative. (I wonder: does speculating about whether content came from an AI affect learning outcomes? Seems like this could be measured and tested.)

My reaction is that it's not worth replying to such commentary with more information or clarification. For one, replying that way generally breaks the de facto rules of commentary; for another, when I have replied in the past, the knowledge or info was still measured against an "is this common knowledge" bell curve rather than "does this make sense" or "is this interesting".

Combined with the fact that I have more control over the headers, noai meta tags, content-retention policies, and other practices on my blog, this means that while I may be less likely to post really good stuff there sometimes, I'm guessing I'll be far less likely to share the really good stuff in tech community settings than on my blog.
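
For what it's worth, the opt-out signals I mean look roughly like this. Note that "noai" is an informal convention that only some crawlers honor, not a web standard, so treat this as a sketch rather than a guarantee. The meta tag goes in a page's head:

  <meta name="robots" content="noai, noimageai">

and the equivalent HTTP response header, which also covers non-HTML files, is:

  X-Robots-Tag: noai

On a blog you control, you can set both; on a community site, you can set neither.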

So I really wonder if the future of "AI sucking" will just look like tons of bland, ad-integrated, AI-generalist material taking the place of the current ad-riddled SEO sites.

In such a setting there will be a general feeling of FOMO around fresh, new, direct, specific, and even counter-rational knowledge. Finding sources that can give this qualitative feel will become more important, for starters.

I've also thought about ways tutorial sites and other tech sites can really benefit from this trend. I can see some pretty huge opportunities for content creators who are flexible about working with, and against, LLMs in different ways at the same time. IMO it will help to look at one's content as a network in format. It will also help to find new ways to send a reasonably reliable signal about responsible AI usage, and the nature of that usage.

This kind of thinking, to me, is where the next batch of content creation books and tutorials will start to focus.

LLMs will likely also be limited by factors like persistent prompt-hackers causing internal policies to turn the dial toward bland help, eventual government regulation (all over the planet) limiting what can be said and how, and consumers finding some possibly mega-harmful new equivalent of "Google Maps made me drive into a lake".

(BTW, remember j0hnny? Look at the dates of related publications for an idea of how that relevance trend played out, while many of the mind-blowing aspects of Google search were gradually moderated away [1].)

Anyway, just some thoughts on the matter...good q.

1. https://en.m.wikipedia.org/wiki/Johnny_Long



