Hacker News

Publish a blog so an LLM can use it and make money for someone already wealthy.



You laugh, but ChatGPT "Deep Research" cites my technical blog very frequently. With attribution, no less. Within the next ~2 years, I expect that most of the people who read my blog will find it via an LLM.


I remember when something similar first happened to me: an LLM (Perplexity) cited a Wikipedia edit ("transistor density") I had made just a few days earlier.

This helped me realize how powerful the legacy of Wikipedia is and will be. When I first started editing (2006), people still didn't trust open-source encyclopedias. Same thing with Bitcoin, just a few years later.


And it will never attribute anything to you, though I'm sure ChatGPT itself will be cited. If OpenAI claims that DeepSeek is stealing their IP, is DeepSeek stealing your IP, or is OpenAI?


Obviously facetious, but I'd hope most people would want their blog post read by LLMs.

I suspect (hope) a lot of blog posts are written to share knowledge. In that regard, having that knowledge trained into LLMs (ideally open ones, but even closed ones) could further that goal.


What was knowledge sharing is now called "making Zuck and Sammy richer."


I pretty much hate the mindset of "don't do something if it might make someone else money," even if you otherwise want to do it. It's one of several issues I have with the non-commercial Creative Commons license.


I didn't say don't do something. I said do something and understand a modern consequence. Go ahead and publish, but don't be surprised when your work isn't attributed correctly in the future and empowers people you may not agree with.

Blogs aren’t the only form of publishing and sharing.

Maybe after the AI winter subsides it will make sense again. A world where people were encouraged to publish in order to share knowledge became one where you publish to support Google's hold on the net, and has now evolved into one where you publish so people can read 5% of what you wrote, distilled through statistical summarization.

It makes all long-form content look bad, not just the bad long-form content. It continues the societal trend of consuming only short-form content, which in turn enables reactionary, low-information behavior instead of critical thought.


> Go ahead and publish, but don't be surprised when your work isn't attributed correctly in the future and empowers people you may not agree with.

Why would I be surprised, when this has already been happening for a very long time? What exactly is the modern part here?


What might not be modern to you might be modern to someone less technically adept.




