Ask HN: Have you reduced technical knowledge contributions?
2 points by confoundingfac 7 months ago | hide | past | favorite | 13 comments
Do you find yourself thinking "what's the point, some AI will slurp it up and regurgitate it without attribution" when starting to type out a technical answer on a forum? Or have you reduced your technical help and assistance on Stack Overflow for this reason?



> Do you find yourself thinking "what's the point, some AI will slurp it up and regurgitate it without attribution" when starting to type out a technical answer on a forum?

Not really. "This will get incorporated into other people's knowledge and repeated without specific attribution if it is useful" has always been part of my baseline background assumption (not really front of mind) whenever I share knowledge in any context, so I'm not sure why the fact that the process might now involve generative AI would change the equation at all.

I suppose if “sharing knowledge” was just a tactic for seeking credit / personal brand marketing, that would be different, but...


I have felt similar to this, but the culture on this site is rather pro-AI on this point, hence some of the other rude replies. I think a lot of this kind of sharing is moving, and will keep moving, off the open web to places where helping others is more personal and meaningful.


No, but I've put some projects that were at one time too complex to handle myself back in view, with the idea that I can use AI to meaningfully fill in the gaps.

Like others mentioned, I've felt for many, many years that everything I post gets thrown into something somewhere for profit; the new AI thing is just another facet of that to me.


No. I was very aware of how exploitative the Google economy was more than 10 years ago. See

http://www.seobook.com/blog

Maybe it just stings for me because of the way one of my businesses failed, and how mad my wife was at me during the few years it took me to pay off our HELOC.

Getting mad about generative AI is like getting upset in 2024 about Martin Luther King getting shot or Japan invading Pearl Harbor. AI might be a good reason not to post about assembly language programming for the C-64 or how to make web pages with Flash and ColdFusion.

Why do you post to Reddit or Hacker News when it is helping somebody else get (or stay) rich?

The advantage of generative A.I. is that it is not so two-sided, so it will be a lot more competitive: today you are getting ripped off by a company in Silicon Valley, but when the cost-effectiveness of LLMs improves by 100x you will also be getting ripped off by people who live in places you have never heard of.


HN is different: it's open access and downloadable. Reddit, as an example, sells its data and isn't scraping-friendly.


HN "exploits" its community by building the street cred of Y Co and by being a venue where Y Co startups can advertise for help. It doesn't bother me, or I wouldn't be here, but a certain person could say it is some rich white (and Asian) dudes benefiting from it all.

As a "hacker" I feel open access to the data is "fair," but I think a much less technical person might not care whether the surplus is reaped by anyone with a web crawler or by Reddit's administration.


The points given to a comment aren't public. That information would be highly valuable for training an LLM.


That’s interesting.

I have predictive models that can predict whether a headline (without the rest of the article and not considering the URL) will (a) get more than 10 votes and (b), if it does get more than 10 votes, whether the votes/comments ratio will be more than 2 (which is roughly average).

The first model gets a ROC-AUC (see https://scikit-learn.org/stable/modules/generated/sklearn.me...) in the low 60s (not good), the second model gets in the low 70s (actually pretty good, though it is a heat-seeking missile for clickbait headlines), and my latest content-based recommender for RSS items gets almost 80. (I saw a paper claiming that one system at TikTok gets about 85.)

To do all that you need about 10,000 headlines, and you don't get a lot of benefit from having more than 100,000. The ceilings on performance have more to do with the nature of the problem than with my models: the same article can get submitted twice and get 0 votes one time and 200 the other, so prediction can never be as accurate as "is this an article about galactic astronomy?"
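A minimal sketch of that kind of headline classifier, scored with ROC-AUC as mentioned above. The headlines, labels, and model choice here are all invented for illustration; the thread does not describe the actual features or data.

```python
# Hedged sketch: binary "will this headline clear 10 votes?" classifier.
# Headlines and labels are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

headlines = [
    "Show HN: I built a toy database in a weekend",
    "Ask HN: What do you regret not learning?",
    "New CSS property lands in one browser",
    "Why I left big tech after ten years",
    "Understanding B-trees from first principles",
    "My startup failed and here is what I learned",
]
over_10_votes = [1, 0, 0, 1, 1, 1]  # invented labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, over_10_votes)

# In-sample AUC, just to show the metric call; a real evaluation
# would hold out a test set of thousands of headlines.
auc = roc_auc_score(over_10_votes, model.predict_proba(headlines)[:, 1])
prob = model.predict_proba(["Show HN: a tiny C compiler"])[0, 1]
```

In practice the 10,000+ headline corpus the comment describes is what makes the AUC numbers meaningful; on six examples the metric is noise.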

I had it ingest the HN comments firehose and found the volume overwhelming; my YOShInOn RSS reader now ingests the "best comments" from

https://hnrss.github.io/

together with 110 other feeds, and I actually like the comments it picks out a lot. Now that the system is adding about 3,000 items per day it might be able to handle a big feed like the comments firehose, since those comments would be diluted by so many quality articles. For a problem like that you might want a two-score system: (i) is it relevant? (something I like) and (ii) is it popular? (like Google's PageRank)

I think you could make a model that compares comments in the best-comments feed with other comments. I have tried formulating the problems above as regression problems, where I try to predict the actual score, and it does not work well because of the uncertainty problem. Formulated as a classification problem for a score over a threshold, though, it is easy to make a well-calibrated model that tells you "this article has a 20% chance of frontpaging," which is about the best anyone can do.
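The classification framing above can be sketched like this: label each item by whether its score clears a threshold, then fit a calibrated classifier and read off a probability. Everything here is synthetic (features, threshold, and model are stand-ins, not the system described).

```python
# Hedged sketch of the classification framing: instead of regressing
# on the raw score, label items by score > threshold and read off a
# calibrated probability. Data below is synthetic.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                      # stand-in features
latent = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0])  # fake "score" signal
y = (latent + rng.normal(size=500) > 1.0).astype(int)  # over threshold?

clf = CalibratedClassifierCV(LogisticRegression(), cv=5)
clf.fit(X, y)

# A well-calibrated output of ~0.2 reads as "20% chance of frontpaging".
p = clf.predict_proba(X[:1])[0, 1]
```

The point of the calibration step is that the probabilities become trustworthy as frequencies, which matters more here than raw accuracy given how noisy score outcomes are.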


wow, that's all sorts of interesting.

You could also look at a commenter's karma at the time of posting a comment and a while after, and guess which comment got them the points.


It is hard to tell, because it might take a while for a post to get comments, and in the meantime the person writes more comments.

It might be more practical to add up the scores of a user's submissions and subtract that from their total karma to get a comment-karma score, then divide by the number of comments to get an average. That at least gives you a per-user rank, which would be worth something.
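The arithmetic described above is simple enough to write down. All the numbers here are invented, and the function name is hypothetical; HN exposes per-story scores and total user karma, but not per-comment scores.

```python
# Sketch of the per-user average comment karma estimate described above.
# Inputs are invented; this is illustration, not an HN API wrapper.
def avg_comment_karma(total_karma: int, submission_karma: int, n_comments: int) -> float:
    """(total karma - karma earned from submissions) / number of comments."""
    if n_comments == 0:
        return 0.0
    return (total_karma - submission_karma) / n_comments

estimate = avg_comment_karma(total_karma=5000, submission_karma=1200, n_comments=950)
print(estimate)  # → 4.0
```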


No, I haven’t.


No, but obviously you have. Believe me, the AI finds your contributions either obvious or wrong.


I find AI answers wrong a lot of the time.



