Hacker News

There are personalized social media feeds, so why not have personalized LLMs that align with how each person wants their LLM to act?


In a hypothetical world where people have, train, and control their own LLMs according to their own needs, it might be nice. But since the most common and advanced LLMs are controlled by a small number of people, I fear they won't be willing to give that much power to individuals: it would endanger their ability to manipulate those LLMs to push their own agendas and increase their own profits.


Because that would only reinforce the already problematic bubbles in which people only see what feeds their opinions, often with disastrous results (cf. the various epidemics and deaths due to anti-vaxxers, or even worse, outright genocides).


People have done this on their own behalf since the dawn of time, so it's not really clear to me why it's so often framed as an AI issue.


The core underlying issue isn't due to LLMs but they greatly exacerbate it. So does the current form of social media.

People used to live in bubbles, sure, but when that bubble was the entire local community, required human interaction, and radio had yet to be invented, the implications were vastly different.

I'm optimistic that carefully crafted algorithms could push things back in the other direction, but that isn't how you make money, so seemingly no one is making a serious effort.


My point is that the exact same thing was said of every form of mass communication. Every new form was said to be a herald of the end times, and yet here we are, in many ways stronger than ever.

I'm not arguing one way or another, I'm just pointing out a potential fatigue. It's difficult to see how this technology is relatively any more transformative than any of the others.


Sure, every time mass communication got "stronger" various social issues were exacerbated. So naturally people complained about that. The same thing is happening here. A new technology is exacerbating some preexisting problems and people are complaining as a result.

> Every new form was said to be a herald of the end times,

The two world wars and surrounding economic upheaval arguably came close to that in many ways. "We somehow managed to survive previous technological advances" is hardly a convincing argument that we need not worry about the implications of a new technology.

> and yet here we are, in many ways stronger than ever.

The implication doesn't follow. You haven't explained how you would differentiate a system that had plenty of safety margin left from one that was on the brink of collapse. Without that distinction the statement is no more than hand waving.

> I'm not arguing one way or another

You certainly seem to be taking a stance of "nothing to see here, this is business as usual, these recent developments pose no cause for concern".

> It's difficult to see how this technology is relatively any more transformative than any of the others.

It's difficult for you to see how computers being able to speak natural language on par with an undergrad is more transformative than long-distance communication? You can't be serious. Prior to this you could only converse with another human.


> The two world wars and surrounding economic upheaval arguably came close to that in many ways. "We somehow managed to survive previous technological advances" is hardly a convincing argument that we need not worry about the implications of a new technology.

I don't disagree with your rebuttal, but if the idea that "we survived, so we don't have to worry" is invalid, then the idea that "if we don't do something, we won't survive" is equally invalid. I don't pretend to have the answer either way.

> The implication doesn't follow. You haven't explained how you would differentiate a system that had plenty of safety margin left from one that was on the brink of collapse. Without that distinction the statement is no more than hand waving.

My point is that those experiencing the revolution in real time had no ability to estimate the impact or to recognize that any margins existed, and we may very well be in that position too.

> You certainly seem to be taking a stance of "nothing to see here, this is business as usual, these recent developments pose no cause for concern".

Respectfully, I am absolutely not taking any such position. I don't appreciate the straw man, and won't bother to address it.

> It's difficult for you to see how computers being able to speak natural language on par with an undergrad is more transformative than long-distance communication? You can't be serious. Prior to this you could only converse with another human.

The first principles are the same: they're all "radical" technologies which were, as of a decade or two prior, utterly unfathomable. I could generalize your last statement to "Prior to <revolutionary technology> you could only <do a fraction of what's possible with the technology>."

My point is that making value judgements about which is _more_ impactful is difficult from the ground floor. It's too early to tell; at the time it's occurring, each innovation may as well be magic, and magic is impossible to understand, and scary.

----

We've entirely diverged from the original point I was trying to make, which was that people have actively put themselves in bubbles that confirm their own biases since the dawn of time. I'm not looking to change your mind on AI, so I can call this exchange complete from my end. Thanks for sharing your thoughts.


Quantity has a quality all its own.


Cost. It currently takes an enormous amount of compute to train or retrain an LLM.
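To give a sense of scale, here is a hedged back-of-envelope sketch using the common rule of thumb that dense-transformer training costs roughly 6 × parameters × tokens FLOPs. The parameter count, token count, and per-accelerator throughput below are illustrative assumptions, not figures from any particular model:

```python
# Rough training-cost estimate via the ~6 * N * D FLOPs rule of thumb.
# All concrete numbers are assumptions chosen for illustration.
params = 7e9        # assume a 7B-parameter model
tokens = 1e12       # assume 1 trillion training tokens
flops = 6 * params * tokens          # ~4.2e22 FLOPs total

gpu_flops = 300e12  # assume ~300 TFLOP/s sustained per accelerator
gpu_seconds = flops / gpu_flops
gpu_hours = gpu_seconds / 3600

print(round(gpu_hours))  # -> 38889 GPU-hours, i.e. ~4.4 GPU-years
```

Even under these generous throughput assumptions, per-user retraining is clearly out of reach, which is why personalization tends to happen through cheaper mechanisms layered on a shared model.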


You would still have a single model, but, like internet search, it would take in both a user vector and a query (the prompt).
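The idea above can be sketched in a few lines: one shared set of weights serves every user, and personalization comes only from a per-user preference vector concatenated onto the query representation, much like personalized ranking in web search. Everything here (dimensions, the tanh layer, the random "embeddings") is a hypothetical toy, not any real system's API:

```python
# Toy sketch of "single shared model + user vector + query" personalization.
# All names and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
USER_DIM, QUERY_DIM, HIDDEN = 8, 16, 32

# One shared weight matrix serves every user; no per-user retraining.
W = rng.normal(size=(USER_DIM + QUERY_DIM, HIDDEN))

def personalized_features(user_vec, query_vec):
    """Condition the shared model on a stored user-preference vector."""
    x = np.concatenate([user_vec, query_vec])
    return np.tanh(x @ W)

user_a = rng.normal(size=USER_DIM)   # stand-in for user A's learned preferences
user_b = rng.normal(size=USER_DIM)   # stand-in for user B's learned preferences
query = rng.normal(size=QUERY_DIM)   # stand-in embedding of one shared prompt

out_a = personalized_features(user_a, query)
out_b = personalized_features(user_b, query)
print(np.allclose(out_a, out_b))  # False: same prompt, per-user outputs differ
```

The design point is that only the small user vector varies per person, so the expensive shared weights are trained once, sidestepping the retraining cost raised above.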



