
Oh dear lord ... the subheading states: "Storm - Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models".

Good luck with this storm, wikis the world over. Just a thought, but maybe someone should ask an org like the Internet Archive to snapshot Wikipedia ASAP and label it Pre-Storm and After-Storm.




LLM mediocrity is just a reflection of human mediocrity, and my bet is that the average LLM will get better much faster than the average human doing the same work.


Agree with you, but on mediocrity: Mistral barely passes as usable, GPT-4 is barely better than Googling, and nothing else I've tried is even ready for production. So there's some element of the model's design, weights/embeddings, and training data that matters a lot.

Only fine-tuned models are producing impressive work, because when we say something is impressive, it by definition means unlike the status quo: the model must be tuned toward some bias or other, whether aesthetic or otherwise, in order to stand out from the rest. And generic models like GPT or Stable Diffusion will always be generic; they won't have a bias toward certain truths. They'll be mostly unbiased, which is what we want for general research or internet search.

So it's interesting: in order to get incredible quality of work out of AI, you have to make it specific, but in order to do that, you have to train it on the work of humans. I think for this reason AI will always ultimately be behind humans, though it will of course displace a lot of the work we do, which is significant.


Humans are limited in the volume of garbage they can produce.


There is this sentiment of AI-induced deterioration and pollution.

What if that is not the case? What if the quality of this type of content actually increases?


On the one hand, a tool is as good or bad as the person wielding it. Smart folks with the right intentions will certainly be able to use this stuff to increase the rate and quality of their output (because they're smart, so they'll verify rather than trust. Hopefully.)

On the other, moderation is an unsolved problem. The general mess of the internet is probably not quite ready to be handed a footgun of this caliber.

As with many things tech, some of the outcome falls to us, the techies. We can build systems to help steer this.


> On the one hand, a tool is as good or bad as the person wielding it.

I think the real reason is one-line dogmas like this.


I'm not sure I follow you - reason for what?

To be clear - I'm with you that these systems can absolutely be a force for vast good (at least, I think that was what you were getting at unless there was a missing '/s'). I use them daily to pretty astounding effect.

I'll admit to being a little put off by being labeled dogmatic - it's not something I consider myself to be.


It was half a sentence, for that I apologize, and I don't remember entirely what I meant.

However, I do see a lot of one-sentence "truisms" being thrown around, like "garbage in, garbage out" and the like.

These are not correct. We can just look at the current state of the art in LLMs, which have vast amounts of garbage going in - it seems like the value is in the vastness of the data over its quality.

> On the one hand, a tool is as good or bad as the person wielding it.

I see this as being a dogma: smart people make good LLMs, dumb people do not. But this is an open question. It seems like the biggest wallet will be the winner of the LLM game.

Please correct me if I misunderstood something.


Ah, I see. What I meant by that was, "A tool is as good or evil as the person wielding it".

There are most definitely good and bad tools, in terms of being more or less effective. Machine learning models are for sure outclassing a whole swath of tools in a number of domains and will more than likely continue to take over more purposes over time.

Whether this is a good thing for society is what I thought we were questioning - which is what I meant by steering. We can build tooling to do things like establish veracity, enable interrogation of models, and provide reasoning about internals (which we should do).

Open sourcing as much of this effort as possible will further lead to Good Things (because why are people working on these for free if they're not creating something of actual use), whilst leaving all ML development to large corporations will inevitably ensure that the only thing you can trust an ML model to do would be to spy on you and try to get you to buy stuff, because money.

God the grammar in that last sentence was terrible. But I think you get the point.


It will for a while, I imagine. But the long-term is a concern. Where will new information come from, exactly?


Why not AI?

And even if we accept the premise (as flawed as it might be) that AI is not able to create original knowledge, most of what's online is dissemination and does not represent new information, just old information rewritten to be understandable by a certain segment.

Something LLMs excel at.


> AI is not able to create original knowledge

Current LLMs do hallucinate, though. They're just not a very trustworthy source of facts.


Just like my first teachers said I should absolutely not use Wikipedia.

LLMs were popularized less than two years ago.

I think it is safe to assume that they will become as trustworthy as you consider Wikipedia today, and probably even more so, as you can embed reasoning techniques into the LLMs to correct misunderstandings.

Wikipedia cannot self-correct.


Wikipedia absolutely self-corrects, that's the whole point!


It does not. Its authors correct it.

Unless you see Wikipedia as the organisation and not the encyclopedia?

In that case: sigh, then everything self-corrects.


It is incoherent to discuss Wikipedia as some text divorced from the community and process that made it, so I'm done here.


There's an important difference between Wikipedia and the LLMs that are actually useful today.

Wikipedia is open, like completely open.

GPT is not.

Unless we manage to crack the distributed training / incremental improvement barriers, LLMs are a lot more likely to follow the Google path (that is, start awesome and gradually enshittify as capitalist concerns pollute the decision matrix) than they are the Wikipedia path (gradual improvement as more eyes and minds work to improve them).


This is super interesting!

It also cuts right into the question of what constitutes model openness.

Most people agree that just releasing weights is not enough.

But I don't think it will ever be feasible to fully reproduce model training, especially when factoring in the branching and merging of models.

For me this is an open and super interesting question.


Here's what I envision (note: impossible with the current state of the art):

A model that can be incrementally trained (this is the bit we're missing), hosted by a nonprofit, belonging to "we the people" (like Wikipedia).

The training process could be done a little like Wikipedia talk pages are now: datasets are proposed and discussed out in the open and, once generally approved, trained into the model.

Because training currently involves backpropagation, this isn't possible. Hinton was working on a structure called "forward-forward" that would have overcome this (if it worked) before he decided humanity couldn't be trusted [1]. It is my hope that someone smarter than me picks up this thread of research - although in the spirit of personal responsibility I've started picking up my old math books to try and get to a point where I grok the implementation enough to experiment myself (I'm not super confident I'm gonna get there but you can't win if you don't play, right?)

It's hard to tell when (if?) we're ever going to have this - if it does happen, it'll be because a lot of people do a lot of really smart unpaid work (after seeing OpenAI do what it did, I don't have a ton of faith that even non-profit orgs have the will or the structure to pull it off. Please prove me wrong.)

[1] https://arxiv.org/abs/2212.13345
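For anyone wondering what training without a backward pass even looks like, here's my rough NumPy sketch of the forward-forward idea as I currently understand it from the paper: each layer is trained purely locally to give "positive" data high goodness (mean squared activation) and "negative" data low goodness, and only normalized activations are handed to the next layer. The loss, layer sizes, and toy data below are my own simplification, not the paper's reference setup, so treat it as a sketch of the idea rather than the real thing.

    import numpy as np

    rng = np.random.default_rng(0)

    def normalize(x, eps=1e-8):
        # Unit-length samples, so the next layer can't read goodness
        # straight off the previous layer's activation magnitude.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

    class FFLayer:
        def __init__(self, d_in, d_out, lr=0.03, threshold=2.0):
            self.W = rng.normal(0.0, 1.0 / np.sqrt(d_in), (d_in, d_out))
            self.b = np.zeros(d_out)
            self.lr, self.threshold = lr, threshold

        def forward(self, x):
            return np.maximum(0.0, x @ self.W + self.b)  # ReLU

        def train_step(self, x_pos, x_neg):
            # Push goodness above the threshold for positive data and below
            # it for negative data, using only quantities local to this layer.
            for x, sign in ((x_pos, 1.0), (x_neg, -1.0)):
                h = self.forward(x)
                goodness = (h ** 2).mean(axis=1)
                # Gradient of softplus(-sign * (goodness - threshold)) w.r.t. h
                p = 1.0 / (1.0 + np.exp(sign * (goodness - self.threshold)))
                grad_h = (-sign * p)[:, None] * (2.0 * h / h.shape[1])
                grad_h = grad_h * (h > 0)                  # ReLU derivative
                self.W -= self.lr * x.T @ grad_h / len(x)  # local update only
                self.b -= self.lr * grad_h.mean(axis=0)
            # Hand normalized activations forward; nothing ever flows back.
            return normalize(self.forward(x_pos)), normalize(self.forward(x_neg))

    # Toy stand-in data: "positive" samples have structure, "negative" are noise.
    x_pos = normalize(rng.normal(1.0, 0.5, (256, 20)))
    x_neg = normalize(rng.normal(0.0, 1.0, (256, 20)))

    layers = [FFLayer(20, 64), FFLayer(64, 64)]
    for epoch in range(50):
        p, n = x_pos, x_neg
        for layer in layers:
            p, n = layer.train_step(p, n)  # each layer learns greedily

    def total_goodness(x):
        # Sum of per-layer goodness as a sample passes through the stack.
        g = 0.0
        for layer in layers:
            h = layer.forward(x)
            g += (h ** 2).mean()
            x = normalize(h)
        return g

    print("goodness, positive data:", round(total_goodness(x_pos), 3))
    print("goodness, negative data:", round(total_goodness(x_neg), 3))

The point, for the wiki-style model above, is that a layer-local rule like this doesn't need one end-to-end backward pass over the whole network, which is the part that makes today's training so hard to do incrementally and collaboratively.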


How could it? LLMs hallucinate false information. Even if hallucinations are reduced, the false information they've already generated is now part of the body of text they will be trained on.


I mean, putting a bullet to someone's head can extirpate a brain tumor they hadn't been alerted to before, while leaving a grateful person owing you kudos. What if?


You can always find some radical reductionist argument that is completely out of touch with reality.

Congrats on that!


The concern is not just a vaguely cynical hand-wringing about how bad AI is. Feeding AIs their own output as training material is a bad thing for mathematical reasons, and feeding AIs the output of other very similar AIs is close enough for it to also be bad. The reasons are subtle and hard to describe in plain English, and I'm not enough of an expert to even try, so pardon if I don't. But given that it is hard to determine if output is from an AI, AI really does face a crisis of having a hard time coming across good training material in the future.


> Feeding AIs their own output as training material is a bad thing for mathematical reasons

Most model-collapse studies explore degenerate cases to probe the limits of recursively training the same model on its own output. No wonder you get terrible results if you recursively recompress a JPEG 100 times! In the real world it's nowhere near that bad, because models are never trained on their output alone and are always guaranteed to receive a certain amount of external data, starting with manual dataset curation (yes, that's fresh data in itself too).

Meanwhile, synthetic datasets are entirely common. I suspect this is a non-issue that is way overblown by people misinterpreting these studies.
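A toy way to see both the degenerate case and the caveat, with a Gaussian standing in for the model (obviously nothing like real LLM training, and all the numbers below are made up for illustration): fit the distribution to samples, resample from the fit, refit, and repeat. Fed only its own output, the fitted spread drifts and shrinks toward collapse; mixing in even a modest slice of fresh real data each generation keeps it anchored.

    import numpy as np

    rng = np.random.default_rng(0)
    REAL_MEAN, REAL_STD, N_SYNTH = 0.0, 1.0, 200

    def run(generations, n_fresh):
        mean, std = REAL_MEAN, REAL_STD
        for _ in range(generations):
            synthetic = rng.normal(mean, std, N_SYNTH)        # "model output"
            fresh = rng.normal(REAL_MEAN, REAL_STD, n_fresh)  # new human data
            data = np.concatenate([synthetic, fresh])
            mean, std = data.mean(), data.std()               # "retrain"
        return std

    print(f"std after 500 generations, no fresh data:     {run(500, 0):.3f}")
    print(f"std after 500 generations, 20% fresh per step: {run(500, 40):.3f}")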


I suspect it's overblown today. Hopefully it'll be overblown indefinitely.

However, if AIs become as successful as Nvidia's stock price implies, it could indeed become difficult to find text that is guaranteed not to be AI-generated. It is conceivable that in 20 years it will be very difficult to assemble a training set at any scale that isn't already 90% touched by AIs.

Of course, it's conceivable that in 20 years we'll have AIs that don't need the equivalent of millennia of training to come up to their full potential. The problem is much more tractable if one merely needs to produce megabytes of training data to obtain a decent understanding of English rather than many gigabytes.


Can you show me a mathematical reason that cannot philosophically be applied to people as well? People are only ever fed other people's output.


I'd go with "no", because people just consuming the output of other people is a big ongoing problem. Input from the universe needs to be added in order to maintain alignment with the universe, for whichever "universe" you are considering. Without frequent reference to reality, people feeding too much on people will inevitably depart from reality.

In another context, you may know this as an "echo chamber". Not quite exactly the same concept, but very, very similar.

I do like to remind people that the AI of today and LLMs are not the whole of reality. Perhaps someday there will be AIs that are also capable of directly consulting the universe, through some sort of body they can use. But the current LLMs, which are trained on some sort of human output, need to exclude AI-generated input or they too will converge on some sort of degenerate attractor.


Yep, then we are back to "vaguely cynical hand-wringing about how bad AI is."

Currently we have mostly LLMs in the mix, but there is no reason the AI mix won't come to include embodied agents that also publish stuff on the internet (think search-and-rescue bots that automatically write a report).

Now AI is connected to reality without people in the mix.


When trying to close a rhetorical trap on someone, it is useful to first be sure they stepped in it.



