
Imagine you have a calculator that outputs a result that is off by one percent. That's AI right now.

If you use the results of each calculation in additional calculations, the result will skew further and further from reality with each error. That's AI training on itself.
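A toy sketch of the compounding, just the arithmetic of a 1% error applied over and over:

    # Toy sketch: a quantity recomputed from the previous estimate,
    # with each step overshooting the truth by 1%.
    true_value = 100.0
    estimate = 100.0
    for step in range(1, 51):
        estimate *= 1.01  # each pass adds a 1% error
        if step % 10 == 0:
            drift = (estimate - true_value) / true_value * 100
            print(f"step {step:2d}: {drift:5.1f}% off")
    # After 50 rounds the estimate is roughly 64% above the true value.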



In many areas of communication and information, this exact problem is dealt with through error-correcting codes. Do AI models have built-in ECC?
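By ECC I mean something like the textbook repetition code: send each bit several times and take a majority vote on the receiving end. A minimal sketch:

    # Minimal sketch of a 3x repetition code: every bit is sent three
    # times; the decoder majority-votes, so one flipped copy per bit
    # gets corrected.
    def encode(bits):
        return [b for b in bits for _ in range(3)]

    def decode(received):
        return [1 if sum(received[i:i + 3]) >= 2 else 0
                for i in range(0, len(received), 3)]

    msg = [1, 0, 1, 1]
    sent = encode(msg)           # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
    sent[4] ^= 1                 # one copy is corrupted in transit
    assert decode(sent) == msg   # the vote recovers the original message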


No. LLMs with soft attention use compression, and actually have no mechanism for ground truth.

They are simply pattern finding and matching.

More correctly, they are uniform constant-depth threshold circuits.

Basically parallel operations on a polynomial number of AND, OR, NOT, and majority gates.

The majority gates can compute the parity function, but they cannot self-correct the way ECC does.

The thing with majority gates is that they can show some input is in the language:

Think of it as the truthiness of 1,1,1,0,0 coming out true, while 1,1,0,0,0 fails; but that failure doesn't prove the negation, it isn't a truthy false.

With soft attention and majority gates they can do parity detection but not correction.
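Concretely, the detection-vs-correction gap with a plain parity bit (a toy sketch; real codes like Hamming add enough redundancy to also localize the flip):

    # Toy sketch: one parity bit detects an odd number of flips but
    # carries no information about which position flipped, so it can
    # detect the error yet cannot correct it.
    def parity(bits):
        return sum(bits) % 2

    word = [1, 1, 0, 1, 0]
    check = parity(word)     # stored/sent alongside the data

    word[2] ^= 1             # a single bit flips
    if parity(word) != check:
        print("error detected, but the flipped position is unknown")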

Hopefully someone can correct this if I am wrong.

Specifically, I think the upper bound of deciding whether X = x is a cause of some outcome φ in causal structures is NP-complete in binary models (where all variables can take on only two values) and Σ_2^P-complete in general models.

As TC^0 is smaller than NP, and probably smaller than P, any such methods would be opportunistic at best.

Preserving the long tail of the distribution is a far more pragmatic direction, since an ECC-type ability is an unreasonable expectation.

Thinking of error-correcting codes as serial Turing-machine computations and of transformers as primarily parallel circuits should help explain why the two are very different.


The trouble is that "truth" and math are different.

You can verify a mathematical result. You can run the calculations a second time on a separate calculator (in fact some computers do this) to verify the result, or use a built-in check like ECC.
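For example, a sketch of the recompute-and-compare idea (the software analogue of a second calculator or a lockstep processor pair):

    # Sketch of verification by redundant computation: evaluate the
    # same quantity two independent ways and flag any disagreement.
    def sum_of_squares_loop(n):
        return sum(i * i for i in range(1, n + 1))

    def sum_of_squares_formula(n):
        return n * (n + 1) * (2 * n + 1) // 6

    n = 10_000
    assert sum_of_squares_loop(n) == sum_of_squares_formula(n), \
        "the two computations disagree -- do not trust the result"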

There's no such mathematical test for truth for an AI to run.


Error correction doesn't ensure truth. At least in communication, it ensures that the final version matches the original version.

For AI, you wouldn't be doing EC to make sure the AI was telling the truth; you would be doing EC to ensure that the AI hasn't drifted due to the 1% error rate.
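A toy illustration: a checksum happily validates a false statement, as long as it arrives exactly as it was sent.

    # Toy sketch: a CRC confirms the message arrived as sent;
    # it says nothing about whether the message is true.
    import zlib

    message = b"2 + 2 = 5"                 # false, but transmitted faithfully
    sent = (message, zlib.crc32(message))

    received, checksum = sent
    assert zlib.crc32(received) == checksum   # integrity check passes
    # The claim is still wrong; EC only guards against corruption/drift.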

Of course I have no idea how to actually do it - if it isn’t being done now, it is probably hard or impossible.


There's no fully general test for truth for an AI to run.

In some specific domains such tests exist — and the result is, generally, computers wildly outperforming humans. But I get the impression from using them that current LLMs didn't take full advantage of this during training.



