
Hear me out. I was thinking jokingly to myself, "for how bad these models are at recognizing five-legged dogs, they sure are great at generating them!"

But then it hit me: could this actually be the cause? Diffusion models work by iteratively improving a noisy image, so if the model can't recognize that something is wrong with the image, it can't fix it.
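
For the "can't fix what it can't recognize" intuition, here is a minimal sketch of a DDPM-style sampling loop, with a placeholder standing in for the trained noise-prediction network; the placeholder denoiser, shapes, and schedule are illustrative assumptions, not any particular model's code:

    # Minimal DDPM-style sampling loop; denoiser() is a placeholder for a trained
    # noise-prediction network eps_theta(x, t). Each step only removes whatever the
    # network "recognizes" as noise -- an artifact it can't recognize never gets fixed.
    import torch

    def denoiser(x, t):
        return torch.zeros_like(x)  # placeholder: a real model predicts the noise in x

    def sample(shape=(1, 3, 64, 64), steps=50):
        x = torch.randn(shape)                      # start from pure noise
        betas = torch.linspace(1e-4, 0.02, steps)   # noise schedule
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)
        for t in reversed(range(steps)):
            eps = denoiser(x, t)                    # the model's guess at what is "wrong"
            x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            if t > 0:
                x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
        return x

    img = sample()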


I agree. If it can't detect the abnormality, how can it keep it out of its output?


If using an algorithm to promote some information and suppress other information isn't censorship to you, then honestly your definition of censorship is narrow to the point of being useless.


Not promoting something is different than suppressing it.

Censorship is active suppression.

If Google was using AI to prevent independent people from accessing independent websites that would be censorship.

Censorship is something that is done, not simply the absence of something being done.


They literally change the algo to exclude smaller sites. That's active suppression. Promoting would be putting them on top of the "neutral" search results like they do for ads.


By that definition, is OpenAI also censoring small websites? What about the Washington Post, is it censoring small websites? Because I sure can't see them in either of those places.


One of my favorite insights is that the existence of undecidable problems is the same thing as the uncountability of real numbers.

Too bad the author didn't get into it.


I had exactly the same reaction, which prompted me to write this comment: https://news.ycombinator.com/item?id=44122045


...which is the same thing as Rice's theorem, and many other mind-bending results. It's all diagonalization under the hood =)
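
For the curious, here's a tiny sketch of the shared trick, made concrete for sequences rather than programs (the type names and the toy enumeration are just illustrative): given any claimed enumeration of infinite 0/1 sequences, flipping the diagonal produces a sequence the enumeration misses, which is the same move behind the halting problem and Rice's theorem.

    # Cantor's diagonal, concretely: represent an infinite 0/1 sequence as a
    # function int -> int, and a claimed enumeration of all such sequences as a
    # function int -> sequence. Flipping the n-th digit of the n-th sequence
    # yields a sequence that differs from every listed one.
    from typing import Callable

    Sequence = Callable[[int], int]            # infinite 0/1 sequence, by index
    Enumeration = Callable[[int], Sequence]    # purported list of all sequences

    def diagonal(enum: Enumeration) -> Sequence:
        return lambda n: 1 - enum(n)(n)        # disagree with row n at column n

    # Toy enumeration: sequence k is constantly k mod 2. The diagonal escapes it.
    enum: Enumeration = lambda k: (lambda n: k % 2)
    d = diagonal(enum)
    assert all(d(n) != enum(n)(n) for n in range(100))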


All DVCSs are ultimately the same thing; they are all just providing a human interface to the same data structure. I am not saying this dismissively: human interfacing is an extremely difficult problem where a lot of breakthroughs can be made (and some of these breakthroughs will involve interesting algorithms, see pijul).

So, to answer your question: jj lets you do the same set of things you can do with git, but with (arguably) much better UI/UX.


Well, in this context it generally means the age at which someone starts to receive the state pension.


And in many countries it defines the age you can withdraw money from your own retirement savings without paying full income tax.


Yes. But the wider meaning is that you are in control.


But you're not, because the country decides how much it takes out of your wage for the pension and at what age you can access it.


Maybe we ask the AI to come up with an exploit, run it, and see if it works? Then you can do RL on this.
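
A hedged sketch of what that reward signal could look like: run the model-proposed exploit and score it by whether it demonstrably worked. Everything here (the EXPLOIT_OK marker, using a bare subprocess as the "sandbox", the function name) is a hypothetical placeholder, not a real harness.

    # Hypothetical reward function for RL on exploit generation: execute the
    # candidate and return 1.0 only if it demonstrably succeeds. A real setup
    # would use proper sandboxing, not a bare subprocess.
    import subprocess

    def exploit_reward(exploit_code: str, timeout_s: int = 10) -> float:
        try:
            result = subprocess.run(
                ["python3", "-c", exploit_code],   # stand-in for a sandboxed run
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return 0.0
        # Assumed convention: the harness prints EXPLOIT_OK when the exploit lands.
        return 1.0 if "EXPLOIT_OK" in result.stdout else 0.0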


The Switch has (certified) OpenGL and Vulkan. The Mac has neither.


> It can infer why something may have been written in a particular way, but it (currently) does not have access to the actual/point-in-time reasoning the way an actual engineer/maintainer would.

Is that really true? A human programmer has hidden state, i.e. what is going on in their head cannot be fully recovered just by looking at the output. And that's why "Software evolves more rapidly under the maintenance of its original creator, and in proportion to how recently it was written", as the author astutely observes.

But transformer-based LLMs do not have this hidden state. If you retain the text log of your conversation with an LLM, you can reproduce its inner-layer outputs exactly. In that regard, an LLM is actually much better than a human.
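
A minimal sketch of that point, assuming the HuggingFace transformers library and any small causal LM checkpoint (gpt2 here purely as an example): the per-layer activations are a pure function of the token sequence, so replaying the transcript reproduces them, up to the usual floating-point determinism of the backend.

    # Replaying the same transcript through the same frozen model reproduces the
    # per-layer hidden states -- nothing persists between calls.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    transcript = "User: hello\nAssistant: hi there"
    inputs = tok(transcript, return_tensors="pt")

    with torch.no_grad():
        first = model(**inputs, output_hidden_states=True).hidden_states
        second = model(**inputs, output_hidden_states=True).hidden_states

    assert all(torch.equal(a, b) for a, b in zip(first, second))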


The internal state and accompanying transcripts of an LLM aren't really comparable to the internal state of a human developer.


Work will expand to fill the time available.

(I know this is not the commonly accepted meaning of Parkinson's law.)


The mispronunciation of 行 and 行 (the same character, which has two different readings) in the Chinese sample is killing me too XD

