Hear me out. I was jokingly thinking to myself, "for how bad these models are at recognizing five-legged dogs, they sure are great at generating them!"
But then it hit me: could this actually be why? Diffusion models work by iteratively refining a noisy image, so if the model can't recognize that something is wrong with the image, it can't fix it.
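Very roughly, I picture the sampling loop like this (a heavily simplified sketch, not the actual DDPM update rule; `toy_denoiser` is just a stand-in for a trained noise-prediction network):

```python
import torch

# Stand-in for a trained noise-prediction network (normally a U-Net).
def toy_denoiser(x, t):
    return 0.1 * x  # pretend a small fraction of the image is "wrong"

def sample(denoiser, steps=50, shape=(1, 3, 64, 64)):
    x = torch.randn(shape)            # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, t)          # the model's guess at what is wrong with x
        x = x - eps                   # correct only what the model flagged
    return x

img = sample(toy_denoiser)
```

Whatever the network fails to flag as noise (say, a fifth leg it thinks looks fine) simply never gets corrected in later steps.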
If using an algorithm to promote some information and suppress other information isn't censorship to you, then honestly your definition of censorship is narrow to the point of being useless.
They literally change the algo to exclude smaller sites. That's active suppression. Promoting would be putting them on top of the "neutral" search results like they do for ads.
By that definition, is OpenAI also censoring small websites? What about the Washington Post, is it censoring them? Because I sure can't see small sites in either of those places.
All DVCSs are ultimately the same thing: they are all just providing a human interface to the same underlying data structure. I am not saying this dismissively - human interfacing is an extremely difficult problem where a lot of breakthroughs can still be made (and some of those breakthroughs will involve interesting algorithms; see pijul).
So to answer your question: jj lets you do the same set of things you can do with git, but with (arguably) much better UI/UX.
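To make that concrete, the shared structure is basically an append-only DAG of content-addressed commits (a rough sketch with made-up names, not any tool's actual internals):

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Commit:
    tree: str        # hash of the snapshot of the working tree
    parents: tuple   # zero, one, or more parent commit ids
    message: str

    @property
    def commit_id(self) -> str:
        payload = f"{self.tree}|{','.join(self.parents)}|{self.message}"
        return hashlib.sha1(payload.encode()).hexdigest()

root = Commit(tree="a1b2", parents=(), message="initial commit")
child = Commit(tree="c3d4", parents=(root.commit_id,), message="add feature")
```

git and jj are different ways of creating, naming, and rewriting nodes in a DAG like this; the differences are in the interface, not the data.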
> It can infer why something may have been written in a particular way, but it (currently) does not have access to the actual/point-in-time reasoning the way an actual engineer/maintainer would.
Is that really true? A human programmer has hidden state, i.e. what is going on in their head cannot be fully recovered by just looking at the output. And that's why "Software evolves more rapidly under the maintenance of its original creator, and in proportion to how recently it was written", as the author astutely observes.
But transformer-based LLMs do not have this hidden state. If you retain the text log of your conversation with an LLM, you can reproduce its internal layer activations exactly. In that regard, an LLM is actually much better than humans.
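As a quick illustration of that determinism (a minimal sketch, assuming the HuggingFace `transformers` library; "gpt2" is just an arbitrary small model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# The retained "conversation log" is just text; tokenize it and run it twice.
ids = tok("User: hello\nAssistant: hi there", return_tensors="pt").input_ids

with torch.no_grad():
    run1 = model(ids, output_hidden_states=True).hidden_states
    run2 = model(ids, output_hidden_states=True).hidden_states

# Every layer's activations match: the model carries no state beyond the text itself.
print(all(torch.allclose(a, b) for a, b in zip(run1, run2)))
```

(Modulo floating-point nondeterminism on some hardware, but there is no hidden memory carried over between turns.)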