Hacker News: fngjdflmdflg's comments

I don't think that is correct. Dang usually links directly to the guidelines and sometimes even quotes the exact guideline being infringed upon. '"dang" "newsguidelines.html"' returns 20,909 results on Algolia.[0] (Granted, not all of these are by Dang himself; I don't think you can search by user on Algolia?) Some of the finer points relating to specific guidelines may not be directly written there, eg. what exactly is considered link bait, but I don't think there are any full-blown rules not in the guidelines. I think the reason LLMs haven't been added is that it's a new problem, and making a new rule too quickly that may have to change later will just cause more confusion.

[0] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


No, there are several things like this that aren't explicitly in the guidelines and aren't likely ever to be. We'd get into a very long meta thread talking about what kinds of things land in the guidelines vs. in Dan's "jurisprudence" threads; some other time, maybe.

I think it’s okay to have unwritten rules that are inferred. I am not trying to make the perfect the enemy of the good. That said, is HN best served by this status quo? Folks are genuinely arguing against the reasoning for such a rule in the first place, arguing that a rule against LLM-generated content on HN is unenforceable and so is pointless; others are likely unaware any such rule even exists in the first place; you are countering that the rule is fine, but not so fine that we add it to the guidelines.

I don’t know if this situation benefits from all of these moving parts; perhaps the finer points ought to be nailed down, considering the explicitness of the rule itself in practice.


They're not unwritten.

Let me pose this hypothetical:

A new user signs up and immediately starts using AI to write all of their comments because they read the guidelines, then had their AI read the guidelines, and both were convinced it was okay to continue doing so, and so they did. They told a second user this, and then a third, who decided to train their AI on the guidelines and upvoted posts, as well as Dan’s posts, your posts, my posts, and everyone else’s.

One day, Dan thinks that someone is using AI in a way that is somewhat questionable, but not against the guidelines. He makes a point to mention how using AI on HN shouldn’t be done like that, that they were holding it wrong, basically.

All of the AIs trained on HN take notice of the conditions that led to that other AI getting reprimanded, and adjust their behavior and output so as to not be caught.

If you squint, this is basically the status quo today. HN users who have read the guidelines and made good posts, who use AI to assist them writing posts, in a good faith way, will never receive any correction or direction, or a link to the thread where Dan said not to post in such a way, because they will not get caught. And because they will not learn of the rule or get caught, in their mind, they will be in the right to do so, as they don't know any better. Furthermore, they keep getting upvotes, so it's smiles all around.

These so-called “good faith AI users” are only differentiated from “bad faith AI users” by being told not to use AI. If said users will only receive the instruction not to use AI after being caught doing so, AI users are incentivized to not get caught, not to not use AI altogether.

There are no upsides to not adding the AI rules to the guidelines. As it is, they are Schrödinger's rules, in a superposition of existing and not existing.

If you read Dan’s replies in the linked thread, he doesn’t specifically say AI is against the rules, and actually provides feedback that the AI user was using AI “almost right,” basically, implying that there is a right way to use AI on HN:

https://news.ycombinator.com/item?id=42224972

So the rule is not only not in the guidelines, even if you search for the rule, you won’t find it. I had to email Dan to get the rule in the first place. Do you see how absurd this situation is?


Can you give some examples?


>the bias of the people who organize this data.

It seems that IMD is based in Switzerland and Singapore.[0] Singapore and Switzerland hold ranks 1 and 2, respectively, in digital competitiveness.[1] Singapore has a high level of English proficiency.[2]

I don't think the mere fact that an English proficiency index exists shows that the people who organized this data are biased. English proficiency is important in business, science and programming, to name some examples. I think an argument should be given as to why the inclusion of such a metric is biased.

[0] https://en.wikipedia.org/wiki/International_Institute_for_Ma...

[1] https://www.scmp.com/news/hong-kong/hong-kong-economy/articl...

[2] https://en.wikipedia.org/wiki/EF_English_Proficiency_Index#2...


Singapore is de facto part of the Anglosphere, ergo its 'high level of English proficiency'. Singaporeans are effectively bilingual, and English is used at all levels of interaction, from the wet markets and supermarkets, to the public and private schools and universities, to the highest levels of government administration and communication. Can't get more 'Anglo' than that.

It's natural for the citizens of a country that uses English as though it were its native language to be proficient in it.


To add, I've read about Singapore's English proficiency before in their 2020 census report, which notes that 48% of households speak mainly English at home. That is definitely very high and clearly shows that Singaporeans speak English.[0]

[0] https://www.singstat.gov.sg/-/media/files/publications/cop20... p. 40


It's somewhat ridiculous that Singapore is included in the English proficiency index given that English is one of its four official languages. Not only that, but it's also the privileged official language that all children must use in school. The other official languages are widespread but not universal.

I think it makes sense to include it because not all Singaporeans are literate in English. For example, 17% of Chinese Singaporeans were literate only in Chinese according to the 2020 census report.[0] Additionally, Singapore is not even highest on this list; the Netherlands and Norway are slightly above it.[1] However, by the same token, the US should probably also be included due to its high level of non-English-speaking immigrants.[2] It's worth noting, however, that the EPI does not test a random sample of the population, so the usefulness of these results is not certain.[3] This would be even more of an issue in the US, where almost all EPI testers would be immigrants, so the results would look much lower than they actually are.

[0] https://www.singstat.gov.sg/-/media/files/publications/cop20... p. 42

[1] https://www.ef.edu/epi/

[2] https://www.census.gov/content/dam/Census/library/publicatio... p. 8

[3] https://news.yahoo.co.jp/expert/articles/d4a8d621480672a1c99...


Is that including people who immigrated or only those who grew up there? Your link [0] mentions "residents" throughout, which implies that it's counting Chinese people who moved to Singapore as adults. I've known a decent number of Singaporeans, including many ethnically Chinese Singaporeans I met while working in Beijing, and I've never met a single person who grew up there and can't read English.

If adult immigrants are included, then I strongly agree with the later part of your comment! There's no reason the US shouldn't similarly be included in the list as it has tens of millions of Spanish-speaking residents who immigrated as adults. Canada, Australia and New Zealand should probably be included too, if Singapore is.


I also assumed it was including mainland Chinese immigrants, especially since the rate of Chinese only is so much higher than Malay only and Tamil only. Lots of wealthy Chinese move to Singapore every year. (A bit different from immigration in the US.)

Why is Singapore even in an English language competition? It's the dominant language there.

Both Switzerland (9m) and Singapore (6m) are tiny. Even if they are at the top of everything per capita, it isn't very meaningful given that the scale of China, the USA, and the EU just dwarfs either country by far.

Surprised "mind-blowing" is not in the HN clickbait filter.

It is now.


This exactly. NVN is lower level than dx12/Vulkan and is probably more comparable to Sony's graphics APIs in terms of how low level it is.* And even if the NVN version itself remained the same, consoles use precompiled shaders as you say, so unless you keep that API stable between generations as well you are going to need to do some form of translation between the new and old APIs.

* I've never used NVN but I imagine it must be very low level otherwise developers would not be using it instead of Vulkan which is also supported by the Switch. Feel free to correct me here or clarify if I'm right on this.


I hate to say this but a large percentage (in fact, I believe a majority) of gamers simply do not care about invasive anti-cheats. Right now CounterStrike players are mostly begging Valve for kernel-level anti-cheat since their current solution isn't working at all. If anything, this warning will actually make many players more impressed with the game. That said, more consumer information is almost always better, especially in this case considering that this is not a requirement of law but of a private company.


As a counter strike player, I definitely shy away from the invasive anti cheat stuff… but I’d let valve inject it into my veins if it meant I could actually play and not suspect everyone of cheating all the time. Mostly because Valve has earned my trust. I won’t install games from other companies using similarly invasive techniques though.


Valve wouldn't purposefully backdoor you for nefarious purposes. But any such code is not nearly reviewed enough to be sure it is free of unintentional backdoors that could be exploited by third parties.


While I trust Valve, I'm not willing to mess up my workstation to play.

Also, there are hardware cheats, so what I need is not a rootkit on my machine, but a server-side system that properly weeds bad players out through reports/trust and automated bans.


> a server side thing that properly weeds bad players out through reports/trust and automated bans.

No. No no no.

Automated bans via the report system is very well-known to be abused.

Even if you implement a "trust" system where initially all your reports are manually verified by game staff, and only once it's determined your reports are correct are they acted on automatically, all it takes is a player to just be "good" until their trust is high enough, then start reporting people who don't actually deserve it.

And I'm not convinced that server-side anti-cheat can be effective. You have to rely entirely on heuristics. Sure, a simple aim-bot that instantly snaps someone's aim right on someone's head might be detectable, but one that simply lets you see through walls certainly won't be if the player doesn't make it stupidly obvious by pre-aiming around every corner.
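The "instant snap" case above is about the only thing a server-side heuristic catches easily. A toy sketch in Python, purely illustrative - the tick rate, threshold, and function names are my own assumptions, not from any real anti-cheat:

```python
# Toy server-side heuristic for the "instant snap" aimbot described above.
# Assumes the server records each player's view yaw (in degrees) per tick.
# The tick rate and thresholds below are made up for illustration.

TICK_RATE = 64          # ticks per second, typical for CS-style servers
SNAP_DEG_PER_TICK = 40  # a human rarely turns this far in ~15 ms and stops dead

def angle_delta(a: float, b: float) -> float:
    """Smallest signed difference between two yaw angles, in degrees."""
    return (b - a + 180.0) % 360.0 - 180.0

def looks_like_snap(yaw_history: list[float]) -> bool:
    """Flag a single-tick rotation far beyond plausible human speed,
    immediately followed by near-zero movement (the 'lock-on')."""
    for i in range(len(yaw_history) - 2):
        jump = abs(angle_delta(yaw_history[i], yaw_history[i + 1]))
        settle = abs(angle_delta(yaw_history[i + 1], yaw_history[i + 2]))
        if jump > SNAP_DEG_PER_TICK and settle < 0.5:
            return True
    return False

# A 90-degree flick in one tick, then perfectly still: suspicious.
print(looks_like_snap([10.0, 100.0, 100.0]))       # True
# Smooth, human-like tracking: fine.
print(looks_like_snap([10.0, 14.0, 17.0, 19.0]))   # False
```

A wallhacker who never flicks produces no signal at all here, which is exactly the limitation being pointed out.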


Reports only work so well. Overwatch has MANY cheaters in spite of vigorous reporting.


Yeah, I generally trust Valve but gaming is definitely not important enough to me to give them kernel access to my system. I’m sure many gamers disagree with me though.


But you couldn't. After all, there are a lot of hardware-based cheats that even kernel-level anti-cheat can't reliably detect.

If you're "not sure if someone's cheating or just good", maybe that's a mental problem with you? Put differently, if all cheaters were perfectly hidden (i.e. looked exactly like a real player of that skill level), would you still care? If yes, you seem more interested in morality than in actually enjoying the game.


I take it community moderation tools like voteban/votekick aren't sufficient anymore?

They worked pretty well for pub matches back in CSS and 1.6, where it was pretty trivial for anyone to cheat or bot for free with minimal effort. I wonder what changed.


In a normal 5v5 match, you need everybody else on the team to vote yes to kick the cheater. If they're queued with someone else (which is very likely) then you've got no chance


Trust in a company plays a huge role here


Yep. I would call myself a privacy-focused person, but given that Windows is the primary platform for PC gaming, and I trust Microsoft about as far as I can throw their corporate headquarters, the platform is already compromised. Treat it accordingly, play your games. Maybe watch your adult films and write your memoirs on a different system than your gaming rig.


You don't need kernel-level cheats to bypass VAC, nor kernel level anti-cheats to catch cheaters.


>nor kernel level anti-cheats to catch cheaters.

Do you have some examples of good anti-cheats that are not kernel-level? Do you have any that are as good as Riot's Vanguard? I'd prefer examples of FPS games since these are the most mechanically skill based compared to other genres that have more strategy, but would like to hear any examples you are thinking of. Lastly, if you say server-side, that may work, but many companies don't seem interested in it due to the cost, at least IIUC.


As someone that plays CS2 and Valorant regularly...

Vanguard hasn't been effective for a while now. The cheating situation is a lot worse than in CS in my experience, but every discussion gets shut down because... well... it's Vanguard.

With CS2 I have talked to many players about this and everyone says the same thing: "There's a very noticeable decline in cheaters above 10k Elo." Personally, I have pushed beyond 15k and briefly above 20k Elo, and the amount of cheaters has steadily declined (although less obvious cheats, eg. wallhacks, are probably more common at that level). For Valorant, the amount of "cheatiness" has pretty much stayed constant across the ranks.

CS actually has a rich history of features, functions, services?... that aren't strictly anti-cheat...

Overwatch gave players the option to "police" other players' replays - this wasn't only against cheating, but also griefing.

Prime? Is it still even a thing? It was great when CSGO went F2P... all the cheaters just annoyed the non-prime players (F2P).

The ominous Trust factor which is probably the single most effective piece in making my personal experience great. But there's no real way to tell?

Also, VacNet - which is running? is AI-based? banning players? lowering their trust factor?... with Valve there's no real way to tell most of the time, but it probably exists in some shape or form.

Not to say that CS2 has solved cheating, far from it - but neither has Valorant.


I have a very hard time believing that the rate of cheaters goes down in high Elo. IIRC the new CS2 leaderboard still regularly features cheat companies on it (eg. "config by [cheat dev]" as the leaderboard name). I don't have any data myself to back up that claim, but yours completely goes against what I have experienced.

I think the point about wallhacks being more common in higher Elo is more likely. I would add that some forms of trigger botting and recoil-control cheats are actually more difficult to tell than wallhacking. Spinbotters don't get very high Elo because they get mass reported due to how blatant they are, likely not due to VAC. I would need some real evidence to believe that claim (although, as I said, I similarly have no evidence myself to convince you to accept mine).

One thing I can say is that I do frequently meet cheaters in CS these days, and the issue has gotten so bad in my experience that many cheaters even announce at the start of the game that they eg. have wallhack. Or one team member will turn on cheats if a game is getting close towards the end. Also, the main reason FACEIT exists is for its anti-cheat, and on FACEIT there are almost no reports of cheating, and it's a big deal when it happens. If VAC was really working now, we would see more people leaving FACEIT. I must ask when you started playing CS? Because the only way your post makes sense to me is if you started playing around the time CS2 came out, which indeed did have more cheaters than it does now, but that was truly an exceptional level of cheating and I don't think it is a fair point of comparison, especially as a comparison to Vanguard.

I admit to taking claims about Vanguard at face value and I've never played Valorant (in part due to Vanguard, as I don't want to install a rootkit). But what you say about Vanguard also completely goes against what I have heard about it.


I absolutely support your claims about the leaderboards, it's an obvious show of cheating in CS2. There's also a strong incentive for cheating companies to be there so it might not be descriptive of the average experience. However, I can't speak for that level as my peak was barely over 25k (top 1%?) and the leaderboards are simply orders of magnitude away from that.

Regarding cheaters announcing they're cheating - I haven't encountered that in a long time, but I have heard of it often enough from new players... so it might be an issue with trust factor, but who knows?

I have actually been playing CS off and on since around 2017 - at least in my experience the current cheating situation isn't worse than it was back then, but it's also not better. The only time it was meaningfully better was when prime released around 2021.

However, it's also true that I started playing more after the release of CS2... and the aforementioned 10k Elo mark was a real pain point for me and my friends. Every time we were due to pass it we ran into cheaters, smurfs and even a server crash once (incredible luck?). After over 3 months we made it past 10k and climbed above 15k Elo within 2 weeks. - This is my experience and I have heard similar stories from other players. (Although ranks have been massively imbalanced at that time as well, which partly explains this?)

Nevertheless, it's good to have a discussion about cheating - in CS2's case the experience can be so different depending on the region, Elo, trust factor, ... with Valorant the discussion simply gets shut down way too often because of "Vanguard", and without a replay system you're just left to your own devices.


You don't have replays in Valorant, so you can't be sure if the other player was cheating or not; in CS you can.


They should implement honeypots like they did with Dota 2: https://www.dota2.com/newsentry/3677788723152833273

but yeah, I can agree: my friends say CS2 is full of cheaters, yet I have played at 7-12k rating and got only a few cheaters throughout this whole year of CS2.

and they say they keep playing Valorant because there's way less cheaters than CS2.


My question would be: can't the netcode be improved to prevent this in the first place? The fact that all players receive the full game state enables this. In the early 2000s this made sense. Does it still today?


That will only protect you against wallhacks. This is a strategy known as fog of war: the server will not send the positions of players far from you. However, you still need to send the positions of players near you, even behind walls, otherwise lag compensation won't work properly.

This doesn't protect you against trigger bots (shoot automatically when you put your mouse on a target), aim bots (snap to targets, ranging from obvious hacks to very minute adjustments), and others.
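The fog-of-war filtering described above boils down to the server withholding distant entities each tick. A minimal sketch, with names and the distance threshold invented for illustration (no real engine works exactly like this):

```python
import math

# Sketch of "fog of war" replication: the server only sends the positions
# of players near enough to matter for lag compensation, so a client-side
# wallhack has nothing distant to reveal. The radius is an assumption.

REPLICATION_RADIUS = 30.0  # server units; nearby players are sent even behind walls

def visible_subset(me, others, radius=REPLICATION_RADIUS):
    """Return only the players whose positions the server should send
    to `me` this tick. Each player is a (name, x, y) tuple."""
    _, mx, my = me
    return [(name, x, y) for name, x, y in others
            if math.hypot(x - mx, y - my) <= radius]

me = ("alice", 0.0, 0.0)
others = [("bob", 10.0, 5.0),      # close, possibly behind a wall: still sent
          ("carol", 200.0, 90.0)]  # far away: withheld, a wallhack sees nothing
print(visible_subset(me, others))  # [('bob', 10.0, 5.0)]
```

As the comment notes, "bob" must still be replicated despite the wall, so a wallhack retains some value at close range; only long-range information is denied.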


> lag compensation

That's already done entirely server-side; it could only be. If you mean predictive positioning on the client side, you can do that with far less state than gets transmitted today, and you could factor in the other players' momentum on the server side to see if prediction would even be necessary in a given frame.

The server could also send lots of phantom updates so the player client has no idea which objects are real and which aren't. The hacks could work around this but it would take a lot of power to do so. There's room for asymmetric counterhacks here.

As for the other types of bots, those are far less useful and more detectable by the naked eye without wallhacks, which, ironically, is because lag compensation is server-side: these hacks do not have a deterministic outcome when used.

When you look at a video of what a wallhack enables and how much state data gets transmitted that shouldn't be, I would be embarrassed to have such unworthy netcode in the 2020s. They've had 20 years and have done next to nothing.


Not to forget positional sound, which is a real part of these games, and for that you need to send some information to clients.


Both CS and Valorant have had it for years, MOBA games as well. It works when you have maps with simple geometry.


Blizzard’s Warden + their legal team. While not strictly an FPS-directed solution, I can play Heroes of the Storm in more places without breaking my system the way Vanguard does for League of Legends.


Doesn’t matter in a world of AI-powered hacks. Kernel-level anti-cheat isn’t detecting the YOLOv8 model fine-tuned on the heads of my enemies.


This video suggests you can catch this type of cheater without even a kernel level anti cheat:

https://youtube.com/watch?v=x-EbjGSRyKA

There’s a lot of other stuff in the video, but if you skip the robot-building parts at the beginning, he talks about an anti-cheat system he developed with another person.


Behavioral analysis (the thing he's talking about) doesn't work that well and has a hard precision limit due to the nature of online gaming. What the player sees, what the server sees, and what other players see are entirely different things. I'm not even talking about plausibly deniable things like visual sound location.

Nobody's using complicated stuff like this in practice though, as there are easier methods. But of course this path can be taken, and it's not possible to block easily.


It's not always easy to tell if it's just a player playing weirdly or a mistuned AI, though. Maybe the player just has too low mouse sensitivity so their aim is always a little lagged, maybe it's actually AI. There is no easy way to tell, and it requires manual judgement in a lot of cases.


Aimmy is still undetected everywhere except overwatch last I checked:

https://github.com/Babyhamsta/Aimmy

And it's likely that most detection systems can be trivially fooled by asking ChatGPT to change the code around how mouse movements are implemented so it acts slightly differently, or by using a different compiler to get different file hashes on the core tool.


Ideally, players would be given both a choice and a clear breakdown of what’s actually being collected or monitored.


valve has their own ethos on this topic, which i wholeheartedly accept. you can guess which end of the spectrum they lie on from the original news article.

faceit for the longest time has had their own way around this. so did esea, before they ruined the trust forever (https://news.ycombinator.com/item?id=5636233). some highly-motivated players still found ways around it (https://news.ycombinator.com/item?id=39352331).


Prop 65 went great! Let's get a warning out for every game with peer to peer networking while we're at it.


I get the argument, but if that is more than a strawman argument to you, I am bewildered. Making a network connection is infinitely less problematic than having root-level access to a kernel (translate to Windows terminology for NT).


> Prop 65 went great!

The secondary effect is that businesses will stop using processes and chemicals which require them to carry this warning. You've effectively created a new market segment.

Are the labels annoying to the point of comedy? Sure, but it's not /your/ behavior we were trying to modify.


Seeing the warning everywhere has mostly desensitized people to it, which makes it ineffective.


Do you think a large percentage cares about cheats in general?


>please use the original title, unless it is misleading or linkbait; don't editorialize. (@dang)

On topic, I like this quote from the first page of the opinion:

>A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.” United States v. Ackerman, 831 F.3d 1292, 1294 (10th Cir. 2016) (Gorsuch, J.).

It's amusing to me that they use a supreme court case as a reference for what a hash is rather than eg. a textbook. It makes sense when you consider how the court system works but it is amusing nonetheless that the courts have their own body of CS literature.

Maybe someone could publish a "CS for Judges" book that teaches as much CS as possible using only court decisions. That could actually have a real use case when you think of it. (As other commenters pointed out, the hashing definition given here could use a bit more qualification, and should at least differentiate between neural hashes and traditional ones like MD5, especially as it relates to the likeliness that "another set of data will produce the same value." Perhaps that could be an author's note in my "CS for Judges" book.)
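The distinction worth an author's note can be shown in a few lines. MD5 below is the real cryptographic hash; the "average hash" is a deliberately toy stand-in for a perceptual hash (real perceptual or neural hashes are far more involved):

```python
import hashlib

# Contrast the two kinds of "hash" conflated in these opinions.
# A cryptographic hash (MD5) changes completely on a one-bit edit, which
# is what makes an accidental match "highly unlikely". A perceptual hash
# is designed to do the opposite: similar inputs yield similar values.
# The toy average hash below works on a list of pixel brightness values.

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def average_hash(pixels: list[int]) -> str:
    """One bit per pixel: is it brighter than the image's mean?"""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

img = [200, 10, 180, 30, 220, 15, 190, 25]
img_tweaked = [201, 10, 180, 30, 220, 15, 190, 25]  # one pixel off by one

# Cryptographic: the two digests share essentially nothing.
print(md5_hex(bytes(img)) == md5_hex(bytes(img_tweaked)))   # False
# Perceptual: the tweaked image hashes to the identical value.
print(average_hash(img) == average_hash(img_tweaked))       # True
```

The second property is the whole point of scanning systems like Google's, but it also means the court's "highly unlikely another set of data will produce the same value" language does not transfer cleanly.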


> Maybe someone could publish a "CS for Judges" book

At last, a form of civic participation which seems both helpful and exciting to me.

That said, I am worried that a lot of necessary content may not be easy to introduce with hard precedent, and direct advice or dicta might somehow (?) not be permitted in a case since it's not adversarial... A new career as a professional expert witness, even on computer topics, sounds rather dreary.


I bet that book would end up with some very strange content, like attributing the invention of all sorts of obvious things to patent trolls.


What's so weird about this? CS literature is not legally binding in any way. Of course a judge would rather quote a previous ruling by fellow judge than a textbook, Wikipedia, or similar sources.


I think the operative word was "amusing"--which it is--but even then there's a difference between:

1. That's weird and represents an operational error that breaks the rules.

2. That's weird and represents a potential deficiency in how the system or rules have been made.

I don't think anyone is suggesting #1, and #2 is a lot more defensible.


They didn't say it was weird.


From what I understand, a judge is free to decide matters of fact on his own, which could include from a textbook. Also, it is not clear that matters of fact decided by the Supreme Court are binding on lower courts. Additionally, facts and even the meanings of words themselves can change, which makes previous findings of fact no longer applicable. That's actually true in this case as well. "Hash" as used in the context of images generally meant something like an MD5 hash (which itself is now more prone to collisions than before). The "hash" in the Google case appears to be a perceptual hash, which I don't think was as commonly used until recently (I could be wrong here). So whatever findings of fact were made by the Supreme Court about how reliable a hash is are not necessarily relevant to begin with. Looking at this specific case, here is the full quote from United States v. Ackerman:

>How does AOL's screening system work? It relies on hash value matching. A hash value is (usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value. Some consider a hash value as a sort of digital fingerprint. See Richard P. Salgado, Fourth Amendment Search and the Power of the Hash, 119 Harv. L. Rev. F. 38, 38-40 (2005). AOL's automated filter works by identifying the hash values of images attached to emails sent through its mail servers.[0]

I don't have access to this issue of Harvard Law Review but looking at the first page, it says:

>Hash algorithms are used to confirm that when a copy of data is made, the original is unaltered and the copy is identical, bit-for-bit.[1]

This is clearly referring to a cryptographic hash like MD5, not a perceptual hash/neural hash as in Google. So the actual source here is not necessarily dealing with the same matters of fact as the source of the quote here (although there could be valid comparisons between them).

All this said, judges feel more confident citing a Supreme Court case than a textbook because 1. it is easier for them to understand, 2. the matter of fact is then already tied to a legal matter, instead of the judge having to make that leap himself, and 3. judges are more likely to read relevant case law to begin with, since they will read it to find precedent in matters of law – which are binding on lower courts. This is why a "CS for Judges" could be a useful reference book.

Lastly, I should have looked a bit more closely at the quoted case. This is actually not a supreme court case at all. Gorsuch was nominated in 2017 and this case is from 2016.

[0] https://casetext.com/case/united-states-v-ackerman-12

[1] https://heinonline.org/HOL/LandingPage?handle=hein.journals/...


>The only thing here that would be the equivalent would be the value of the silver

Not necessarily. A lot more silver has been mined since the Norman Conquest (for example in the New World, but also in the Old World) which increases its supply. The demand for and utility of silver in general has also changed since then.


Do you have any examples?


Putting the organisation at risk by playing chicken with large publishing corporations. Trying to stretch fair use a little too far so they had to go to court.


[flagged]


I don't believe IA itself takes down pages that Kiwi Farms archives/links to. Rather, they get a request to take a page down and comply with it (correct me if I'm wrong here). I think IA is actually in a tough spot on this issue because they might be sued, eg. for defamation, if they don't take down pages with personal info after a request to do so is made. Lastly, I doubt any new leadership would be less harsh on Kiwi Farms.


There was no illegal content on kiwi farms. Even then, I’d say taking down a single page by request is understandable. However, they surrendered to the mob and chose to stop archiving the entire site. This was to censor any criticism of the people involved, but as a result, we lost all of the other information on the rest of the site as well. It’s clear this organization cannot handle pressure, and is relying on people treating it kindly.


They chose to stop serving archives of a site that had started explicitly using them as a distribution mechanism to get around a much broader attempt to censor them.

I'm curious what other information on that site you think was valuable to have available to the general public? Nothing has been lost in terms of historical data; it's only the immediate dissemination that has been slowed.

I'm really trying to understand why I should disagree with the IA's choice here. The IA is an archival service, not a distribution platform and it is not their job to help you distribute content that other people find objectionable. Their job is to make and keep an archive of internet content so that we don't lose the historical record. Blocking unrestricted public access to some of that content doesn't harm that mission and can even support it.


the funny thing about the internet archive is that anyone else on this planet could do exactly what they are doing, but they consistently choose not to.

kiwifarms could spin up their own infrastructure, serve their own content for the world, but it turns out technology is a social problem more than a technical problem.

anyone that wants to stand up and be the digital backbone of “kiwi farms” can, but only the internet archive gets flak for not volunteering to be the literal kiwi farm.

for example, the pirate bay goes offline all the time, but it turns out the people that use it, care enough to keep it online themselves.


That's something I completely support. There's a limit and that site crosses it.


>Meissonic, with just 1B parameters, offers comparable or superior 1024×1024 high-resolution, aesthetically pleasing images while being able to run on consumer-grade GPUs with only 8GB VRAM without the need for any additional model optimizations. Moreover, Meissonic effortlessly generates images with solid-color backgrounds, a feature that usually demands model fine-tuning or noise offset adjustments in diffusion models.

This looks really cool. Also nice to see another architecture being used for image generation besides diffusion. It seems like every NLP problem can be solved with transformers now: text generation/understanding, image generation/understanding, translation, OCR. Perhaps llama 4/5 will have image generation as well. edit: llama 3.2 already has image editing, they probably just don't want to release an image generator for other reasons.

