Hacker News | shit_game's comments

This issue (human attestation) is the product of these AI companies. They are poisoning the well only to sell the cure. This may not have been the initial plan of many of these companies, but it is the eventual end goal of all of them. Much like war profiteering, selling both the problem and the solution simultaneously has yet to be outlawed, but has long been masterfully capitalized upon, and will continue to be, because nobody will stop it.

Years ago (around 2020, when GPT-2 and GPT-3 became publicly available) I noticed, and was incredibly critical of, how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" after reporting AI-generated comments as spam. Before that, I had posted about how I believed the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts easily blockable by the most rudimentary spam filters, gibberish generated by Markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human perception, which required carefully managing the temporal relevance of text content, its coherence (in relation to comment chains), and at least a basic attempt at appearing to be organic.

After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.

This is where AI companies come in to monetize the disaster they have created: by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell the services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.

These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.


>It used to feel fun but now it's miserable.

It's not their job to entertain you.


'Delight the customer' is a basic tenet of business. A business that wants repeat customers, that is.

The issue with creating some hidden maturity heuristic for accounts is that it will be gamed just the same as any other, except that age alone is the simplest heuristic to game. You can simply do nothing for incremental periods of time and then begin testing aged accounts to roughly determine the minimum age an account must reach to become "trusted".

Bot prevention is a very difficult constant game of cat and mouse, and a lot of bot operators have become very skilled at determining the hidden metrics used by platforms to bless accounts; that's their job, after all. I've become a big fan of lobste.rs' invitation tree approach, where the reputation of new accounts rides on the reputation of older accounts, and risks consequence up the chain. It also creates a very useful graph of account origin, allowing for scorched earth approaches to moderation that would otherwise require a serious (and often one-off) machine learning approach to connect accounts.
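The invitation-tree idea can be sketched in a few lines. This is a minimal toy model (all names hypothetical, not lobste.rs' actual implementation): each account records who invited it, which gives moderators both the origin graph and a cheap scorched-earth operation.

```python
# Toy sketch of an invitation tree: each account knows its inviter (parent)
# and its invitees (children), so banning can propagate down the tree.

class Account:
    def __init__(self, name, inviter=None):
        self.name = name
        self.inviter = inviter            # parent in the invitation tree
        self.invitees = []                # accounts this one invited
        self.banned = False
        if inviter is not None:
            inviter.invitees.append(self)

def ban_subtree(account):
    """Scorched-earth moderation: ban an account and everyone it invited."""
    account.banned = True
    for child in account.invitees:
        ban_subtree(child)

# Example: root invites a spammer, who invites two sockpuppets.
root = Account("root")
spammer = Account("spammer", inviter=root)
sock1 = Account("sock1", inviter=spammer)
sock2 = Account("sock2", inviter=spammer)

ban_subtree(spammer)
print([a.name for a in (root, spammer, sock1, sock2) if a.banned])
# → ['spammer', 'sock1', 'sock2']
```

The "risk up the chain" part would just walk `inviter` links instead of `invitees`, penalizing ancestors whose subtrees keep getting banned.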


> Not adding the domain to Google Search Console immediately. I don't need their analytics and wasn't really planning on having any content on the domain, so I thought, why bother? Big, big mistake.

I'm not particularly familiar with SEO or the massive black box that is Google Search - is this really as critical as the author makes it seem? I have both .lol and .party domains, both through porkbun (and the TLDs seem to be administered by Uniregistry and Famous Four Media, respectively), and both can be found on Google Search. It seems like this preemptive blacklisting would be the result of some heuristics on Google's end; is .online just one of the "cursed" TLDs like .tk?


> is this really as critical as the author makes it seem?

It is critical in the sense that if you want to appeal the decision in a case like this, it will go much better if you pre-verified that you own the domain.

(I don't think it has much effect on google search placement at all)


Yeah I'm guessing the TLD was the main signal, based on other comments linking to a thread about "Pinggy", who was also using a .online. The fact that Namecheap is giving them out for free means they probably are more scammy on average.

I've also never added domains to Google Search Console and haven't had blacklisting issues other than with a free .ml (another "cursed" TLD) site that was by default assumed to be spam by Facebook Messenger.

It's unfortunate that this category exists, but I don't share the OP's .com purism; I've used a mix of TLDs and even the cheap ones like .fyi and .cc haven't come under extra scrutiny as far as I can tell.


Similarly to pixel sorting, effects like these fall under the New Aesthetic[0], which primarily communicates "digitalness" in some abstract sense. It's cool, but definitely not Glitch. Emulating some intuitive perception of "this is something computeresque and it's broken or acting how it shouldn't" has tons of applications in media, particularly in commercial creative workflows where actually getting down and dirty with file formats or hardware is cost- or application-prohibitive, but it often draws strong criticism from glitch artists.

There is a philosophical parallel between this "hard glitch" versus "glitch aesthetic" contrast and the criticisms of AI-generated images versus manmade art, largely centered on the ethos of the work. There are also undeniable differences in composition between hard glitches and New Aesthetic media: most hard glitches are ugly, as they're not generally designed to be visually appealing or communicative in the way that New Aesthetic work deliberately is. The deliberate composition and curation (or complete lack thereof) of hard-glitch elements that makes up much glitch art is arguably just as important to its ethos as the hard glitches themselves, IMO.

[0] https://en.wikipedia.org/wiki/New_Aesthetic


Lots of the AIisms with letters remind me of tom7's SIGBOVIK video Uppestcase and Lowestcase Letters [advances in derp learning]

https://www.youtube.com/watch?v=HLRdruqQfRk


This is the result of the long-planned desire for consumer computing to become subscription computing. Ultimately, there is only so much that can be done in software to "encourage" (read: coerce) vendor-locked, always-online, account-based computer usage; there are viable ways for people to escape these ecosystems via the ever-growing plethora of web-based productivity software and Linux distributions that are genuinely good, user-friendly enough, and 100% daily-drivable, but these software options require hardware.

It's no coincidence that Microsoft decided to take such a massive stake in OpenAI. Opening a new front for vendor lock-in by inserting AI into everything they provide, multiplying their existing market share, is an obvious choice; but leveraging the insane amount of capital being thrown into the cesspit that is AI to make consumer hardware unaffordable (and eventually unusable, due to remote attestation schemes) further entrenches their position. The end goal is for OEM computers that meet the hardware requirements of their locked OS and software suite to be the only computers that are a) affordable and b) "trusted".

I don't want to throw around buzzwords or be doomeristic, but this is digital corporatism in its endgame. Playing markets to price every consumer globally out of essential hardware is evil, something a just world would punish relentlessly and swiftly, yet there aren't even crickets. This is happening unopposed.


What can we do? Serious question.

It's so hard to grasp as a problem for the lay person until it's too late.


I guess we can support open hardware projects like RISC-V, and homegrown chips. DIY chips will be expensive and very limited at first, so hopefully hobbyists will prioritize efficiency while they get better.

Fortunately we won't ever see a shortage of monitors and input devices, because then how would we consume the rent-a-remote-desktop services.


Honestly? I don't know. I don't think there really is a viable solution that preserves consumer computation. Most of the young people I know don't really know or care about computers. Actually, most people at large that I know don't know or care about computers. To them, they're devices that play videos, access web storefronts, run browsers, do email, save pictures, and play games. Mobile phones are an even worse wasteland of "I don't know and I don't care". The average person doesn't give a shit about this being a problem. Coupled with the capital interest in making computing a subscription-only activity (leading to market activity that prices out consumers and lobbying that outlaws alternatives), this spells out a very dire, terrible future in which computers require government and corporate permission to operate on the internet, and potentially in one's home.

Things are bad and I don't know what can be done about it because the balance of power and influence is so lopsided in favor of parties who want to do bad.


Presumably the answer is the same as nearly every real problem we face today: organize. Yes, it will be tough to organize around this problem specifically, but imagine a truly muscular working class movement like once existed in the early 20th century in many places: they raised armies, published their own newspapers, ran radio stations, started universities, even ran cooperative factories, all under the active opposition of capital. Surely a modern version of such a movement would recognize the need for secure, trustable, affordable, ad-free computing devices and invest accordingly.

It will take decades to build this power, just like it did then, but the alternative (which we are witnessing in slow motion in the meantime) is too grim to let stand.


They certainly didn't have mills as we know them in the 1700s, but lathes, drills, and subtractive manufacturing had been in practice for millennia. You could say the parts were "machined by hand". Most early firearms (barring large-bore guns like cannons) were made from forged steel or iron, which is significantly stronger than cast iron due to its lower carbon content and regular grain structure. These forged parts were then worked by gunsmiths with cutters and abrasives to produce parts within tolerance for their mechanisms. Cast iron (or, more typically in early warfare, bronze) was suitable for cannons and large-bore guns because of the mass of the finished gun; more metal meant the gun could withstand more shock, though even then they could fail catastrophically due to material fatigue or failure.


Well, the kind of guns politicians are afraid people will make at home are not intended for durability, but for things like street crime, school shootings, etc., where it's just a one-and-done affair.


Complex manufacturing of improvised firearms has been practically made obsolete by the commodification of both steel tubing and cartridges. "Pipe guns" are incredibly easy to make, and require little more than a pipe, a cap, and a drill (which can sometimes be omitted as well). Many common cartridge diameters very closely or exactly match commercially available pipe diameters, and the hardware to make a single-shot firearm is ubiquitous in any store that sells plumbing supplies. Pipe guns are simple and cheap enough to make that some people abuse gun buy-back programs by deliberately manufacturing pipe guns for pennies and pocketing the money these programs offer [0]. These are real, functional guns, and I promise they're simpler, faster, and cheaper to manufacture than any 3d printed gun.

0: https://www.thefirearmblog.com/blog/2014/11/17/handing-zip-g...


I assume this is mostly for a shotgun-shell affair? Otherwise the difference in bore, and particularly the seam present in almost all steel pipe (unless it's drawn-over-mandrel, which is a more specialty product), would make it pretty dodgy to fire a proper round.


>I can't help but feel this is just Google trying to pull the ladder up behind then and make it more difficult for other companies to collect training data.

I can very easily see this as being Google's reasoning for these actions, but let's not pretend that clandestine residential proxies aren't used for nefarious things. The vast majority of social media networks will ban (or, more generally and insidiously, shadow ban) accounts/IPs that use known proxy IPs. This means that they are gating access to their platforms behind residential IPs (on top of their other various black boxes and heuristics like fingerprinting). Operators of bot networks thus rely on residential proxy services to do their work, which ranges from mundane things like engagement farming to outright dangerous things like political astroturfing, sentiment manipulation, and propaganda dissemination.

LLMs and generative image and video models have made the creation of biased and convincing content trivial and cheap, if not free. The days of "troll farms" are over, and now the greatest expense for a bad actor wishing to influence the world with fake engagement and biased opinions is access to platforms, which means accounts and internet connections that aren't blacklisted or shadow banned. Account-maturity and reputation farming is also getting a massive boost from these tools, and as an independent market it similarly requires internet connections that aren't blacklisted or shadow banned. Residential proxies are the bottleneck for the vast majority of bad actors.
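The platform-side gating described above often boils down to something as crude as an ASN lookup. A toy sketch (the ASN-to-category mapping here is illustrative, not any platform's real list):

```python
# Crude proxy gating: traffic whose source IP resolves to a known hosting/
# datacenter ASN is treated as proxy traffic; residential ISP ASNs pass.
DATACENTER_ASNS = {
    16509,  # Amazon (AWS)
    14061,  # DigitalOcean
    13335,  # Cloudflare
}

def is_likely_proxy(asn: int) -> bool:
    """Return True if the ASN is on the hosting/datacenter blocklist."""
    return asn in DATACENTER_ASNS

print(is_likely_proxy(16509))  # hosting ASN → True
print(is_likely_proxy(7922))   # Comcast, a residential ISP ASN → False
```

This is exactly why residential proxies are valuable to bot operators: their traffic exits through ASNs on the "residential" side of a list like this, sidestepping the heuristic entirely.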


> The vast majority of social media networks will ban - or more generally and insidiously - shadow ban accounts/IPs that use known proxy IPs. This means that they are gating access to their platforms behind residential IPs (on top of their other various blackboxes and heuristics like fingerprinting)

Social media will ban proxy IPs, yet gleefully force you to provide your ID if you happen to connect from the wrong patch of land. I find it difficult not to support any and all attempts to bypass such measures.

The fact is that there's now a perfectly legitimate use for residential proxies, and the demand is just going to keep growing as more websites decide to "protect their content", and more governments decide to pass tyrannical laws that force people to mask their IPs. And with demand comes supply, so don't expect them to go away any time soon.

This really just sounds like a rehash of the argument against encryption. "Bad people use it, so it should go away" - never mind that there are completely legitimate uses for it. Never mind that using a residential proxy might be the only way to get any privacy at all in a future where everyone blocks VPNs and Tor, a future where you may not even be able to post online without an ID depending on where you live, a future which we're swiftly approaching.

It's already here, in fact. Imgur blocks UK users, but it also blocks VPNs and Tor. The only way somebody living in the UK can access Imgur is through a residential proxy.


> The only way somebody living in the UK can access Imgur is through a residential proxy.

And very little of value was lost.

> This really just sounds like a rehash of the argument against encryption. "Bad people use it, so it should go away" - never mind that there are completely legitimate uses for it.

Except that almost everything that uses encryption has some legitimate use. There are pretty much no legitimate uses for residential proxies, and their use in flooding the Internet with crap greatly outweighs that.

If I plumbed a 30cm sewage line straight into your living room would you be happy with it? Okay, well, tell you what, let's make it totally legit - I'll drop a tasty ripe strawberry into the stream of effluent every so often, how about that?


It's another type of proxy. Legitimate uses are the same as for other types of proxies.


What is the endgame here? Obviously "heightened security" in some kind of sense, but to what end and what mechanisms? What is the scope of the work? Is this work meant to secure forges and upstream development processes via more rigid identity verification, or package manager and userspace-level runtime restrictions like code signing? Will there be a push to integrate this work into distributions, organizations, or the kernel itself? Is hardware within the scope of this work, and to what degree?

The website itself is rather vague in its stated goals and mechanisms.


I suspect the endgame is confidential computing for distributed systems. If you are running high-value workloads like LLMs in untrusted environments, you need to verify integrity. Right now, guaranteeing that the compute context hasn't been tampered with is still very hard to orchestrate.
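The integrity-verification flow behind remote attestation can be sketched roughly like this. Real TEEs (SGX, SEV-SNP, etc.) do the measurement and signing in hardware with per-device keys; this is only a toy illustration of the shape of the protocol, with a made-up shared key:

```python
# Toy attestation flow: the host hashes ("measures") its compute context,
# signs the measurement, and a verifier checks both the signature and that
# the measurement matches a known-good value.
import hashlib
import hmac

ATTESTATION_KEY = b"device-key"  # hypothetical; real TEEs use hardware keys
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-firmware-v1").hexdigest()

def quote(context: bytes) -> tuple[str, str]:
    """Host side: measure the context and sign the measurement."""
    measurement = hashlib.sha256(context).hexdigest()
    signature = hmac.new(ATTESTATION_KEY, measurement.encode(), "sha256").hexdigest()
    return measurement, signature

def verify(measurement: str, signature: str) -> bool:
    """Verifier side: check signature authenticity and known-good measurement."""
    expected_sig = hmac.new(ATTESTATION_KEY, measurement.encode(), "sha256").hexdigest()
    return hmac.compare_digest(signature, expected_sig) and measurement == EXPECTED_MEASUREMENT

m, s = quote(b"trusted-firmware-v1")
print(verify(m, s))    # untampered context → True

m2, s2 = quote(b"tampered-firmware")
print(verify(m2, s2))  # measurement doesn't match the known-good value → False
```

The hard part isn't this check; it's the orchestration around it, e.g. establishing trust in the signing key and keeping the known-good measurement list current across a fleet.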


That endgame has so far been quite unreachable. TEE.fail is the latest in a long sequence of "whoever touches the hardware can still attack you".

https://news.ycombinator.com/item?id=45743756

https://arstechnica.com/security/2025/09/intel-and-amd-trust...


No, the endgame is that a small handful of entities or a consortium will effectively "own" Linux because they'll be the only "trusted" systems. Welcome to locked-down "Linux".

You'll be free to run your own Linux, but don't expect it to work outside of niche uses.


Personally, this is interesting to me because there needs to be a way for a hardware token providing an identity to interact with a device-and-software combination that ensures no tampering between the user who owns the identity and the end result of the computation.

A concrete example of that is electronic ballots, which is a topic I often bump heads with the rest of HN about, where a hardware identity token (an electronic ID provided by the state) can be used to participate in official ballots, while both the citizen and the state can have some assurance that there was nothing interceding between them in a malicious way.

Does that make sense?


No.


Why not? Being terse does not make one right...


Off the top of my head, because

- You're just moving your trust elsewhere, this time to a private corporation (whoever makes the CPU / TPM / other "trusted" component).

- This doesn't guarantee voter anonymity the way paper ballots do. Considering the analog hole and the complexity of computers, I can think of a billion ways a motivated and resourceful Mallory could use to connect someone to their ballot.


> This doesn't guarantee voter anonymity the way paper ballots do.

You're saying that with a lot of assurance, but in my opinion that's still up for debate. We can build something that keeps at least a degree of separation between the identity that points to a specific individual and the identity that casts the ballot.
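One classic construction for exactly that degree of separation is a Chaum-style blind signature: the state signs a ballot token without ever seeing it, so the signed token it later accepts can't be linked back to the voter who presented ID to get it. A toy sketch with deliberately tiny, insecure RSA parameters, purely to show the mechanics:

```python
# Toy RSA blind signature: the signer never sees the token it signs.
p, q = 61, 53
n = p * q                                # RSA modulus (textbook-tiny)
e = 17                                   # public exponent
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent

def blind(token: int, r: int) -> int:
    """Voter: hide the token with a random blinding factor r."""
    return (token * pow(r, e, n)) % n

def sign(blinded: int) -> int:
    """State: sign the blinded value (after checking the voter's ID once)."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Voter: strip the blinding factor to get a valid signature on the token."""
    return (blind_sig * pow(r, -1, n)) % n

token = 42      # the anonymous ballot token
r = 99          # random blinding factor, coprime to n
sig = unblind(sign(blind(token, r)), r)

print(pow(sig, e, n) == token)  # signature verifies against the raw token → True
```

The state can verify the signature when the token is spent, but it only ever saw the blinded value at signing time, so it cannot tie the ballot to the voter. (Real schemes add padding, hashing, and double-spend tracking; this omits all of that.)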



Right... we should not even try because memes...


those who don't understand the memes are doomed to be them


I'd prefer to be the butt of someone's memes rather than not try at all.

