Hacker News | hackinthebochs's comments

Kids as in under 18 teenagers? Yeah sure, why not?

Because parasocial relationships with ewhores aren't healthy, particularly at a stage in their lives when they should be forming real relationships with their peers.

Scrolling through attractive women (generally the thirst-traps are women) doesn't imply forming a parasocial relationship. I agree that parasocial relationships are bad, but this is independent of them being thirst-traps. Internet thirst-traps are just the modern equivalent of sneaking a look at a playboy mag or a lingerie catalogue. Nothing inherently damaging about it. The scale of modern social media can make otherwise innocuous stimuli damaging, but this is also independent of it being content of sexy women.

“Nothing inherently damaging about it.”

I hear this claim from the pornsick but I’d like to see all the studies backing it up.


It's important to distinguish women looking sexy (generally not naked) from porn. Somehow the distinctions get blurred in these discussions.

You are the one claiming there's a problem, and you are the one (presumably) demanding legal and other action to deal with that "problem". That means that any burden of proof is 1000 percent on you.

... and before you haul out crap from BYU or whatever, be aware that some of us have actually read that work and know how poor it is.


Parasocial relationships are a different topic than pornography.

Are you saying that the intersection is uniquely bad? In either case limits to content made in an effort to minimize parasocial relationships cut across very different lines than if the goal is minimizing access to porn.


Parasocial relationships and getting sucked in by thirst traps on social media are inseparable.

I have a dumb question, but how do ewhores capitalize on this? Do they have teens running captcha farms or something?

They farm simps.

Can you explain how that's profitable when these people don't even have jobs? I believe you, I just don't understand how it works.

The people get shown advertising, and the advertisers are the ones paying money.



These people come out of the woodwork, when it comes to defending porn. It’s their whole identity. And unfortunately the tech scene is infested with these types.

It's goalpost shifting. If the concern is parasocial relationships with content creators formed with pornography as the hook, then pornographic content where the actors aren't cultivating or interacting with a social media followerbase should be better, right?

Do I care when both are dangerously stupid to hook kids on?

Then support them. Too often you show up to scream "think of the children" without actually citing any research or empirical damage. If you refuse to argue in good faith and don't want to be told you're wrong, voting is the only thing you're capable of doing. Don't tell us about it, vote.

Everyone knows those laws do nothing, though; go look at the countries that pass them. Kids share pornography P2P, they burn porno to CDs and VHS tapes and bring in pornographic magazines to school. They AirDrop pornographic contents to their friends and visit websites with pornography on them too. Worst-case scenario, they create a secondary market for illegal pornography because they'll be punished regardless - which quickly becomes a vehicle for creating CSAM and other truly reprehensible materials.

They don't do it because they're misogynistic, mentally vulnerable or lack perspective - they do it because they're horny. Every kid who aspires to be an adult inherently exists on a collision course with sexual autonomy, most people realize it during puberty. If you frustrate their process of interacting with adulthood, you get socially stunted creeps who can't respond to adult concepts.


I skimmed the paper; it has a lot of issues. First, it doesn't attempt to frame the theory in our current best understanding of the problem of consciousness. It doesn't say up front if it's attempting to explain qualia or just to give a new understanding of the functional aspects of consciousness.

The next issue is that it doesn't do much explaining. If it is attempting to explain qualia, it needs to explain how the functional descriptions on offer help in explaining why there is a qualitative feel associated with conscious states. If it's not attempting to explain qualia, then it needs to clearly identify the functional problem it is proposing to solve, then explain how the theory solves it. Many homegrown theories mistake description for explanation. Just giving existing functions a new name in the guise of a new framework doesn't explain anything. A reframing can be useful, but it should be made explicit that the theory is a reframing rather than an explanation, and what benefits this framing gives to solving various problems related to consciousness.

Another issue is that it spends too much time talking about implications and not enough time just communicating the core ideas. Each major section has like a paragraph or two. This isn't enough for a proper introduction to the section, let alone a sufficient description of the theory.


Back to qualia: in my opinion, and your mileage obviously varies, it’s not even a wild goose chase — it’s more like The Hunting of the Snark.

Consciousness isn’t just a spotlight, it’s the forced arbitration of billions of cellular demands. Each of our ~40 trillion cells has a survival stake and pushes its signals upward until the mind must take notice. That’s why certain experiences intrude on us whether we like it or not: grief that overwhelms reason, sexual arousal that derails attention, the impossibility of not laughing at an inappropriate moment, or the heat of embarrassment that turns thought itself into a hostage.

In that sense, qualia aren’t mystical paint on top of neural function — they’re the felt residue of our cells voting, insisting their needs be weighed in the conscious workspace. The Predictive Timeline Simulation framework is my attempt to make that arbitration explicit — testable in neuroscience, relevant to psychiatry, and useful for AI models.

Perhaps read the paper instead of skimming or running it through an AI. I believe that your complete understanding would either sharpen your criticisms or perhaps improve the paper.


> "A reframing can be useful, but it should be made explicit that the theory is a reframing rather than an explanation, and what benefits this framing gives"

Well said, and the rest. Thanks.


Fair critique — and I’ll own that the paper emphasizes reframing more than exhaustive exposition. To be precise:

• I am not claiming to solve the Hard Problem of qualia. I position qualia as an evolved data format, a functional necessity for navigating a deterministic universe — not as metaphysical mystery.

• What the paper does aim to explain is the predictive, timeline-simulating function of consciousness, and how errors in this function (e.g. Simulation Misfiling) may map to psychiatric conditions.

• The “implications” section is deliberately forward-looking, but I agree the exposition could be expanded. That’s the next step — this is a framework, not the final word.

If nothing else, I hope the paper makes explicit that reframing consciousness as a predictive timeline simulator is testable, bridges physics + neuroscience, and invites experiments rather than mysticism.


We need a name for the phenomenon where a popularization of a term is more narrow than its original usage and then people who only encountered the popularized word insist that the narrow application is its only meaning.

'ignorance'?

Source code doesn't inherently contain the "why" of the operations. Code itself is an engineering artifact, so recovering the why is a kind of reverse engineering.

You guys are being obtuse. Engineering is turning a spec into a more technical artifact, whether that's source code, machine code, physical hardware or something else. Reverse engineering is then reversing the process of engineering, recovering the semantic artifact from the engineering artifact. That the OP is using the term in the sense of recovering the semantic insights from the CUDA kernels is a fine application of the concept.

You're right. I've been comparing it to 1984 all this time, when in fact it literally is 1984 just with modern technology. It's interesting how the story 1984 strikes a chord in many people, but something like Chat Control just seems normal. I guess having a camera in your house feels more invasive on a visceral level, despite the fact that we're now putting our whole lives on our phones and online services.


>And we pulled ourselves out of this nonsense by the bootstraps.

Human progress was promoted by having to interact with a physical world that anchored our ramblings and gave us a reward function for coherence and cooperation. LLMs would need some analogous anchoring for it to progress beyond incoherent babble.


True, but LLMs got anchored to reality because we are using them in real world tasks and this connection will only grow richer, wider and faster.


When it comes to logical reasoning, the difficulty isn't about having enough new information, but about ensuring the LLMs capture the right information. The problem LLMs have with learning logical reasoning from standard training is that they learn spurious relationships between the context and the next token, undermining their ability to learn fully general logical reasoning. Synthetic data helps because spurious associations are undermined by the randomness inherent in the synthetic data, forcing the model to find the right generic reasoning steps.
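As a toy sketch of that idea (my own illustration, not from any particular paper), synthetic reasoning data can use randomized nonsense entity names so the model can't lean on memorized word co-occurrences and must pick up the logical form itself:

```python
import random
import string

def random_name(length=6):
    # Nonsense tokens prevent reliance on real-world word associations.
    return "".join(random.choices(string.ascii_lowercase, k=length))

def make_syllogism():
    # A minimal transitive-inference item: all A are B, all B are C, so all A are C.
    a, b, c = (random_name() for _ in range(3))
    premise = f"All {a} are {b}. All {b} are {c}."
    question = f"Are all {a} {c}?"
    return {"prompt": f"{premise} {question}", "answer": "Yes"}

# Each sampled example shares only its logical structure with the others,
# so the structure is the one signal that consistently predicts the answer.
dataset = [make_syllogism() for _ in range(1000)]
```

Because the surface vocabulary is random per example, any spurious token-level association washes out across the dataset, leaving the transitive-inference pattern as the only learnable regularity.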


I agree! DeepSeek has shown this is incredibly powerful. I think their Qwen 8B model may be as good as GPT4’s flagship. And I can run it on my laptop if it’s not on my lap. But the amount of synthetic data you can generate is bounded by the raw information, so I don’t think it’s an answer to the SP.


Neural precursor cells literally move themselves from where they first differentiate to their final location to ensure specific neural structures and information dynamics in the developed brain. It's not declarative memory, but it's a memory of the neural architecture etched out over evolutionary time.


I don't think that quote from Carmack represents some deeply considered conclusion. He started off his efforts with embodiment. He either never considered LLMs a path towards AGI, or thought he didn't personally have anything to contribute to LLMs (he talked about it early on in his journey but I don't remember the specifics). He didn't spend a year investigating LLMs and then decide that they weren't the path to AGI. The point is that he has no special insight regarding LLMs' relationship to AGI, and it's misleading to imply that his current effort towards building AGI that eschews LLMs is an expert opinion.


Yes, I meant to say that, for Carmack, no type of modern AI research has figured out the path to actual general intelligence. I just didn't want to use the meaningless "AI" buzzword, and these days all the focus and money is on large language models, especially when talking about the end goal of AGI.

