
>1) It's certainly not unheard of for theories to have observational or experimental data appear that sends them back to the drawing board for reworking, and to eventually get to a consistent state

Sure. But when the socially dominant theory doesn’t fit observations, it’s called “a temporary setback that calls for some reworking”, and when a heterodox theory doesn’t fit observations, it’s called “falling flat on its face”, as you can see in another reply below. That’s not a healthy dynamic.

> There's not some shadowy cabal of cosmologists doing everything in their power to keep the cult of dark matter alive.

No… but curiously, you will get your comment flagged and removed on HN for making such a claim!


>and when a heterodox theory doesn’t fit observations, it’s called “falling flat on its face”, as you can see in another reply below. That’s not a healthy dynamic.

Because none of them get even close to explaining as much as dark matter does. This isn't complicated or a radical shift in standards - it's just requiring that something be as good as the existing answer to get serious discussion. Pointing out that dark matter isn't perfect isn't an argument for things that are significantly less perfect than dark matter. There are massive gaps between dark matter and the alternative theories. Something that worked as well as dark matter and only struggled with a similar number of outliers wouldn't be said to fall flat on its face - but nothing is even in the same ballpark.

The more that can be explained by an existing theory, the higher the bar is for any alternative theory to displace it. This is just how science has always worked.

>No… but curiously, you will get your comment flagged and removed on HN for making such a claim!

Because conspiracy theories with no evidence or grounding in reality don't make for intellectually stimulating discussion, I imagine.


Ah, yes, of course HN is in bed with Big Cosmology.


There are some specialized terms here that you're unlikely to encounter outside of an academic philosophy paper, but there's nothing complex about the meanings of any of the individual terms. Once you know what the words mean, it all makes sense.

>eliminativist

Eliminativist claims in philosophy are claims that deny the existence of some class of entities. You can be eliminativist about all sorts of things - numbers, objective morals, countries, tables and chairs, etc.

>qualia

First-person conscious experiences. Pain is a quale. The way the color blue looks, as opposed to, say, the color red or green, is a quale. The sensation of hot or cold is a quale.

When someone stubs their toe and says "ow", you can infer that they're in pain based on their behavior and your knowledge of how pain works, but you can't actually feel or directly observe their pain. That's the "first-person" part.

>phenomenal consciousness

A synonym for "qualia". Some philosophers came to feel that the word "qualia" carried too much historical baggage, so they coined a new term.

>introspective illusion

Exactly what it says on the tin. An illusion (meaning, an impression that something is real, when it is in fact not) generated by introspection.

So, putting it all together:

>illusionism

Illusionism about consciousness is the thesis that phenomenal consciousness is not real. So, to give a specific example, an illusionist would be committed to the thesis that pain is not real. As a corollary, no one has ever felt pain before, because there is no such thing as pain. People have been under the illusion that they feel pain, but they actually don't.


> First-person conscious experiences. Pain is a quale. The way the color blue looks, as opposed to, say, the color red or green, is a quale. The sensation of hot or cold is a quale. When someone stubs their toe and says "ow", you can infer that they're in pain based on their behavior and your knowledge of how pain works, but you can't actually feel or directly observe their pain. That's the "first-person" part.

So cool! I’ve always felt there was something really interesting about the idea that someone might internalize the color blue as I see the color red. I know we can define the colors mathematically, but I never knew the term for that subjective interpretive difference—qualia.


>I’ve always felt there was something really interesting about the idea that someone might internalize the color blue as I see the color red.

Yes! In fact, philosophers have spent a lot of time thinking about this exact problem:

https://plato.stanford.edu/entries/qualia-inverted/

https://plato.stanford.edu/entries/qualia-knowledge/


It's a fascinating rabbit hole; here's one jumping-in point...

https://plato.stanford.edu/entries/qualia/


And if two people agree a wall is blocking their path does that elevate the sensation of wall from a quale into a reality?

I know that some members of this community (the “we live in a simulation”-ists) would posit that one person sensing the presence of another is as fabricated as the color “red”!

(Or have I misinterpreted my end of the stick?)


Same as two people blocked from continuing down the hallway of a house they agree feels haunted?


In philosophy there is also the saying:

The modus ponens of one side is the modus tollens of the other side.

Meaning that when one side in philosophy says: from A (their body of arguments) follows B, and A holds, thus B must hold. Inferring B from A => B and A is called modus ponens.

Then the other side will say: from A follows B, but B clearly does not hold, thus A does not (or cannot) hold. Inferring ~A from A => B and ~B is called modus tollens.
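
Spelled out as inference rules, both are one-liners in a proof assistant. A minimal sketch in Lean (the theorem names are mine; the saying itself is informal):

    -- Modus ponens: from A → B and A, conclude B.
    theorem modus_ponens {A B : Prop} (hab : A → B) (ha : A) : B :=
      hab ha

    -- Modus tollens: from A → B and ¬B, conclude ¬A.
    theorem modus_tollens {A B : Prop} (hab : A → B) (hnb : ¬B) : ¬A :=
      fun ha => hnb (hab ha)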

Just wanted to add this here because, in my experience, that's how discussions of topics this close to one's sense of self tend to unfold.


Nice breakdown/demystification, thanks!


Every field has jargon and specialized terminology. Do you expect to be able to read any random physics or math paper outside your area of expertise and understand every word?


There are… many people who think that cities are worse off because of cars. Maybe not for the same reasons, but still.


I'm one of them. Taxpayers generally subsidise roads, and as you might expect, that means we have far too many of them.


You know that ChatGPT writing is still obvious even when you ask it to change its style, right?


Please provide evidence for this claim.


Take a look at the user's comment history. I'd argue the majority of the comments are generated - it really doesn't take much to recognize the patterns and tics those models commonly display (for now), and it doesn't seem there was any attempt to make them appear otherwise, either. They may look coherent at first, but there's no deeper meaning or significance; it's noise.


So no evidence. Got it.


Aliens are not woo. Life is a natural phenomenon that is very clearly possible within the known laws of physics. We know life can naturally occur in the universe, because it happened here. Why not somewhere else, too?


Nothing forgotten about him. His work is foundational to modern algebraic geometry and there’s no mathematician who doesn’t know who he is.


Right? It's baffling.


The title is specifically referring to a possible reassessment of his later, quasi-mystical writings, post-retirement from mathematics in 1970.

It's a bit too long for an HN title submission, but the actual article title in the Guardian is:

“'He was in mystic delirium': was this hermit mathematician a forgotten genius whose ideas could transform AI – or a lonely madman?”


Why not both? Crazy people can be smart. As for "mystic delirium", well, that's our modern sensibilities talking. Hell, there are people today who believe in Marian apparitions.


Wasn't Newton a mystic too? Yes, he was [1].

[1] https://en.wikipedia.org/wiki/Isaac_Newton%27s_occult_studie...


He left IHÉS around 1970, but he stayed active in math, working as a professor and later at the CNRS (the French national research institute).

https://en.wikipedia.org/wiki/Alexander_Grothendieck#Manuscr...


ChatGPT comment?


It is, and I'm curious what dang and HN's plan is wrt this issue going forward. On one hand, "assume good faith" has been a core tenet of this community. At the same time, LLM-generated walls of text aren't good faith, and they're not going to get less common from here on out.

I'm also surprised by how many human replies these comments get from people seemingly unaware of what they're responding to. Given that this is HN, and given how long it's been since the release of GPT-3, I would have thought a larger percentage of readers would notice.


>> ChatGPT comment?

> It is

What? Huh? How did you determine this?


Very obvious ChatGPT style and structure. Here's another one of his comments copy/pasted from ChatGPT. Many others have called him out on this. He is a pathological liar.

https://news.ycombinator.com/item?id=41274200 and then his later reply in the same thread, also written by ChatGPT https://news.ycombinator.com/item?id=41277573


It's truly a Rorschach test of sorts. I agree with you that there isn't enough information to say, but reading through the comment history of the commenter in question does not make it seem more likely that they are GPT. Reminds me of Fallout 4, with everyone suspecting each other of being synths.


On the contrary, the comment history makes it very clear.

Pages and pages of relatively short comments, not a single one written in a remotely LLM-reminiscent style. Then, within a very short period, multiple very long comments in exactly the default style that GPT writes in.

The chances of someone waking up one day and entirely changing their writing style might as well be zero; I've never seen it. It would be a gradual process, if anything.

I read HN every day, and I think this is only the second time I've come across clearly generated content; if over-suspicion were the issue, that would happen much more frequently. On Reddit it's already more common, and I've had multiple people admit to it when I pointed it out, asking "How did you know?".

It does help that I've spent the last 1.5 years building LLM-based products every day.


Is the redundancy giving you the hint it's GPT? I would love to know what it is that has convinced you but that you seemingly cannot explain.


A few weeks ago I had a eureka moment to describe it: GPT writes just like a non-native speaker who has spent the last month at a cram school purely aimed at acing the writing part of the TOEFL/IELTS test to study abroad. There, they absolutely cram repeatable patterns, which are easy to remember, score well and can be used in a variety of situations. Those patterns are not even unnatural - at times, native speakers do indeed use them too.

The problem is dosage. GPT and cram-school students use such patterns in the majority of their sentences; fluent humans only use them once in a while - their temperature is much higher! English is a huge language grammatically, super dynamic - there's a massive variety of sentence structures to choose from. But by default, an LLM just picks whichever one is most likely given the dataset it was trained on (and RLHF, etc.) - that's the whole idea. In real life, everyone's dataset and feedback are different; my most likely grammar pattern is not yours. Yet with LLMs, by default, it's always the same.
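
(For anyone unfamiliar with the term: "temperature" rescales a model's output probabilities before sampling. A toy sketch in Python - the distribution and numbers here are made up purely to show the effect:)

    import math
    import random

    def sample(probs: dict[str, float], temperature: float) -> str:
        """Sample one option after temperature-scaling its probability."""
        # p^(1/T): T < 1 sharpens the distribution, T > 1 flattens it.
        scaled = {k: math.exp(math.log(p) / temperature) for k, p in probs.items()}
        total = sum(scaled.values())
        r = random.uniform(0, total)
        for option, weight in scaled.items():
            r -= weight
            if r <= 0:
                return option
        return option  # floating-point fallback

    # Made-up distribution over sentence openers.
    openers = {"However, it's crucial": 0.6, "Funnily enough": 0.3, "Picture this": 0.1}
    print(sample(openers, 0.2))  # low temperature: nearly always the stock opener
    print(sample(openers, 1.5))  # high temperature: far more variety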

It also makes perfect sense in a different way: at this point in time, LLMs are still largely developed to beat very simplistic benchmarks using their "default" output. And English-language exams are super similar to those benchmarks; I wouldn't be surprised if they were actually already included. So the optimal strategy for doing well on those without actually understanding what's going on, but pretending to, ends up being the same - just in this case it's LLMs pretending instead of students.

I should probably write a blog post about this at some point. Some might be curious: Does this mean that it's not possible to make LLMs write in a natural way? No, it's already very possible, and it doesn't take too much effort to make it do so. I'm currently developing a pico-SaaS that does just that, inspired by seeing these comments on Reddit, and now HN. Don't worry, I absolutely won't be offering API access and will be limiting usage to ensure it's only usable for humans, so no contributing to robotic AI spam from me.

I'd give you concrete examples, but in the comment in question literally every single sentence is a good example. After the second sentence, the deal is sealed.

There are other strong indicators besides the structure - phrasings, cadence, sentence lengths, and just the content in general - but you don't even really need those. If you don't see it, instead of reading the comment as a paragraph, split it up and put each sentence on a new line. If you still don't see it, you could try searching for something like "English writing test essay template".

I remember there were "leaks" out of OpenAI that they had an LLM detector which was 99.9% accurate but that they didn't want to release. No idea about the veracity, but I very much believe it, though it's 100% going to be limited to people using very basic prompts like "write me a comment/essay/post about ___". I'm pretty sure I could build one of these myself quite easily, but it'll be pointless very soon anyway, as LLM UIs improve and LLM platforms start providing "natural" personas as the norm.


> I'd give you concrete examples, but in the comment in question literally every single sentence is a good example. Literally after the second sentence, the deal is sealed.

I dunno. I believe you see that in it, but to me it just reads like any other Internet comment. Nothing at all stands out about it, to me. Hence my surprise at the strong assertions by two separate commenters.


Genuinely fascinating! I'd show you the instances on Reddit of similar comments where people admitted it after I pointed it out, but unfortunately I don't really want to link my accounts.

You're also free to confirm in my HN history that in hundreds of comments (and tens of thousands read), this is the only time I've pointed it out. I did cross-check their profile to confirm it, just in case it was a false positive - I don't want to accuse anyone unless I'm 100% sure, because it's technically possible that someone simply has the exact same writing style as the default ChatGPT assistant.

Here's the entire comment, dissected to make the structure and patterns clearer.

> As a __, __.

> However, it's crucial that ___.

> While __, ___ is risky.

> ___ highlight the importance of __ in __.

> We need to ___ to __.

Any single one of these sentences on its own wouldn't be enough. It's the combination - the dosage that I mentioned.

If you're interested in hard data exploring this phenomenon (albeit for an older version of GPT), here's an article [1]. In a year or so, if you run the same analysis on "However, it's crucial that", you'll find the same trend the article showed for "a complex and multifaceted concept". Maybe the author would be open to sharing the code, or to rerunning the experiment.

[1] https://blog.j11y.io/2023-11-22_multifaceted/
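
If you want to play with this yourself, the naive version of that analysis is just sentence splitting plus counting stock patterns. A rough sketch in Python - the phrase list and regexes are my own illustrations, not the article's method:

    import re

    # A few stock openers of the kind dissected above (my own illustrative
    # list, not an exhaustive or validated one).
    STOCK_PATTERNS = [
        r"^However, it's crucial",
        r"^As an? \w+,",
        r"\bhighlights? the importance of\b",
        r"\bcomplex and multifaceted\b",
    ]

    def stock_phrase_rate(text: str) -> float:
        """Fraction of sentences matching at least one stock pattern."""
        # Crude splitter: break on ., ! or ? followed by whitespace.
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        if not sentences:
            return 0.0
        hits = sum(
            1 for s in sentences
            if any(re.search(p, s, re.IGNORECASE) for p in STOCK_PATTERNS)
        )
        return hits / len(sentences)

    comment = "As a developer, I see this often. However, it's crucial that we test."
    print(stock_phrase_rate(comment))  # -> 1.0: every sentence is a stock pattern

Any single match means nothing; it's a high rate across a whole comment - the dosage - that's the tell.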


As you said earlier:

> It does help that I've spent the last 1.5 years building LLM-based products every day.

it could be an exposure thing. I don't interface with this AI stuff at all, much less every day. So I'm not going to pick up on patterns like that.


I've used ChatGPT extensively and this stuff is extremely obvious after you have read literally thousands of ChatGPT responses. Immediately recognized it and called him out. Boilerplate AI template response. ChatGPT has a very distinctive way of answering questions.


Thank you for the lesson!


Video is not the standard medium of communication in academic philosophy. I imagine the GP mentioned YouTube because most people are more likely to watch a video than read a paper.

Bernardo Kastrup has a bunch of essays/books up for free at his website https://www.bernardokastrup.com/p/papers.html?m=1


Or the GP himself watches these videos. And I would push back on the claim that most posters here are more likely to watch a YouTube video than read an article.


I was thinking the same thing. I can't stand how slow video is; it's much easier to read text.


To be clear, I don’t sit there for hours watching videos on YouTube (I have a busy career and a family so that’s not an option these days).

I do consume a lot of YouTube content as audio-only when driving or exercising.

I find video particularly satisfying for this topic and these figures, because much of the most valuable insight emerges through discussion and debate.

I’ve read Kastrup’s book “Why Materialism is Baloney” and found it very satisfying - but I was already amenable to his position; I don’t imagine it would be persuasive to an entrenched skeptic.


Biology isn’t magic, but it does do a heck of a lot of amazing things that we don’t understand yet.

We haven’t even been able to reproduce abiogenesis.


Any sufficiently evolved biological process is indistinguishable from magic.

