esailija's comments

Because it doesn't understand or have intelligence. It just knows correlations, which is unfortunately very good for fooling people. If there is anything else in there, it's because it was explicitly programmed in, like 1960s AI.
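As a toy illustration of what "just knows correlations" means, here is a minimal bigram sketch: it predicts the next word purely from co-occurrence counts. All names are hypothetical, and a real LLM is of course vastly more sophisticated than this.

    // Toy bigram "model": pure correlation counting, no semantics anywhere.
    function trainBigrams(corpus: string[]): Map<string, Map<string, number>> {
      const counts = new Map<string, Map<string, number>>();
      for (let i = 0; i < corpus.length - 1; i++) {
        const cur = corpus[i];
        const next = corpus[i + 1];
        if (!counts.has(cur)) counts.set(cur, new Map());
        const followers = counts.get(cur)!;
        followers.set(next, (followers.get(next) ?? 0) + 1);
      }
      return counts;
    }

    // Predict the most frequent follower: correlation, not understanding.
    function predictNext(counts: Map<string, Map<string, number>>, word: string): string | undefined {
      const followers = counts.get(word);
      if (!followers) return undefined;
      return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
    }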


I disagree. AI in the 1960s relied on expert systems, where each fact and rule was hand-coded by humans. As far as I know, LLMs learn on their own from vast bodies of text. There is some level of supervision, but it is not 1960s AI. That is also the reason we get hallucinations.

Expert systems are more accurate, as they rely on first-order logic.
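To make the contrast concrete, here is a sketch of 1960s-style hand-coded rules (a propositional simplification of first-order rules; the facts, rule names, and function are invented for illustration, not taken from any real system):

    // Sketch of a 1960s-style expert system: every fact and rule hand-coded.
    type Fact = string;
    interface Rule { if: Fact[]; then: Fact; }

    const rules: Rule[] = [
      { if: ["has-feathers", "lays-eggs"], then: "is-bird" },
      { if: ["is-bird", "cannot-fly"], then: "is-flightless-bird" },
    ];

    // Naive forward chaining: apply rules until no new facts can be derived.
    function infer(facts: Set<Fact>): Set<Fact> {
      let changed = true;
      while (changed) {
        changed = false;
        for (const rule of rules) {
          if (rule.if.every(f => facts.has(f)) && !facts.has(rule.then)) {
            facts.add(rule.then);
            changed = true;
          }
        }
      }
      return facts;
    }

Such a system can only derive what its authors wrote down, which is why it doesn't hallucinate, and also why it never learns anything on its own.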


Actually, the belief that you're good or "doing altruism" is exactly the kind of thing that would cause a person to do such evil things.

If we take the Bible metaphorically, it's pretty smart: believing you are an inherently unworthy sinner will actually encourage you to do good things, whereas believing the opposite just causes people to do evil things and virtue-signal.

Basically "But we're the good guys" <- that's how you know who is truly evil.


In reality, having even several "deal breakers" means you would need infinite options to find a match without any of them. Most guys don't get to pick from infinite options, or even tens of options. So you either settle or stay single.


I get that, but I'm just pretty all-or-nothing. I wandered through the desert for a decade until I found mine. I was 100% willing to stay single for the rest of my life rather than start a family with someone who wasn't a good fit for one. Luckily I didn't have to. I understand that's not an option for many, but honestly I think most men could do better than they think they can, and I don't think enough men have a clear set of behavioral standards, enough foresight, or enough self-respect.


Because "settling" is certainly a well-documented path to successful long-term relationships...


JITting alone will just make code slower. Even with many optimizations, V8 eventually replaced its "unoptimizing JIT" with an interpreter, and it now only JITs hot functions.
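A minimal sketch of that tiering idea, assuming a simple invocation counter: everything runs in the interpreter, and only functions that prove themselves hot are handed to the optimizing compiler. The threshold and names below are invented for illustration and are not V8's actual design.

    // Hypothetical tiering sketch: interpret everything, and only hand a
    // function to the optimizing JIT once it proves itself "hot".
    const HOT_THRESHOLD = 1000; // invented number, not V8's real heuristic
    const callCounts = new Map<string, number>();
    const optimized = new Set<string>();

    function onCall(fnName: string): void {
      const n = (callCounts.get(fnName) ?? 0) + 1;
      callCounts.set(fnName, n);
      if (n >= HOT_THRESHOLD && !optimized.has(fnName)) {
        optimized.add(fnName);
        // compileWithOptimizations(fnName); // hypothetical call: compilation
        // is expensive, so it only pays off for code that runs many times
      }
    }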


More extreme climate events than today's happened before a single molecule of CO2 was emitted by human machines.

You could move the goalposts again and claim that we shouldn't expect completely unprecedented extreme events from climate change, just that somewhat extreme events are supposedly becoming more common than before. Even if that were true, it doesn't matter, because it's neither convincing nor undeniable.


They said it was for an active lifestyle.


Yeah, so? There are plenty of fat-adapted athletes, and "active" people don't come anywhere close to the needs of a real athlete.


> Why can't a "purely statistical" process have an understanding?

Once you understand long division, you can do it on infinitely many numbers without ever having seen those specific numbers. You can get this full understanding from just a handful of examples; no need for terabytes of them. (A sketch of the procedure follows at the end of this comment.)

No matter how many examples of long division you feed to a statistical model like GPT, there will always be infinitely many numbers you can give it for which it will produce the wrong answer*, unless you cheat and actually hard-code the understanding into the model.

If it cannot understand long division from just a few examples, it can never understand it. The very reason it needs ridiculous amounts of data is precisely that it cannot understand. If you think it understands, you simply aren't trying very hard to confirm otherwise.

* In a way that reveals there is no understanding of long division; obviously a human would also give a wrong answer after being awake for 100 hours writing numbers on paper.
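Here is the promised sketch, assuming nothing beyond grade-school long division: the same few steps work on a decimal string of any length, including numbers nobody has ever written down before. The helper name is hypothetical.

    // Long division on a decimal string of arbitrary length, digit by digit.
    // The divisor is assumed to fit in a regular number.
    function longDivide(dividend: string, divisor: number): { quotient: string; remainder: number } {
      let remainder = 0;
      let quotient = "";
      for (const ch of dividend) {
        const current = remainder * 10 + Number(ch);
        quotient += Math.floor(current / divisor).toString();
        remainder = current % divisor;
      }
      quotient = quotient.replace(/^0+(?=.)/, ""); // trim leading zeros
      return { quotient, remainder };
    }

    // longDivide("123456789123456789", 7) works no matter how long the input
    // is, because the procedure is understood, not memorized per example.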


There are a lot of tools that have made developers far more productive than that, and it has never resulted in a reduction of demand.


It depends on what you mean by "confirm".

You can raise the bar for confirmation so high that you cannot confirm that even you yourself are not an automaton. And you can lower the bar so that other humans are considered sentient.

But in the end, it is only an arbitrary decision of where to place the bar for confirmation.


How could we possibly ever test for "not feeling like an automaton?" The only possible test, it seems, is internal subjective experience. Even if something reported this, how would you ever verify it?


This is missing the point so badly I suspect this comment was generated by AI :D

