Could you list some for the benefit of those of us who haven't seen any?


This sounds quite like non-24-hour sleep–wake disorder.

https://en.wikipedia.org/wiki/Non-24-hour_sleep%E2%80%93wake...


What a term to use about anyone, let alone people you suppose to be abuse victims. This is shameful.


Are you unaware of the meaning of quote marks? They are quoting labels that society will place on them, primarily as a consequence of puritanical thinking acting as a cover-up for abuse. What's shameful is hiding the horrors of our reality. I thought their comment was particularly poignant; it reflects the actual horror of abuse as it appears in retrospect, compared to how it was perceived at the time.

We see this countless times in our history: abusers lauded and praised, with status, titles, wealth and popular acclaim, while detractors are ignored, slandered and sidelined. Then, after the abusers die, it transpires that all those hushed whispers were true and the detractors were right all along.


The quotation marks weren't there originally. It read:

> shamed as town sluts

Though it was still clear what the writer meant. I'm surprised that someone ran with an uncharitable interpretation like they did.


I'm not. Willful (or motivated-ignorance-based) misinterpretation to create a strawman, followed by tearing it down in ways that cater to the community's biases, is dirt-common "bad behavior" in any internet comment section where contributions are scored like they are here.


Ah, thank you for the extra context. I appreciate knowing that; it's certainly an easier mistake to make without the quotes.

> I'm surprised that someone ran with an uncharitable interpretation like they did.

I am less so. Maybe I'm getting old and falling into elderly tropes, but I feel like there's a growing tendency in society for people to seek a platform to moralise while skimming the content rather than understanding it. The shortcuts that were originally just amber/red flags (e.g. the casualness of throwing out a harsh label like "slut") are starting to become the offense themselves, as opposed to the actual behaviour (the underlying cruelty) they originally hinted at.


Too late to edit, but I meant to say town "sluts". Ah well, a lesson to re-read carefully before posting.


I've edited your GP comment to say what I believe you meant, but if I got it wrong, please let us know.


You should be able to edit it now, or email us (hn@ycombinator.com) with an edit we can put in. Probably best to find a different word/phrase to use. It's upsetting to people even if you didn't mean it that way.


It made me wince as well, but I doubt that the intent was malicious.


I'm not aware of any doubt or questioning of AI being silenced on HN. Almost every thread about AI seems to be pretty loud with voices saying "bubble", "hype", "stochastic parrot".

Being impressed with the capabilities of an emerging technology doesn't make the person a religious zealot, and dismissing every proponent of LLMs and AI generally that way is weak and tribal.

I really don't understand how this topic has become such a badge of identity.


The way I see it, the identity at stake for most people engaged in the debate here is our programming ability, and naturally, most of us believe we're above average at it. Some people have found AI to be useful; others have not. In calling LLMs mere stochastic parrots, the implication is that anyone who finds them useful is an idiot and a bad programmer. Conversely, people see those who call them stochastic parrots as bad programmers who are full of themselves and too stupid to recognise a useful tool because they're too blinded by... whatever. When you have two tribes trying to exchange comments with that underlying implication on both sides, there's not really room for productive discussion.

There's a third group of people who see these things as tools and want to learn about them: how to use them well, and where their shortcomings lie. But as usual, the reasonable voices get drowned out by loud rants and diatribes.


Just look at this submission. It's highly critical of AI and made it algorithmically to the HN front page, but was censored within minutes by our HN moderators and downranked into oblivion. This happens regularly with articles that are highly critical of AI. The moderators have the final say here. HN is not a fully egalitarian platform.


And now "flagged", because apparently, an informed interview with a 19-years veteran who worked on AI is problematic (whereas Sam Altman's or Elon Musk's PR and bullshitting never is).


This is correct. The idea is that money spent by the wealthy enjoying themselves or making themselves wealthier through investments eventually reaches the poorest - in some form.

Quite why we've persuaded ourselves we need to do this through a remote and deaf middleman is anyone's guess, when the governments we elect could simply direct money through policies we can all argue about and nudge in our own small ways.


The user minds when the app gets in the way of the document.


Why do we build HTML first websites? Because most websites serve documents, however much developers might wish they were applications.


Search results for its designer, Vincent Connare, also use it. Nice tribute.

https://www.google.com/search?q=vincent+connare


> It's also off-the-charts implausible to say that our performance on adding up substantially degrades with the introduction of irrelevant information

Didn't you ever sit an exam next to an irresistibly gorgeous girl? Or haven't you ever gone to work in the middle of a personal crisis? Or filled out a form while people were rowing in your office? Or written code with a pneumatic drill banging away outdoors?

That's the kind of irrelevant information in our working context that will often degrade human performance. Can you really argue noise in a prompt is any different?


"Intelligence" is a metaphor used to describe LLMs (, AI) used by those who have never studied intelligence.

If you had studied intelligence as a science of systems which are intelligent (i.e., animals, people, etc.), then this comparison would seem absurd to you: mendacious and designed to confound.

The desperation to find some scenario in which, at the most extreme superficial level, an intelligent agent "benchmarks like an LLM" is a pathology of thinking designed to lure the gullible into credulousness.

If an LLM is said to benchmark on arithmetic like a person doing math whilst being tortured, then the LLM cannot do math -- just as a person being tortured cannot. I cannot begin to think what this is supposed to show.

LLMs, and all statistical learners based on interpolating historical data, have a dramatic sensitivity to permutations of their inputs, such that their performance collapses. A small permutation to the input is, if we must analogise, "like torturing a person to the point their mind ceases to function". Because these learners do not have representations of the underlying problem domain fitted to the "natural, composable, general" structures of that domain, they are just fragments of text data put in a blender. You'll get performance only when that blender isn't being nudged.

The reason one needs to harm a person to the point that they are profoundly disabled and cannot think in order to get this kind of performance is that, at that point, a person cannot be said to be using their mind at all.

This is why the analogy holds in a very superficial way: because LLMs do not analogise to functioning minds; they are not minds at all.


You seem to be replying to a completely different post. You'll see I didn't once use the term 'intelligence', so the reprimand you lead with about the use of that term is pretty odd.

The ramble that follows has its curiosities, not least the compulsion you have to demean or insult your 'gullible', 'credulous' opponents, but is otherwise far from any point. The contention of yours I was replying to was your curiously absolute statement that human performance doesn't degrade with the introduction of irrelevant information. I gave you instances any of us can relate to where it definitely does degrade. Rather than dispute my point, you've allowed some kind of 'extra information' to bounce you through irrelevancies from one tangent to the next: torture, blenders, animals as systems, and so on. What you've actually done, quite beautifully, is restate my point for me.


I may not agree with you, but I appreciate your efforts to call out demeaning and absolutist language on HN. It really drags the discussion down.


So you strawmanned my claim about degradation of performance into one in which "substantial", "irrelevant" and "almost all cases" have no flexibility to circumscribe scenarios, so that I must be making a universal claim... and then you take issue with my reply?

Why would you think I'd deny that you can find scenarios in which performance substantially degrades? Would I not countenance torture? As in my reply?

My reply is against your presumption that an appropriate response to the spirit and plain meaning of my argument is to "go and find another scenario". It is this presumption that, when addressed, short-circuits the scenario-finding dialogue: in my reply I address the whole families of scenarios you are appealing to, where we fail to function well, and show why their existence remains irrelevant to our analysis of LLMs.


What do you imagine the purpose of these models' development is if not to rival or exceed human capabilities?

