
@dang Please consider that this is an important and well-sourced article regarding the military use of AI and machine learning, and that it shouldn't disappear because some users find it upsetting.



I wrote about this here: https://news.ycombinator.com/item?id=39920732. If you take a look at that and the links there, and still have a question that isn't answered, I'd be happy to hear it.


[flagged]


That is not the case at all.

See this comment from dang:

https://news.ycombinator.com/item?id=39435024

There are more comments like this from him; you can find them using Algolia.

HN is not acting in bad faith whatsoever.

This story in particular “qualifies” as something that would interest HN readers, even taking into account the sensitivity of the subject.

I fully expect the discussion to be allowed and the flag lifted, but the HN mod team is very small and it might take a while - it quite literally always does.


Agreed. Also take into account how this and a few mirror discussions are rapidly degrading into “x are bad” political discussions, which are just not that interesting here.


People believing admins when they claim that moderation and censorship are out of their hands, the result of a faulty system they have no control over, has to take the cake for this year's distortion of reality.

Fact is, very specific topics are routinely and systematically suppressed.


Do you have any proof of this?


[flagged]


I would like to know: is the AI deciding to starve the entire population and kill aid workers?

It’s a serious question, because the article mentions how AI plays such a crucial role… but where does it end?

I know the following question sounds absurd, but they say there’s no such thing as a silly question…

Does the AI use regular power to run, or does it run on the tears and blood of combatant 3-year-old children - I mean terrorists?


> I know the following question sounds absurd, but they say there’s no such thing as a silly question…

People say a lot of things.

Some questions are ill-posed; some bake in false assumptions.

What you do _after_ you concoct a question is important. Is it worth our time answering?

> Does the AI use regular power to run, or does it run on the tears and blood of combatant 3-year-old children - I mean terrorists?

From where I sit, the question above appears to be driven mostly by rhetoric or confusion. I'm interested in reasoning, evidence, and philosophy. I don't see much of that in the question above. There are better questions, such as:

To what degree does a particular AI system have values? To what degree are these values aligned with people's values? And which people are we talking about? How are the values of many people aggregated? And how do we know with confidence that this is so?


Sorry, I was looking for answers to my questions - not more questions.


If you believe your question is worth pursuing, then do so. From where I sit, it was ill-posed at best, most likely just heated rhetoric, and maybe even pure confusion. But I was willing to spend some time in the hopes of pointing you in a better* direction.

You can burn tremendous amounts of time on poorly-framed questions, but why do that? Perhaps you don't want to answer the question, though, because you didn't ask it. You get to ask questions of us, but you won't reply to follow-up questions that push back on your point of view?

* Subjectively better, of course, from my point of view. But not just some wing-nut point of view. What I'm saying is aligned with many (if not most) deep thinkers you'll come across. In short, if you ask poorly-framed questions, you'll get nonsense out the other end.

P.S. Your profile says "I’m too honest. Deal with it." which invites reciprocity.


[flagged]


> The HN crowd is overly enthusiastic to see Jews die, if anything.

I wouldn't be so quick to say that. I would guess that 99.99999% of us at a bare minimum don't want to see any innocent people die, regardless of ethnicity, religious creed, nationality, etc. In fact, I'd wager my life savings and my company on the guess that most rational adults don't want to see innocent people die regardless of where in the world they are. HN is no different.

Israel is not 100% scot-free and innocent here, and that needs to be stressed. I don't condone Hamas's behavior at all (it's abhorrent), nor do I condone bombing a clearly-marked vehicle delivering humanitarian aid (also abhorrent).

Also, Israel != all Jewish people worldwide. You'll find some of Israel's harshest criticism comes from non-Israeli Jews.


There should be an option to turn off comments for posts like these.


HN exists for us to comment on articles. The majority of comments are from folks who didn't even read the article (and that's fine).

Turning off comments makes as much sense as just posting the heading and no link or attribution.


Well, this post is surely going to get removed because of flaming in comments, so which is better: a post with no comments, or no post at all?


> Well, this post is surely going to get removed because of flaming in comments

This is one prediction of many possible outcomes.

Independent of the probability of a negative downstream outcome:

1. It is preferable to correct the unwelcome behavior itself, not the acceptable events that merely precede it (and are not its cause). For example, we denounce the bully who punches a kid, not the kid who stood his ground.*

2. We don't want to create a self-fulfilling prophecy in the form of self-censorship.

* I'm not dogmatic on this. There are interesting situations with blurry lines. For example, consider defensive driving, where it is rational to anticipate risky behavior from other drivers and proactively guard against it, rather than waiting for an accident to happen.


Having civil conversations and aggressively banning those who can't be adults?


> so which is better: a post with no comments, or no post at all?

The false choice dilemma is dead. Long live the false choice dilemma!


The goal of that being?


[flagged]


They tend to remove posts that cause flame wars in the comments.


It’s the most important thing going on in the world. And geeks shouldn’t be thought of as people who will sit and marvel at how cool the death-machine AI Israel has developed is, the one that chooses how and when 30K children die. Geeks make this tech, profit from it, and lurk about HN, and when it comes to facing the reality of their creations they want to close the conversation down, flag comments, and evade the hurty real-world reality of it. Sad. And pathetic. I’m not saying you personally are.


> They tend to remove posts that cause flame wars in the comments.

It can be fun to consider the precise and comprehensive truth value of such statements (or, for extra fun, the very nature of "reality") using strict, set-theory-based non-binary logic.

It can also be not fun. Or sometimes even dangerous.


It's more about being able to have a civilized conversation on some topics.


Civilized conversation is limited by the emotional stability of those having it.

People have it so easy now: they've grown up and spent their entire lives in total comfort, without even the slightest hint of adversarial interaction. So when they encounter it, they overreact and panic at the slightest bit of scrutiny rather than behaving like reasonable adults.


Plenty of us are capable of having civilized conversation on these topics.

If you can't, you should be banned. The problem will work itself out over time.


I was talking in general, not about myself.

I agree with you, these conversations should be had. But unfortunately a small but committed minority can (and often will) turn the comments on sensitive topics into a toxic cesspool.


Why not just treat it as a way for undesirable guests to reveal themselves? Sounds like HN never wanted these guests and doesn't have the administrative attention to be watching all the time.



