
And who decides when that "should" condition is met? Over the past few years, I've seen far too many activists react to algorithms that notice true but politically inconvenient things by trying to shut down the algorithms, to pull wool over our eyes, to continue the illusion that things are other than what they are. Why should we keep doing that?

I have zero faith in the ability of activists or states to decide when it's safe to deploy some new technology. Only the people can decide that.




> to algorithms that notice true but politically inconvenient

I don't know that I agree that there exists an algorithm that can determine what is factually true, so I'm not sure I agree that an algorithm can "notice a true thing".

Do you have an example of when an algorithm noticed something that was objectively true but was shut down? Can you explain how the algorithm took notice of the fact that was objectively true (in such a way that all parties agree with the truth of the fact)?

I can't think of a single example of an algorithm determining or taking notice of an objective fact that was rejected in this way. But there are lots of controversies I'm not aware of, so it could have slipped by me.


For example gender stereotyping for jobs or personal traits, that is politically incorrect but nevertheless reflects the corpus of training data. (He is smart. She is beautiful. He is a doctor. She is a homemaker.)


> nevertheless reflects the corpus of training data

I don't think "reflects the corpus of training data" gets entails "is therefore an objectively true fact". In fact, I think a lot of people that complain about AI gender stereotyping for jobs or personal traits would _exactly describe the problem_ as AI is "reflecting the corpus of training data".

I don't think anyone disagrees that the AI/ML learns things reflected in the corpus of training data.

My request was for an example where the AI/ML reflects an "objective truth" about reality and people then object to that output. "The AI/ML is reflecting an objective truth about the training data" doesn't satisfy that request, because it falls short of demonstrating that the training data was an accurate and objective reflection of reality.


"Handsome lady/woman/girl" vs "Beautiful gentleman/man/boy". Try in Google Search.

We're just using different adjectives based on gender.


I think you're assuming that if it's in the data, it's "factually true" as the OP puts it. It doesn't work that way. There is such a thing as sampling error, for example.


Decisions like these need to be made slowly and societally and over time.

Tension between small-c conservatism that resists change and innovators who push for it before the results can be known is very important!

No one person or group needs to or will decide. Definitely not states. "Activists" both in favor of and opposed to changes will be part of it. For the last few decades in tech, the conservative impulse has been mostly missing (at least in terms of the application of technology to our society lol) and look where we are: a techno-dystopian, greed-powered corporate surveillance state.

We're not going to vote on it. Arguments like the one happening in this comments section _are_ the process, for better or worse.


We also don't have to make the same decision for all use of AI.

For example, we should be much more cautious about using AI to decide "who should get pulled over for a traffic stop" or "how long a sentence should someone get after a conviction". Many government uses of AI are deeply concerning and absolutely should move more slowly. And government uses of AI should absolutely be a society-level decision.

For uses of AI that select between people (e.g. hiring mechanisms), even outside of government applications, we already have regulations in that area regarding discrimination. We don't need anything new there; we just need to make it explicitly clear that using an opaque AI does not absolve you of non-discrimination regulations.

To pick a random example, if you used AI to determine "which service phone calls should we answer quicker", and the net effect of that AI is systematically longer or shorter hold times that correlate with a protected class, that's absolutely a problem that should be handled by existing non-discrimination regulations, just as if you had an in-person queue and systematically waved members of one group to the front.
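To make that concrete, here's a minimal sketch of the kind of audit that could catch it. Everything in it is hypothetical (the call log, the group labels, the numbers); the 0.8 threshold is just borrowed from the four-fifths rule commonly used in disparate-impact analysis, not anything prescribed here:

    # Minimal sketch (hypothetical data): compare average hold times across
    # groups and flag a disparity, loosely following the four-fifths rule.
    from statistics import mean

    # Each record: (group_label, hold_seconds) -- illustrative values only.
    calls = [
        ("group_a", 45), ("group_a", 60), ("group_a", 50),
        ("group_b", 95), ("group_b", 120), ("group_b", 80),
    ]

    by_group = {}
    for group, seconds in calls:
        by_group.setdefault(group, []).append(seconds)

    averages = {g: mean(v) for g, v in by_group.items()}
    best = min(averages.values())   # shortest average hold time
    for group, avg in averages.items():
        ratio = best / avg          # 1.0 means parity with the best-served group
        flag = "FLAG" if ratio < 0.8 else "ok"
        print(f"{group}: avg hold {avg:.0f}s, parity ratio {ratio:.2f} [{flag}]")

The point isn't the specific metric; it's that the audit looks at outcomes by group, regardless of whether the model ever saw the protected attribute directly.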

We don't need to be nearly as cautious about AIs doing more innocuous things, where consequences and stakes are much lower, and where a protected class isn't involved. And in particular, non-government uses of AI shouldn't necessarily be society-level decisions. If you don't like how one product or service uses AI, you can use a different one. You don't have that choice when it comes to hiring mechanisms, or interactions with government services or officials.

Reading the article, it sounds like many of the proposals under consideration are consistent with that: they're looking closely at potentially problematic uses of AI, not restricting usage of AI in general.


Not 100% sure what the parent is talking about, but my first thought is the predictive policing algorithms used in some jurisdictions to set bail and make parole decisions. My hazy understanding of the controversy is that these algorithms have "correctly" deduced that people of color are more likely to reoffend, thus they set bail higher or refuse release on parole disproportionately. At one fairly low level this algorithm has noticed something "true but politically inconvenient", but at a higher level, it is completely blind to the larger societal context and the structural racism that contributes to the racial makeup of convicted criminals. I'd argue that calling this simply "true" is neglecting a lot of important discussion.

Of course, perhaps the parent is referring to something else. I'd also like to see some examples.


> completely blind to the larger societal context and the structural racism that contributes to the racial makeup of convicted criminals

What happens if you and others have conflicting views on the "larger societal context"? Who wins? Obviously you, because you're right?

AI has become a political football now, and everyone with an issue finds an AI angle. In all of this, few are actually interested in AI itself.


> true but politically inconvenient

Wanna give some examples or are you just dog-whistling?





