
It's possible they were using LLMs (or even just traditional ML classifiers) to decide whether a given webpage was fraudulent/phishing rather than mere trademark infringement, though. In that case it makes sense that one would be angry that a sapient being didn't first check whether the report was accurate before sending it off.





More than the hypothetical risk of Earth being consumed by a paperclip-making machine, I believe the real and present danger in the use of ML and AI technology lies in humans making irresponsible decisions about where and how to apply these technologies.

For example, in my country, we are still dealing with the fallout from a decision made over a decade ago by the Tax Department. They used a poorly designed ML algorithm to screen applicants claiming social benefits for fraudulent activity. This led to several public inquiries and even contributed to the collapse of a government coalition. Tens of thousands of people are still suffering from being wrongly labeled as fraudulent, facing hefty fines and being forced to repay so-called fraudulent benefits.


They’re talking about Australia, and the robodebt scheme.

Read the Wikipedia article and you’ll probably feel outraged.

https://en.wikipedia.org/wiki/Robodebt_scheme


Unfortunately it seems that this kind of thinking is more widespread; in this case it was the Netherlands [0].

[0] https://news.ycombinator.com/item?id=42365837


Perhaps in certain cases requiring someone to sign off, and take the blame if anything happens, would help alleviate this problem. Much like how engineers need to sign off on construction plans.

(Layman here, obviously.)


If the legal system is not itself either fundamentally corrupted or completely razzle-dazzled by the AI hype (and I mean those as serious clauses that are at least somewhat in question), then there are going to be some very disappointed people losing a lot of money, or even going to jail, when they find out that as far as the legal system is concerned, there already is, legally speaking, some person or entity composed of persons (a corporation) responsible for these actions. It is already not actually legally possible to act like a bull in a china shop and then cover it over by pointing to your internal AI and disclaiming all responsibility.

The legal system already acts that way when the issue is in its own wheelhouse: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us... The lawyers did not escape by just chuckling in amusement, throwing up their hands, and saying "AIs! Amirite?"

The system is slow and the legal tests haven't happened yet but personally I see no reason to believe that the legal system isn't going to decide that "the AI" never does anything and that "the AI did it!" will provide absolutely zero cover for any action or liability. If anything it'll be negative as hooking an AI directly up to some action and then providing no human oversight will come to be ipso facto negligence.

I actually consider this one of the more subtle reasons this AI bubble is substantially overblown. The premise of the bubble is that AI will just replace humans wholesale: huzzah, cost savings galore! But suppose companies staff things like customer support with AIs and deploy their wildest fantasies straight into them, with no concern about humans in the loop turning whistleblower: making it literally impossible to contact humans, making it literally impossible to get solutions, and so forth. If customers then push these AIs into giving false or dangerous solutions, or into agreeing to certain bargains or whathaveyou, and the end result is that you trade lots of expensive support calls for a company-ending class-action lawsuit, then the utility of buying AI services to replace your support staff sharply goes down. Not necessarily to zero; it doesn't have to go to zero. It just makes the idea that you're going to replace your support staff with a couple dozen graphics cards a much more incremental advantage rather than a multiplicative one, while the bubble is priced as if it were hugely multiplicative.


[flagged]


The comment you are replying to is using commas correctly: it's partitioning off a clause of the sentence as a side phrase that can be removed, and the resulting sentence is a fully grammatically-correct sentence. If you really hate commas, you can replace the commas here with parentheses, but honestly, I prefer the commas here.

I agree with the parent that in formal written English, the comma would be incorrect—cf. the "In compound predicates" section in [0]. But I disagree that the sentence is hard to parse as a result, and I doubt many would think twice about it given that we're on an informal internet message board.

[0] https://www.grammarly.com/blog/punctuation-capitalization/co...


Language is descriptive, not prescriptive.

True, but I'm pretty sure this is the majority usage for commas in formal written English.

The comma is a soft pause. We do this all the time in spoken language in order to break up an otherwise potentially ambiguous or hard-to-understand utterance, but it is basically dialectal. In writing, that purposeful pause becomes a comma.

You can then analyze when such pauses are used in formal language, and infer rules for their use. But those rules aren’t going to be 100% consistent, and a violation of those rules is not a grammatical error in the sense that subject-verb disagreement would be.

TL;DR you said “majority” not “every.” That distinction is key.


Please say something like "here in [insert country name]" or "back home in [insert country name]" instead of "in my country"; otherwise we have no idea which country you are talking about.

Unfortunately I can no longer edit my original comment. It is about the so-called "Toeslagenaffaire" (childcare benefits scandal)[1] in the Netherlands.

Also, here is a blog post[2] warning about the improper use of algorithmic enforcement tools like the one that was used in this scandal.

[1] https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scand...

[2] https://eulawenforcement.com/?p=7941



