Hacker News | DarkWiiPlayer's comments

Oh this is a funny topic; I just found myself looking for a decent music player on Linux a month or so ago, and the situation was... disappointing.

The nicest-looking one I could find was Amberol, but that was a bit too minimalistic for me. I like minimal UIs, but that doesn't have to translate to minimal feature sets as well.

But I didn't find any simple but hackable players that I liked, so in the end I just settled on Audacious, because it's just simple enough in terms of UI and good enough in terms of features. I do like the playlists-as-tabs idea, though.


Cyberpunk meets opium wars...

Actually sounds like a not so bad setting for a book/game/movie ngl; sure sounds like a garbage setting for a world to actually live in.


Same here; I'm all for a "ban" but it doesn't have to be all social media, just force them to use a simple rules-based algorithm for minors.

But meh, it's a broader issue anyway. Just look at the puritanical obsession some people have with pornography too.

Young people these days are getting infantilised way too much imho and that's just not healthy. There needs to be a safe environment to transition into adulthood with gradual exposure to all kinds of things, rather than turning 18 and suddenly being a different category of person entirely.


...or that anyone who thinks "I'd start a company if I could become the next Apple, but otherwise it's pointless" is someone you want running a company.


With cookies disabled I get a blank website, which is a massive red flag and an immediate nope from me.

Can't imagine someone incapable of building a website would deliver a good (digital) product.


They did build a website though. It even looks pretty nice. The restriction you've placed on yourself just prevents you from viewing it.


But.. but.... we MUST track you! That's the whole purpose of our site /s


Indeed; this is like stealing the icemen's ice to fuel refrigeration. If some technology makes your job (partly) obsolete, too bad for you. But you shouldn't be forced to contribute to this technology against your will.


My opinion continues to be that AI companies should have to prove that they have consent to use any and all data their models are trained on.

That is, be able to prove a) that their models were actually trained on the data they claim, b) that they have consent to use said data for AI training, and c) that this consent was given by the actual author or with the author's consent.

I want platforms like SoundCloud, YouTube, etc. to be required to actually send out an e-mail to all their users: "hey, we will be using your content for AI training, please click here to give permission".


Even if you can enforce this somehow, other countries will not. Unlike copyright and patent law for consumer products and content, getting the upper hand in the AI race could have huge implications down the line. So the only government that would enforce this is one that has no chance of competing in this space in the first place (the EU).


Let’s be honest - this is an argument that “the ends justify the means.” But that kind of reasoning should make all of us uneasy. Where do we draw the line? If we eliminated a third of the world’s population to stop global warming, would the noble goal make it acceptable? Clearly not.

We can’t ignore the ethical cost of how AI is being developed - especially when it relies on taking other people’s work without permission. Many of today’s most powerful AI systems were trained on vast datasets filled with human-made content: art, writing, music, code, and more. Much of it was used without consent, credit, or compensation. This isn’t conjecture - it’s been thoroughly documented.

That approach isn’t just legally murky - it’s ethically indefensible. We cannot build the future on a foundation of stolen labor and creativity. Artists, writers, musicians, and other creators deserve both recognition and fair compensation. No matter how impactful the tools become, we cannot accept theft as a business model.

https://arstechnica.com/tech-policy/2025/02/meta-torrented-o...


> So the only government that would enforce this is the one that has no chance of competing in this space in the first place (EU)

Mistral waves hello. They're alive and well, and competing well.

Also, while the AI Act and copyright are handled at the EU level, I always get the impression that anyone talking about an "EU government" simply doesn't understand the EU. If you think Germans or Slovaks are rooting for Mistral just because they're European, you'd be wrong - they'd be more accepting of it, maybe, due to higher trust in them respecting privacy and related rights, but that's it.


> Even if you can enforce this somehow

This is super simple to enforce.

For starters, we only really care about the companies developing big commercial AI products, not the people running said models on their home PCs or anything along those lines.

If a company starts offering a new AI model commercially, you simply send someone to audit it and make sure they can provide proof of consent, have their input data, etc.

In most cases, this should be enough. If there's reason to believe an AI company is actually straight up lying to the authorities, you simply have them re-train their model in a controlled environment.

Oh and no, you don't need cryptographically secure random numbers for AI training and/or operation, so you can easily just save your random seeds along with the input data for perfectly reproducible results.
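The seed-saving idea is easy to sketch. Below is a toy stand-in for a training run (hypothetical names, no real training framework): the only nondeterminism is the RNG, so recording the seed next to the input data makes the whole run replayable for an auditor.

```python
import random

def train_run(dataset, seed):
    # Toy stand-in for a training run: the only nondeterminism
    # here is the RNG, so fixing the seed fixes the output.
    rng = random.Random(seed)
    order = list(dataset)
    rng.shuffle(order)                      # e.g. batch/sample ordering
    noise = [rng.random() for _ in order]   # e.g. init/dropout draws
    return order, noise

# Record the seed right next to the input data, so the run
# can be replayed later and checked against the original.
manifest = {"seed": 42, "dataset": ["a", "b", "c", "d"]}

first = train_run(manifest["dataset"], manifest["seed"])
replay = train_run(manifest["dataset"], manifest["seed"])
assert first == replay  # identical replay from the same seed + data
```

Real training pipelines have more sources of nondeterminism (GPU kernels, data loader threads), but the principle is the same: log the seeds and inputs, and the run becomes checkable after the fact.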

This isn't an enforcement problem, it's a lobbying problem. Lawmakers are convinced that AI will solve their problems for them, when the reality is that it's still mostly speculation on someone at some point finding a way to make it profitable.

In reality, training and even running AI is still way too expensive for the companies selling it, even without considering the long-term economic impact of the harmful ways models are trained (artists contribute to GDP directly, open source projects do so indirectly, and free services like Wikipedia are an important part of modern society; AI is imposing massive costs on all of these).


>If a company starts offering a new AI model commercially, you simply send someone to audit it and make sure they can provide proof of consent, have their input data, etc.

Good luck getting China to agree to this. So you've just handicapped your own AI development in comparison to China.


Not with that attitude, for sure. If the US and/or the European Union do that, that's already a big chunk.


I wouldn't count on the US anymore, considering today's political climate. But in theory, EU+US could probably make a very compelling argument to China that if all three agree to play nice, nobody gets an advantage because of it, while everyone can benefit from a slower technological development leaving more time to figure out the societal problems.

Ultimately us random people on the internet can't say if China would want that or could be convinced with some other concessions unrelated to AI, but what we can say for sure is that, if China has the will to chill, the west has the negotiating power to match them.


AI poisoning might be the answer, but it needs a business case. Some sort of SaaS that artists can pay for to process their content that will flood and poison the crawlers.


AI poisoning—or rather how artists think AI poisoning works—is largely a myth that doesn't work in practice with these large foundation models.


> I want platforms like soundcloud, youtube, etc. to be required to actually send out an e-mail to all of its users "hey we will be using your content for AI training, please click here to give permission”.

Wouldn’t sites like YouTube already have a license to make money off your content anyway? This might be a little out of date, but it notes that even though you own the material you upload to YouTube, by uploading it you grant them a license to make money off it, sub-license it to others for commercial gain, make derivative works, etc. IANAL, but this suggests to me that if you upload it to YouTube, YouTube can license it to OpenAI without needing to inform you or get additional consent. [0]

[0]: https://www.theguardian.com/money/2012/dec/20/who-owns-conte...


You can tell I'm European, but I think in this case, at the time when consumers accepted these conditions they might not have had any way of understanding the ramifications, so effectively there is no informed consent.

In other words, now that people have had a taste of it and know what they're actually consenting to, companies should have to get renewed consent (positive consent, that is) instead of relying on "you agreed to this before it was even a real thing".

It kind of comes down to the "you can't put a 'you sell your soul' clause in the terms and conditions of a coffee subscription service" mentality: at what point do you simply say "this is obviously in bad faith" and declare it void, rather than "it's silly, but you signed it"?

And I think there's massive cultural differences regarding where that line is drawn.


Citing an article from 2012? I don't think much of this kind of training was happening then.


I agree - though I also imagine that the T&Cs were deliberately broad enough to ensure that they could adapt to whatever emerged.


Should an AI model be able to answer the question "which team won the Super Bowl in 2023" if there are thousands of articles out there containing that information, but not a single one of them has been licensed for use by AI?


If you could separate the information from the intellectual property, sure; but if the model is also capable of generating a similar article, that's the point where it starts infringing on the IP of all the authors whose articles were fed into the model.

So in practice, no, it shouldn't. Not because that information itself is bad, but because it probably isn't limited to just that answer.

In summary, I think it is definitely a problem when:

1. The model is trained on a certain type of intellectual property

2. The model is then asked to produce content of the same type

3. The authors of the training data did not consent

And slightly less so, but still questionable when instead:

2. The IP becomes an integral part of the new product

which, arguably, is the case for any and all AI training data; individually you could take any of them out and not much would happen, but remove them all and the entire product is gone.


No.

That's a funny example since broadcasters have to pay a fee to say "The Super Bowl" in the first place. If they don't, they have to use some euphemism like "the big game."

The answer is definitely no. You cannot use something that you don't have a license for unless it belongs to you.


I didn't know that about euphemisms, that's a great little detail - makes this hypothetical question even more interesting!

(For what it's worth, Claude disagrees and claims that news organizations ARE allowed to use the term Super Bowl, but companies that aren't official sponsors can't use it in their ads. But Claude is not a lawyer, so <shrug>)


> please click here to give permission

I want "please mail back this physical form, signed".

It's way too easy with dark-patterns to make people inadvertently click buttons. Or to pretend that people did.


I'm pretty sure SoundCloud has already done this. I don't believe they gave an option to opt out, though.


Then they are stealing people's content and imho should be punished for it. It is baffling that we let companies get away with "if you don't opt out you agree" or even "you can't opt out, delete your account or you agree" and often hide that in generic sounding terms & conditions updates.

Again, I think we should require companies to get the user to actively give their consent to these things. Platforms are free to lock or terminate accounts that don't, but they shouldn't be allowed to steal content because someone didn't read an e-mail.


What you need to understand is that what you are frustrated about isn't you being treated unfairly, it's something that was given to you unfairly in the first place now being yanked away again.

That "disproportionate" competition from minorities you are noticing now is an attempt at artificially offsetting a lack of competition from those minorities that might have allowed you to even get to that place in the first place.

Does this suck? I'm sure it does. But it's not as unfair as you're portraying it. And while you dress your complaints up in fancy words like "meritocracy" or "excellence", the core of what you're saying is still just that nothing should be done to correct injustices that have already taken place.

To use a metaphor: someone gifted you $10k, and now the police are telling you it was from a bank robbery, and you don't want to give it back.

And as for outcomes: the assumption with this strategy is that minorities aren't inherently inferior, and therefore excluding them from the talent pool is anti-meritocratic in the long term. Partially suspending meritocracy to correct these demographic problems is a strategy to, in the long term, gain access to as many talented individuals as possible, so that universities can really pick the best, and not just the best among white men.

So it's fairness + long term efficiency vs. short term meritocracy

Universities are picking the first option. And why wouldn't they: they can do the right thing and gain access to a larger talent pool. None of this is a "social narrative"; it's all just very simple decisions based on what we know reality to look like.

I think the problem has more to do with the inflationary process of universities turning into glorified trade schools, and more and more jobs requiring a degree as a proxy for effort and social status rather than because of any actual skills one needs a university to develop.

In a world where everyone needs a degree, universities will streamline the process of producing degree-holding workers, and the inevitable cost will be their ability to produce "excellence" in any meaningful sense. If you don't like this, the solution is to strengthen "lesser" forms of education so they are enough to qualify people for jobs. Then universities will go back to being places where people pursue science and innovation rather than a 9-to-5.


I love the irony, but also wonder if businesses will ever realise their treatment of their employees is creating this attitude.


Probably not. I've lost all faith in companies doing anything that helps workers instead of simply extracting value from, and then discarding them.


No, it's the opposite, actually. Friends don't compete, they cooperate. Turning cooperation into competition is how you execute a divide and conquer strategy. If a group is too strong, you convince them that they are each other's true enemy; once they're at each other, you swoop in.

Most "competition" in our modern world is artificial. Try figuring out who benefits from it and where this mentality originates. You'll find that those two tend to overlap :)

