Hacker News

They can (and do) revoke API access from bad guys. They can't do that to downloaded models. Look, I don't like what OpenAI does, but "API access, but no model download" makes sense if you are worried about misuse.



Every company out there says it will "revoke API access for misuse", but do they publish transparency reports? Whom do they even consider bad guys, and what counts as misuse?

I would be totally on their side if their reasoning were that they don't publish models so they can compete with FAANG more efficiently and earn more income for their research. But this moral reasoning sounds completely fake, because bad actors do have the funding to train their own models.


OpenAI published "Lessons Learned on Language Model Safety and Misuse" in March: https://openai.com/blog/language-model-safety-and-misuse/ It also promised a "forthcoming publication".

Examples of "real cases of misuse encountered in the wild" include "spam promotions for dubious medical products and roleplaying of racist fantasies".

Yes, some well-funded bad actors can train their own models, and OpenAI can't do much about that either way. But it is doubtful whether spam promoters of dubious medical products can, at least for a while.


It would be better for misuse to be criminalized and dealt with by national governments, rather than left to for-profit companies to decide what is or isn't "misuse".

Personally, I think using AI to manufacture advertisements on demand is misuse... but will Google agree with me?


National governments are often the criminals themselves, or their partner.


Bad actors can still get access to such models. It even makes them more dangerous than they would be if everyone had access to them.

Here's an alternative: progressively release better and better models (like 3B params, 10B, 50B, 100B) and let people figure out the best way to fight against bad actors using them.


> It even makes them more dangerous than they would be if everyone had access to them.

This is the sort of argument that proves guns would be less dangerous if everyone had access to them.


"An armed society is a polite society" - Robert Heinlein


He was an author of fiction. Fiction writers produce a lot of material that is on some deeper level "true" even though it is fictional on the surface... and a lot of material that isn't.

Or sometimes both, because it's only part of the truth. Maybe the complete version should be: "An armed society is a 'polite' society, but with very frequent killings and not-infrequent massacres." I think I prefer living in a less "polite" society.


"It even makes them more dangerous..." needs to be demonstrated, not asserted.


>if you are worried about misuses

Why bring morality into this? Is this the same discussion as car manufacturers refusing to sell cars to certain people because they are worried about misuse?


Automotive companies, in fact, have product liability. It's about liability, not morality.


When you release a project into the wild under a permissive license, aren't you essentially washing your hands of any "liability"?

> MIT: "IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."

Don't commercial licenses have the same or similar wording? So what liability are you talking about?


They can do that because they're giving the software away for free. Otherwise they couldn't be generous with their work; those disclaimers exist to permit the generous intent.




