
> Every company already keeps their data in Google Docs

The TOS for the (paid) enterprise products such as Google Workspace are totally different from those for the (free) consumer versions. For example, Google can't use the data for AI training.




The TOS of the OpenAI API (which tools like this use) don't allow model training on the data either. You might be confusing the API with ChatGPT, which has a different policy.


The important point being: with Google, Notion, Slack, Confluence, etc., your company has an actual contract with the vendor, properly signed, with provisions about data handling that your company (unlike you as an individual) can effectively enforce. There's an actual relationship created here, with benefits and losses flowing both ways.

The Terms of Service? They're worth less than it costs to print them out.

Case in point: right now, Microsoft is repackaging OpenAI models on their Azure platform and raking it in - the main value proposition is literally just that it's "OpenAI, but with a proper contract and an SLA". But companies happily pay up, because that's what makes the difference between "reliable and safe to use at work" and "violating internal and external data safety standards, and in some cases flat-out illegal".


So if the product from OP used Azure OpenAI, it would be okay? You say "companies happily pay up", but the pricing is exactly the same (source: my company is paying for both APIs).

It's been quite clear for some time that OAI and MS have very neatly split the market between them: OAI handles early development and direct customers, and MS handles enterprises. OAI would have to be a much bigger company than it is right now to properly serve enterprises, and MS already has all that infrastructure (legal, support, etc.). Seems like a sensible setup to me, and I don't see the need for enterprises to run open source models themselves in this context (of course I see their value in other respects, like avoiding lock-in and enabling specialization), especially if they're already on Azure.
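To make the "same API, different vendor" point concrete: as far as the wire protocol goes, the request body is identical for both providers; only the base URL and the auth header differ. A minimal sketch (the Azure resource and deployment names below are placeholders, not real values):

```python
def build_request(provider: str, api_key: str) -> dict:
    """Return URL, headers, and body for a chat completion request.

    The body is the same for both providers; only routing and auth differ.
    """
    body = {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "hi"}],
    }
    if provider == "openai":
        # OpenAI's own endpoint uses a Bearer token.
        return {
            "url": "https://api.openai.com/v1/chat/completions",
            "headers": {"Authorization": f"Bearer {api_key}"},
            "body": body,
        }
    if provider == "azure":
        # Azure routes through a per-customer resource and a named
        # deployment, and authenticates with an api-key header.
        # <resource> and <deployment> are placeholders.
        return {
            "url": (
                "https://<resource>.openai.azure.com/openai/deployments/"
                "<deployment>/chat/completions?api-version=2023-05-15"
            ),
            "headers": {"api-key": api_key},
            "body": body,
        }
    raise ValueError(f"unknown provider: {provider}")
```

So from the application's point of view the switch is a config change; the contract and SLA are what actually differ.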


IANAL, but I read the OpenAI API TOS earlier today, and they keep data for up to 30 days for "review", and multiple people can get access to it. If I had confidential data I would not send it to them. Microsoft, on the other hand, seems to have an option where absolutely no data is stored for their OpenAI service.


You use "review" in quotes, but I don't see that word used in reference to the 30 day policy. This is what I see:

> Any data sent through the API will be retained for abuse and misuse monitoring purposes for a maximum of 30 days, after which it will be deleted (unless otherwise required by law). [0]

The word "review" implies humans reading your data, but this wording only says it's retained for "monitoring". That could mean other things.

Or are you seeing the "review" wording somewhere else?

[0] https://openai.com/policies/api-data-usage-policies


It's true that the word "review" was my own; it was my interpretation of this paragraph:

>OpenAI retains API data for 30 days for abuse and misuse monitoring purposes. A limited number of authorized OpenAI employees, as well as specialized third-party contractors that are subject to confidentiality and security obligations, can access this data solely to investigate and verify suspected abuse.



