How Zoom’s terms of service and practices apply to AI features (zoom.us)
336 points by chrononaut on Aug 7, 2023 | 179 comments



This is a nice statement, but the TOS is the important part, not what this marketing piece says.

> You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content.

> (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof


Yeah, if the TOS says one thing, and a blogpost pinky-promises another, only one of those two actually counts as far as I'm concerned.


I actually consider the act of doublespeak potentially insinuating ill intent.

What you do and what you say need to be consistent to preserve user trust; being inconsistent shows mismanagement by senior leadership, or even potential intent to deceive or spin the situation while still implementing the policy. It's the classic PR move: do one thing, say another.

Edit: Oh, and then this hits almost at the same time…

https://www.sfgate.com/tech/article/zoom-return-to-office-an...


>I actually consider the act of doublespeak potentially insinuating ill intent

I agree with this sentiment and it feels like a heuristic at this point.

I think it comes from a decade of watching corporate officers get caught red-handed and then try to denial-of-service the bad press with their jingoistic pablum.


> I agree with this sentiment and it feels like a heuristic at this point.

Well I might just take that heuristic and do some basic sentiment analysis to rank companies on their doublespeak.
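Something like this could be a crude starting point (just a sketch, assuming the Hugging Face transformers sentiment pipeline; the company names and text snippets are placeholders):

    from transformers import pipeline

    # Default sentiment model (DistilBERT fine-tuned on SST-2); good enough for a toy ranking.
    classifier = pipeline("sentiment-analysis")

    def signed_sentiment(text: str) -> float:
        """Positive score for upbeat text, negative for downbeat text."""
        result = classifier(text, truncation=True)[0]
        return result["score"] if result["label"] == "POSITIVE" else -result["score"]

    def doublespeak_score(marketing: str, tos: str) -> float:
        """Crude proxy: how much rosier the marketing reads than the legal terms."""
        return signed_sentiment(marketing) - signed_sentiment(tos)

    # Placeholder inputs; in practice you'd scrape each company's blog post and TOS.
    companies = {
        "ExampleCorp": ("We deeply value your privacy and trust.",
                        "You grant us a perpetual, worldwide license to your content."),
    }
    ranking = sorted(companies, key=lambda c: doublespeak_score(*companies[c]), reverse=True)
    print(ranking)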


If you do I’d love to see the results


This doublespeak should result in huge fines, but there's lobbying instead


And corrupt officials that accept the lobbying, which is the bad bit.


"It's not corruption, it's lobbying", the "it's not a bug, it's a feature" of politics.


Well, the practice of being able to take your case to the government is a great one. The government - already paid for with free money from non-government people working - is the one letting itself be corruptible.


The AI part isn't the bad part. It's the "use for marketing", like Gmail.

One implication is that lawyers can no longer use Zoom for anything which is attorney-client privileged.


How does this add up with E2EE?

They claim they can’t read anything passing through the server. Is there some other way they’ll get access?

https://support.zoom.us/hc/en-us/articles/360048660871-End-t....


e2ee is not the default, and is incompatible with some of their other features like "cloud recordings".

they also got caught being malicious and/or dumb in the past (https://www.businessinsider.com/china-zoom-data-2020-4) so there's no reason to bother with them now.


E2EE is not the default mode for Zoom.


I have not had a chance to read up on this yet, but does Zoom not have a paid or corporate version that would not fall under these same TOS? If not, it seems like a crazy shot in the foot, because lots of businesses use Zoom and I know most want, or are required, to use privacy-preserving programs.


Speaking of pinky promises:

> We will not use ... protected health information, to train our artificial intelligence models without your consent.

> We routinely enter into ... legally required business associate agreements (BAA) with our healthcare customers. Our practices and handling of ... protected healthcare data are controlled by these separate terms and applicable laws.

To my understanding there is nothing in the separate terms (BAA) or applicable laws (HIPAA) that actually guarantees this.

I don't want to assume malice, but if Zoom were acting in good faith I would have expected an updated BAA with an explicit, legally binding declaration regarding data access and disclosure, rather than a promissory blogpost vaguely referencing laws that don't themselves inherently restrict Zoom's use of PHI for training.

It would really only require a single term.


They have added:

> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.


But the TOS says:

> You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, … [rest already quoted several times in the thread]

so that promise not to do it without consent is meaningless, as they have consent from anyone who has agreed to the ToS, which anyone using the service/product has done.


The BAA (https://explore.zoom.us/docs/en-us/baa.html) looks the same. Did you mean to the TOS (which is subject to change as has now happened twice)?

The BAA still states: Zoom shall not Use and/or Disclose the Protected Health Information except as otherwise limited in this Agreement ... for the proper management and administration of Zoom ... Zoom will only use the minimum necessary Protected Health information necessary for the proper management and administration of Zoom’s business specific purposes

As discussed in my comments on yesterday's post, "proper management and administration" is vague language copied from HHS and can be construed as covering product improvement, as described in a legal analysis I quoted. I would also hazard a guess that a provider signing this agreement could be construed to have given implied consent.

Nevertheless, it would not be hard to explicitly state that this does not include training models in the only truly legally binding agreement at play. An explicit declaration was also recommended in said legal analysis.


For me, that BAA doc flashes up and immediately redirects me to the homepage.


Strange, only seems to be happening to me on mobile.

This should work: https://web.archive.org/web/20230808072418/https://explore.z...


Doesn't the TOS already count as consent?


That is where I am stuck.

Until the TOS clearly says otherwise, as far as I can see, the TOS at least implies this:

1. We will not use your data to train AI without your consent.

2. By accepting these TOS, you give your consent to everything in this long list (which includes training AI).


In Europe/UK, it is established law that agreeing to a TOS is not consent to everything in it, especially when it comes to the use of personal data for things that aren't strictly necessary to do what the user has asked, and especially given that for consent to be freely given there must be no difference in service depending on whether it is given or not.

However, many companies reckon they'll get away with it, the enforcement is not universal and rapid, and I don't trust Zoom as far as I can throw it on this particular score.


I wonder if a deceptive marketing post explaining a privacy policy change could be considered material if there was a lawsuit.


I wonder if you could find out when they violate their own terms.


Exactly the point of my last comment: no one will ever use this service again. Taking a hard NO on this forever.


The TOS has been updated to state the following:

> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.


Apparently the TOS can be edited at any time to say anything without notice.

It’s worth mentioning that per this agreement they can still do almost anything else with that data. They could put your face up on a billboard if they wanted to.

I’m out. I was a paying user. Can’t run fast enough from ever doing business with them again.


Per the agreement, using the service can probably be considered consent. I.e. "we won't use your data without your consent" translates in legalese to "if you accept the TOS, which you do if you use the app, then you've given consent."


That argument wouldn't fly in a court of law in the EU/UK, but many many companies try it on anyway.


Edit: they added the line back again -_-

They just made another edit and removed the line.

Here's the edit history going all the way back to March:

- 4/1 https://www.diffchecker.com/dCuVSMnp/

- 7/1 https://www.diffchecker.com/Zny4Rjqw/

- 8/7 https://www.diffchecker.com/ER0RHSdb/

- 8/8 https://www.diffchecker.com/RLiqgAaA/


Unfortunately it's like Gmail. Even if I'm not using them, enough other places do that it's not feasible to totally avoid them without adding complications to my life. Those complications might be worth it to you, but eg my therapist's office uses Zoom for the backend of their app. You'd never know it unless you're the kind of person to dig into that.


> Apparently the TOS can be edited at any time to say anything without notice.

Yes, as with most terms of service. It's one of the things that makes terms of service statements unreliable.


This is not true. At least in the US you are required to notify customers when you change the terms of service, and as far as I know in the EU as well.


I'm well aware of the US court ruling in the early 2000s that declared the users must be notified of ToS changes. And yet, companies frequently change ToS without providing such notification in a way that customers will actually notice anyway, so it doesn't seem to matter much.


How about analyzing all the meetings from company X in order to insider trade or perform some other kind of corporate sabotage.


> or perform some other kind of corporate sabotage.

Webex seems to be the "corporate" video conference service, when secrets are a concern, from my experience.


Which provider will you be moving to, and have you checked that their ToS are more acceptable?


If you don't need the "advanced" zoom features, I can highly recommend Jitsi. Free public service and you can self-host if you need it. We have been running a fully remote company with 90% of meetings via Jitsi since COVID with great success. I recommend Chrome over Firefox though, as FF's WebRTC support is behind Google's.


Doesn’t using a Google product taint the solution, if privacy is a major concern? Also, WebRTC leaks ip addresses when using a VPN.

What is the secure way to video conference? Webex? FaceTime offers end-to-end encryption, but cannot easily share non-macOS screens.

Articles like this sure make me like Apple sometimes

https://9to5mac.com/2023/07/20/apple-imessage-facetime-remov...


Yeah, though I don't really understand your privacy/security model. If you are worried about being targeted directly by Google or some three letter agency, you certainly should not use any of this technology. Instead, you need to go back to the basics, to something running on a physically segregated network, with a much smaller attack surface area. If you are just worried about being dragnet tracked, you are way better off with Jitsi than Zoom even if you end up using Chromium to connect to the session.


Jitsi but using Chrome doesn't fix anything privacy-wise.


Well for me it does, at least to some extent, as I use Chromium built by Arch Linux and run a new clean profile for each call. I highly doubt Google is collecting my actual data (not metadata), like AV streams or even web history, in this configuration.



This is the question! Is there anything anyone would recommend on security grounds?

Enterprise may resonate with something with Signal level e2ee.

Has anyone tried Element IO, as an example, in a commercial setting?

Asking for a friend.



Most "free markets" in the US -- certainly all that matter -- are dominated by 2 or 3 players, and when they all agree to do the same sorts of anti-consumer things, there's nowhere to go. Even if Teams or some other small player has better ToS now, who's to say they won't do the same thing tomorrow? Or do it anyway, without telling anyone?


> without your consent.

†but we'll prompt you with an overly long privacy policy that includes such consent, whose acceptance is just a checkbox you tick the first time you join a call, without really paying attention (or having a choice)


This seems to be a pretty big thing. Zoom seems to have been adopted by a lot of government processes.

How does this apply for court hearings, council meetings, etc…


ZoomGov likely has a different ToS


"...will not use...to train..." (emphasis mine)

They'll do inference all day long, but not train without consent. Only being slightly paranoid here, but they could still analyze all of the audio for nefarious reasons (insider trading, identifying monetizable medical information from doctors on Zoom, etc). Think of the marketing data they could generate for B2B products because they get to "listen" and "watch" every single meeting at a huge swath of companies. They'll know whether people gripe more about Jira than Asana or Azure DevOps, and what they complain about.


This is really important, and I would further emphasize the word our. Zoom doesn't need permission to "train" their own in-house artificial intelligence model when it can just transmit/sublicense that data to someone else who will train a model, or to an internal team who will use it (perhaps in few-shot prompts at scale, which is not technically training a model!) for "consulting services" in the broadest sense that that team can imagine.

I generally feel like the general slowdown of capital availability in our industry will lead/is leading to companies doing a lot more desperate things with data than they've ever done before. If a management team doesn't think they'll survive a bad couple of quarters (or that they won't hit performance cliffs that let them keep their jobs or bonuses), all of a sudden there's less weight placed on the long-term trust of customers and more on "what can we do that is permissible by our contract language, even if we lose some customers because of it." That's the moment when a slippery ethical slope comes into play for previously trustworthy companies. So any expansion of a TOS in today's age should be evaluated closely.


> If a management team doesn't think they'll survive a bad couple of quarters (or that they won't hit performance cliffs that let them keep their jobs or bonuses)

Agreed, and these kinds of short-term incentives are one of the problems with American companies. On the flip side...

Japanese companies think about products in decades -- the product line has to make money 10 years from now.

Some old European brands think about their brand in centuries -- this product made today has to be made with a process and materials that will make people in 100 years think that we made our products at the highest quality that was available to us at the time.


This is an overly broad generalization that does not hold up to scrutiny.

Konami vs Kojima and any of the DieselGate companies come to mind.


Got any data to back this up or are you just spouting racist tropes?


Making generalized claims about companies is racist now? I must've missed the memo.

I guess it makes sense. Companies are people, after all


They're not making claims about companies, they're making claims about cultures (American, European, Japanese).


No.

> American companies.

> Japanese companies

> Some old European brands

Unless the parent comment was edited, of course.


But don’t overlook how easy it is for them to bury obtaining consent in a 40 page agreement. It almost makes the things you highlighted moot.


A very powerful example of the difference between the words “will” and “shall.”

Hats off to zoom for the free contract drafting lesson!

[edit: thanks to HN commenter lolinder for the actual lesson].


> You can use "will" to create a promise--a contractual obligation. See Bryan A. Garner, A Dictionary of Modern Legal Usage 941-942 (2d ed., Oxford U. Press 1995). When used in this way, "will" is not merely stating a future event, it is creating a promise to perform:

> > Landlord will clean and maintain all common areas.

> In most basic contracts, I recommend using "will" to create obligations, as long as you are careful to be sure any given usage can't be read as merely describing future events. I'm generally against "shall" because it is harder to use correctly and it is archaic.

https://law.utexas.edu/faculty/wschiess/legalwriting/2005/05...


So, I get that you’re downvoting and contradicting, but are you sure we don’t agree? Let’s put it this way: I was observing precisely what you copied and pasted: this is a perfectly valid way to write a contract if you subsequently want to be able to argue either side.

Was Zoom careful to be sure any usage can't be read as merely describing future events? Will ambiguity exist until this agreement is tested?


Given that they use "Zoom will" 21 times in the document to clearly refer to their obligations—including 4 times in the paragraph entitled "10.5 Our Obligations Over Your Customer Content"—I seriously doubt they're counting on or will get points for any ambiguity.

Meanwhile not once do they use "Zoom shall". It's pretty clearly just a stylistic choice and not anything sneaky.

Edit: They even use "will" in the all-important phrase "you will pay Zoom". Surely you don't think they meant to be sneaky in that usage, and that is merely meant as a prediction of future events?


I stand corrected. I suppose reading the contract instead of snarking might have allowed me to avoid the embarrassment. Thank you.


Thanks for responding graciously!


When in Rome; you as well.


That still allows them to broadcast your meeting in a feature film of their choice. No, this is insane. The only reasonable option here is (1) end-to-end encryption or (2) ephemeral storage purely for the provision of the service.


Interesting that they limited the clause to only audio, video and chat. Looking at their definition of Customer Content it means that they can use, for example, documents you shared, transcripts, a very nebulous ‘outputs’ and visual displays.

So they can create a transcript of the conversation and train with it. Or train on any document you may have shared during a Zoom meeting.

I would have preferred the exception - if that was the intent - to enumerate the components of the Customer Content that they want to use for training.

10.1 Customer Content. You or your End Users may provide, upload, or originate data, content, files, documents, or other materials (collectively, “Customer Input”) in accessing or using the Services or Software, and Zoom may provide, create, or make available to you, in its sole discretion or as part of the Services, certain derivatives, transcripts, analytics, outputs, visual displays, or data sets resulting from the Customer Input (together with Customer Input, “Customer Content”);


It's gone. They removed this part.

https://www.diffchecker.com/RLiqgAaA/


Aaaand now it's back


Does that mean they could sell the data and let someone else train their AI model without consent?


> redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create

This is very Technologic


It's missing "pickle" and "ferment", but I guess there's not enough culinary influence at Zoom HQ.


They should use more Python.


They're also allowed to use it twice. Also, they can distribute it, and then redistribute it.


> You agree to grant and hereby grant Zoom [...] license and all other rights required or necessary to [...] create derivative works [...]

> [...] for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof [...]

Those two clauses, coupled with the current murky state of AI-from-copyrighted-material, should make everyone run screaming from Zoom as a product that can be entrusted with confidential information.


If my hours of watching LegalEagle on YouTube qualify me to give legal advice for a country I don't even live in, then yes: that marketing statement can have very real influence on how a court is willing to interpret the agreement between Zoom and its users.


The terms have since been updated. It's still an interesting lesson in spin when the change they made looks like this https://diffcheck.io/y-_v1LkBYyk4WBueHX


It's very hard to read that on mobile. What changed?


Funny how people have been granting pretty much the same batch of rights to Microsoft for decades when they used Skype, and nobody minded.

In addition Skype's ToS granted MS a licence to any and all IP you might discuss during a Skype call.[1] I wonder why no businesses were bothered by that...?

[1] ...decades ago, I don't know how it reads now, can't be arsed to check.


Times have changed. AI invokes existential dread.


It is perhaps a sign of how pervasive the assumption that a subscriber is an asset rather than a stakeholder has become that they thought it reasonable for a commercial communication service to claim copyright on the private communications of its subscribers.

Can you imagine the response to a telephone company saying it can use your voicemail messages for its own purposes.


It is illuminating to do a search for the word "consent" in the document, considering they say that they will not use things without it.

Seems like it might be worth them defining how consent works, IANAL. Otherwise can't they just change it in the website UI...? They don't promise any particular process for acquiring consent, but they sure declare that you give it to them for many, many other things.


They have amended the TOS with this statement:

> Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.


The marketing piece explains why you might want to let them have these rights. The TOS are what you're giving up in exchange.


We can drop them from all of our portfolio companies and I personally can as well.

But what are the best alternatives at the moment?

Zoom is very popular…


I find google meet pretty good. The zoom desktop client kills my mac, but the web client doesn't have great performance. Meet performs better in the browser for me


Every time I try to use Meet in an up-to-date Google Chrome on a MacBook, it makes my entire browser stutter. And it's much worse than the Zoom desktop client when it comes to audio cancelling and visual quality.


If you use Slack their video calling is pretty good these days, at least for internal calls.


> but the TOS is the important part,

You think a TOS that's biased towards the company, or the customer, has any legal effect on a Chinese domestic corporation that's subject to the laws and regulations of the Ministry of State Security? Really?


The main point is that "You agree" is the customer, not the end user.

Which means that in the case where Zoom is provided to you by your employer, they claim that the employer consent is just what matters. Once more "Fuck GDPR".


Zoom was proved as being dishonest and untrustworthy years ago (https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto...) but companies never cared about the privacy of the people they forced to use the software and all the security problems and leaks that followed still didn't discourage people from using it. I doubt their mining of personal data for AI will stop people from using it either.

Although there are a ton of alternatives out there they are all "too hard" or something, so since Zoom mostly works OK most of the time and is dead simple to use it will continue to win out over everything else.

My position on Zoom hasn't changed since 2020: Anyone using Zoom will continue to get exactly what they deserve.


> they are all "too hard" or something

Users vote with their feet based on cost and UX. While inertia is certainly a thing, there's a reason Zoom got a foothold while others didn't. The ability to send out links and having people join the meeting without creating accounts or manually installing clients first is huge in most real-world scenarios. Could you do that with... Teams? Skype? Hangouts, if they weren't Gmail users? Do those people know anyone with the knowledge and gumption to host something?

From the beginning of my involvement in FOSS like 25 years ago, developers have griped about non-technical users being intimidated, or even just really annoyed, by UX resistance that we consider trivial. That's the primary reason open source alternatives are alternatives rather than the standard in user-facing software.


> The ability to send out links and having people join the meeting without creating accounts or manually installing clients first is huge in most real-world scenarios

This is how it used to be, until HTTPS and Cloudflare-like hosting solutions were guzzled down like electric Kool-Aid. All you really needed was an IP, and perhaps a port number if the endpoint was behind NAT.


I worked in technical support when techniques like that were de rigueur. Your average adult would be exponentially less likely to navigate that process successfully than a Zoom invite. Sure, it's more simple from a technical perspective, but not even close to as simple from a user flow standpoint.


I notice in these comments that it's really hard to drive home user experience and friction (sometimes!), and that what seems easy to you (just an IP address!) really is like talking magic to your average user.


Yeah, and that makes sense to some extent. We often understand that it's hard to think like someone without our knowledge when writing docs for other developers, and that's more extreme with end users. The hubris is what kills me, though. Empathy is hard, but developers tend to be so arrogant and dismissive, sometimes even disgusted by others not knowing what they know. It's kind of funny: on one hand, many people in tech-- developers, ops, IT, etc.-- have an intense feeling of superiority based on their knowledge, but act like people without that knowledge are imbeciles.

Even worse, it tends to go hand-in-hand with astonishing overconfidence in their understanding of other fields. I've had two other primary careers-- designer, and chef-- and I can't count how many developers have "explained" parts of those fields to me despite knowing I'm a subject matter expert. Like their astonishing intellect and that related Metafilter thread they skimmed makes them authoritative. I get supernova-intensity cringe when I hear other developers shoot off Dunning-Kruger-esque oversimplifications of other fields' genuinely hard problems.

When I hear developers talk about the arrogance of designers, I can't help but laugh... then maybe cry. Many seem genuinely aggrieved that interface designers have more input on the interface design than they do.


Those on HN even today continue to echo the famous Dropbox comment with gusto.


Indeed


You can do that in Google Meet. I just checked in an incognito window, the only thing I needed to enter to join a meeting was a name.

I never really understood why people like Zoom's UX, I find it unintuitive and awkward.


Smart update. At the beginning of the pandemic-- the most critical time for Zoom's ramp-up-- Google Meet was only available to Google Workspace users. Their userbase is currently about half of Zoom's, which is a pretty impressive ramp-up. Looks like users have voted with their feet to put it in that position, but that's where inertia does actually make a difference. People aren't constantly evaluating all of their options to see which one works best in case they should switch; they use whatever solution works best when they're evaluating solutions and then stick with that until there's a problem or someone informs them of something better with minimal switching cost.


Let's also not forget that Zoom's internal processes and engineering standards for security were so poor that Apple implemented the rarely-used malware removal tool, because they screwed up their client software so badly:

https://techcrunch.com/2019/07/10/apple-silent-update-zoom-a...

https://www.theverge.com/2019/7/10/20689644/apple-zoom-web-s...

https://www.macrumors.com/2019/07/10/apple-update-remove-zoo...


Well, there's a couple of reasons that people use it:

1) Until recently, Zoom's video/audio quality knocked everyone else's into a cocked hat. I don't think that's the case, anymore. Looks like a lot of folks got off their butts, and improved their quality, but I haven't seen this mentioned anywhere, by anyone.

2) Everyone else is using it.

#2 is a biggie. Monopoly inertia is pretty hard to overcome, for people not in the tech industry (we'll change on a whim).

Zoom is not easy to use. Its settings are a mess, but everyone is used to dealing with the Zoom pain and doesn't want to switch.

We can be remarkably cavalier in dismissing non-tech folks, but I learned to stop doing that, many years ago. We're not the only smart people in the world.

People (in general) don't like getting sidetracked by their tools. They want to get a job done, and how they get it done is not irrelevant, but not that important to them. They develop and refine a workflow, which is usually heavily informed by their choice of tools, and that "wears a groove." They don't want to switch grooves; even if they are not enjoying their tool.

Most tech folks, on the other hand love tools. I had an employee that would stop his main project, and design a massive subsystem, just to make a simple command-line process a few seconds shorter. I had to keep on my toes. He was the best engineer I've ever worked with, but it was a chore to keep him focused.

Non-tech types are seldom like that, and we can sometimes miss it.

These are the folks that use our products, and we don't actually gain anything by disrespecting them, even when they really piss us off.

TL;DR: Want people to stop using Zoom? Produce something better, and make it something that non-tech folks will love.

That means easy to use, forget-about-it UX, and extremely high quality.


Not to mention other super-important features like background blurring. It has very recently become available in in-browser solutions, but it's still not exactly on par, and comes with fewer options (e.g. no green screen support). That alone justifies using Zoom.


I find that the issue with "Produce something better, and make it something that non-tech folks will love." is just that you get the "Twitter to Threads" sort of thing where you still have the centralized / walled garden / new boss same as the old boss problem.

Or you inherently can't make it "forget about it UX and extremely high quality" as most non techies define it. Because you have the issue that even if a company self hosts a meeting tool, they likely can't get the backbone connections Zoom etc can get. They at least need someone to use a URL to get there. It can be made mostly simple, but then you're back to some company running it - works for corporate use maybe, not for your home user. Even Signal lags compared to Zoom. And people really dislike Signal's phone number requirement, but it's what makes it somewhat possible to route connections for users.

What's a system that a home person could use that's not going to get them routing through one company's servers, but is actually simple enough to use?

The place where I do get somewhat exasperated as a techie is that the equivalent of asking for a phone number or address in any program that isn't an e-mail website is seen as "too hard". This makes pretty much any privacy respecting design impossible to scale beyond nerds.


What video platforms do you think improved their audio quality to be comparable to Zoom? Meet certainly hasn't, their noise cancelling is awful and isn't going to get better as long as they're stuck in the browser. (I'm fine with browser performance 99% of the time, but when I'm spending hours a day on calls I'm going to prefer Zoom and native-quality audio processing)


For some reason noise cancellation in Meet is gated by the tier of Google Workspace you’re on, it’s available on the $12/mo/user plan, but not on the $6 one.

This has always struck me as a weird business/product choice, since I imagine most users simply don’t know about this, assume Meet is just bad, and use other products, rather than having the idea to upgrade for better audio quality.


I never liked Meet.

I was thinking of Bluejeans and Teams. GoToMeeting seems to have improved a lot, as well. WebEx is doing much better, but I have only used Bluejeans and Teams, in the last couple of years.


WebEx certainly is.


Zoom's lawyers are trying to pull a fast one with these revised Terms. The new sentence on user consent being required to train AIs applies only to "Customer Content," not "Service Generated Data."

In sec. 10.4, Zoom says "... Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent."

Customer Content is defined in 10.1 and is broadly worded. But the first sentence of sec. 10.2 clearly states that "Customer Content" does NOT include "Service Generated Data."

Therein lies the rub. "Service Generated Data" = "any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services ...." (sec. 10.2).

Zoom is allowed to use Service Generated Data for any purpose (sec. 10.2) because it is not "Customer Content."

This "clarification" does nothing meaningful to assuage the serious data privacy concerns posed by Zoom's use of captured user video content.


Drawing from my experience in the eDiscovery field, I want to emphasize that there's no necessity for video or audio content. Rather, a tool can be developed to convert all audio into a text layer. This text layer can be extracted from the local files sourced from platforms like Zoom. Subsequently, AI/ML can be employed to process and analyze this data, providing valuable insights that compromise companies' intellectual property and sensitive information.
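A minimal sketch of that flow, assuming the open-source openai-whisper package (the file name is a placeholder):

    import whisper

    # Transcribe the meeting audio into a plain-text layer.
    model = whisper.load_model("base")
    result = model.transcribe("meeting_recording.m4a")
    text_layer = result["text"]

    # From here, any text-analysis step (keyword extraction, entity recognition,
    # an LLM prompt) can run over text_layer instead of the raw audio or video.
    print(text_layer[:500])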


Are embeddings (text embeddings, visual embeddings, etc.) of Customer Content "service generated data"?

This might be a loophole Zoom is trying to use: while they are technically not using customer data to train AI (the Zoom client doesn't send the video stream), the client can process data locally and send only embeddings (numeric vectors with no ties to customer PII), and those would still effectively be customer data.
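For illustration, a minimal sketch of what that local step could look like, assuming the sentence-transformers library (the model name and chat line are placeholders):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    chat_line = "Our Q3 numbers will miss guidance by 12%."  # Customer Content
    embedding = model.encode(chat_line)                      # a 384-dim float vector

    # The vector contains no literal words or PII, yet it is derived entirely from
    # the customer's content and is still useful for training, search, or clustering.
    print(embedding.shape)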


Called it a few days ago: https://news.ycombinator.com/item?id=37022827

It's baffling how many people in previous threads thought a company that gets most of its money from enterprise/business clients would burn its reputation by surreptitiously using client data to train its AI.


Given the company’s history, it doesn’t seem very baffling at all…

> Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems [0]

> Mac update nukes dangerous webserver installed by Zoom [1]

> The 'S' in Zoom, Stands for Security - uncovering (local) security flaws in Zoom's macOS client [2]

[0] https://arstechnica.com/tech-policy/2021/08/zoom-to-pay-85m-...

[1] https://arstechnica.com/information-technology/2019/07/silen...

[2] https://objective-see.org/blog/blog_0x56.html


That seems like an intentional business decision where the expected value of the fine < the perceived benefit. $85M is little.


$85M may be nothing to Apple, Facebook, or Google, but to Zoom it's a substantial amount. Their quarterly net income for Q1 2023 was only $15.4M.

(Even if revenue was much higher. Revenue doesn't tell you anything about how well a company can take a financial hit)


Aren't those fines inflated due to the companies having a large revenue/to make an example?


I wish, the bigger the company, the smaller the fines (proportionally) tend to be. Like slapping Google on the wrist with a $125m fine. "oh no, an amount we can make back in about an hour, whatever shall we do!"


>Like slapping Google on the wrist with a $125m fine. "oh no, an amount we can make back in about an hour, whatever shall we do!"

If the specific misconduct they got caught for netted them $x, and they got fined for $5x, who cares how much % of their global revenue is? That specific crime was still a net negative for them. I'm not sure why conglomerates should be punished more harshly just because they have more revenue overall.


There are no crimes being committed: they've not been taken to court, judged against the US criminal code, and found guilty, after which punitive damages would be what they get fined. They merely violated the law and were fined over that. The entire amount charged was the fine.

As for "who cares about %": every one who understands that fines that cost a company nothing, do nothing, all they say is "it'll cost you a trivial amount more to do this", turning what should be an instrument to rein in companies into simple monetary transaction that just goes on the books as an entirely expected and affordable expense.

It should be a crime, and they should have been found guilty in court over that, and the fine should be such that no matter your company's size, you can't risk running afoul of the law repeatedly. But it absolutely isn't.


I get where you're coming from, but corporations should be fined massively for bad behavior to act as a deterrent.

Personally I think that C-levels should automatically be barred from serving as corporate officers if the corporation is found guilty of criminality, as that puts responsibility on the people with the power to prevent it.


The issue that government has is that if they’re too harsh the companies move a lot of employment/operations out of their country.


Good. If you cannot afford the price of doing business illegally, close up shop or leave. And then you will be charged as a foreign company when you try to sneak your way back into doing business in the country you left, with the much more fun threat of being declared illegal and having your products and/or services pulled from the market.

While you thought you presented an argument against hefty fines, you actually gave the perfect reason for why they should be hefty. If illegal practices are affordable, they're not illegal. They're just the price of doing business. So make them hurt.


Yes, who could imagine such a thing from a company that leaked personal data without consent (https://www.bbc.com/news/business-58050391) and lied about end-to-end encryption for 5 years (https://www.ftc.gov/news-events/news/press-releases/2020/11/...).


Always wise to remember Hanlon's razor: "Never ascribe to malice that which is adequately explained by incompetence"

Occam's razor also applies here.


I'm done with Hanlon and his Razor. It's useless.

I now use Hanlon's Shaving Brush. It's a broad brush that I use to paint every sketchy move businesses make. "Is it malice? Or is it incompetence that merely looks like malice?". I don't care! I'll assume malice unless otherwise shown.

It's not my job to try and find out how evil shit was done accidentally. It doesn't matter if they "oopsied" into selling a firehose of my data to a "trusted partner" to analyze to death. Nobody actually gives a shit at these companies, so I need to treat them all as if they're malicious. If the underlying cause was a bit of incompetence a few years ago, that does nothing for me when I'm discovering the fuckery.


I think Hanlon's razor isn't true often enough to consider it a valid rule of thumb.

But, really, does it matter whether the bad thing is caused by incompetence or malice outside of a court of law? The bad thing happens either way.


Never ascribe to a hasty overgeneralisation that which is adequately explainable by observation and evidence.

I think attributing everything to incompetence vastly underrepresents intent. Maybe not all bad acts are malice, but too many are attributed to incompetence. Maybe it is not malice, but it can still be intentional actions against or indifferent to your interests.


Way overquoted, and it doesn't account for the incentives companies have to screw over their own customers in the pursuit of profits.

Maybe it's both: malice to kick off the effort and incompetence because they got found out.


There are two aspects to that saying.

The word adequately, and the fact it was made when presuming good faith was more reasonable.

These days it's better to assume everything is theft, fraud, or marketing.


I'm sorry, I didn't maliciously stab the guy, I was just really, really, really incompetent with handling this axe.


It doesn't apply in all situations, clearly.


I'd have thought stabbing someone with an axe would require extreme competence.


Let's please not pretend like philosophical razors are anything other than rhetorical devices. There's exactly zero data to back any of them up and it wouldn't matter if there was since each case is unique.

There is however research (that aligns with a lot of people's experience) to suggest psychopaths and sociopaths are very over represented in leadership:

https://www.sakkyndig.com/psykologi/artvit/babiak2010.pdf


I think as a rhetorical device it's good. Which is more likely: 1) Company has actively decided to burn all good will by being evil (we will use your private meeting content to train our ai without any way to opt out) 2) Company is dumb in their terms of service

The HN commenters tend to assume #1 when it comes to big companies, while more likely it's #2. The razors capture this situation well.


> Section 10.2 covers that there is certain information about how our customers in the aggregate use our product — telemetry, diagnostic data, etc. This is commonly known as service generated data. We wanted to be transparent that we consider this to be our data so that we can use service generated data to make the user experience better for everyone on our platform.

Well, I consider that to be my data and actually it is since I canceled our company's Zoom account when they adjusted their TOS. I'll take my data elsewhere.


Just out of curiosity—did you cancel the account in April when they changed the terms, or in August when people finally noticed?


August, when I saw the changes. I have to admit I'm not babysitting all of our vendor's TOS so am glad these things get surfaced on HN.


It would be great if there was a tool that could compare TOS of many services and tell me what changed since the last version. Basically diff the two.

There’s already tosdr.org but I’m not sure they have that feature.



You don't use any service that has analytics or internal error reporting? So no AWS, GCP, Cloudflare, MS Office, etc?


I can't help but use AWS but I definitely don't use that other stuff. That it's laden with spyware is one of the main reasons.

Especially for video communication, I'm not going to let some 3rd party spy on me and my business if I don't have to.


I guess you'll have a hard time trying to find a service, video calls and else, that doesn't have terms similar to these.


It's definitely a challenge, but another good thing about HN is people link alternatives in threads like this. I'm already checking out Jitsi (mentioned up thread) and it looks awesome. It's even open source:

https://jitsi.org/



> As part of our commitment to transparency and user control

Bold claim for a company that already lost a class action for deliberately lying to its users.


It's like a drug addict who only went to rehab to avoid jail and doesn't really want to stop. That next relapse is just right there waiting for them.


Except the drug addict has an actual excuse for getting back to it: addiction is a medical condition, whereas here it's just lack of any kind of decency.


I’m honestly surprised there’s not a medically recognized condition for being someone that operates like this company (and others) of having no moral fiber and is only about making the dollars now with shady tactics.


I think it's called sociopathy.


Nextcloud Talk has no TOS AFAIK. It is FOSS and self-hosted. Nextcloud AI tools run on your instance with the exception of the optional OpenAI plugin app. They are developing further FOSS AI models to replace those. https://www.youtube.com/watch?v=14gSiyAl9Fw


I love Nextcloud. What will eventually happen is that Zoom (and other services in this model) will cause some problem that is so catastrophic that companies realize the true risk of being so deeply entrenched into these toxic one-sided relationships and then will begin to adopt more self-hosted tools.


I wonder if they allow opting-out. My last company was in the healthcare space, and we used Zoom for all internal communication. Often this could contain sensitive information (PII etc) which would likely be a privacy violation if exposed.


Yeah medical info and HIPAA was the first thing I thought of when I saw this. My company doesn’t deal with medical data but I can’t imagine any company dealing with PII wants to use Zoom now.


You'd be surprised how often companies use PHI (legally) and healthcare provider networks don't care.

The few with smarter lawyers and IT departments, usually academic, do care, but a majority of the new "AI" health tech products I've heard pitched to hospitals use customer PHI for product development.


The GDPR page of Zoom is just a joke: https://explore.zoom.us/en/gdpr/

They basically claim that the customer (the one who signs the contract, not the Zoom user) who hosts the meeting is responsible for GDPR compliance by defining the right account settings. So if you are invited to a call you basically have no rights.


It was a poor choice on their part. Some large employers were already on the fence about dropping zoom in favor of teams, and this is just going to push them over the edge.


Nope

We’re not buying it

The days of the “corporate responsibility” letter are over. Nothing you say will be believed if it conflicts with your bottom line.

There’s saying in Texas…won’t be fooled again


What I really want to know is: can I, as someone compelled to use Zoom as an employee, opt out of Zoom training their AI on my voice, image, and data?

If my employer is the "customer" what say, if any, do I have as an individual?

By participating in a call am I giving Zoom permission to do things like train deep fakes of me?

This is all too Blackmirrory for my liking.


You're not compelled; you get to choose your employer.

It's much the same as the issue that was raised a few days ago, where your employer instructs or expects you to lie. The only way they have to "force" you is to threaten dismissal; this is insufficient to justify the terms "compel" or "force".

A police officer holding a Glock can compel you. Your boss cannot.


Resigning with a 3 months notice period would not fix the issue anyway. I would still have to use Zoom for that period.

I had the same issue when my EU-based employer was sold to an US company. My personal data suddenly went from EU to US-based HR systems without my consent. Resigning would not have fixed anything. My personal data will be in the US forever.


You don't even need to resign - you just simply refuse to use Zoom.


Question (I may know the answer): if the call is end-to-end encrypted, how can they use users' content?


Zoom is not E2EE by default, and when E2EE is enabled, many Zoom features are not available, including presumably the new AI features.


> how can they use users content?

Read the TOS again. They are only speaking about customer consent. Not "user". If you are not the one signing the contract or are just invited to a call (not hosting) you basically have no rights to define settings such as any form of opt-out (assuming they exist).


What are the equivalent terms for Microsoft Teams?


And Google Meet


Ahh, so they're pulling the old "You can leave the meeting if you don't like becoming part of our training data" routine, and that's what they mean by consent.


To pile on the alternative recommendations: I've enjoyed using Whereby[0] in the past. I also somehow keep forgetting about them in these discussions until someone reminds me. The main thing of it for me is it's been more reliable than Jitsi (though I do need to give Jitsi another try) and doesn't need to be self-hosted.

[0]https://whereby.com/


Too late, what you publicly advertised online and what you have in the TOS are conflicting and false advertising.

But this is par for the course for Zoom.


Seems like they are addressing it head-on


> For AI, we do not use audio, video, or chat content for training our models without customer consent.

Don't we provide consent when we agree to the TOS? And we can't use the product without doing that?


Zoom is getting desperate. They have been in decline since the end of the pandemic. That's what the analysts are saying.


Jokes on them, I'm already streaming my AI avatar.


Put that right in the TOS.


.. just today, we decided to cancel our K12 contract with zoom.

All 350+ workstations.


Moving to what?


What is K12? Kindergarten?





