Examining the Controversy Over Zoom’s AI Training Policies

Written By Zach Johnson

AI and tech enthusiast with a background in machine learning.

Zoom found itself in hot water recently after an update to its terms of service sparked outrage over the video communications company’s artificial intelligence (AI) training practices. The revised terms, which went into effect in March 2023, appeared to give Zoom extensive rights to use customer data such as videos, audio recordings, and chat transcripts to develop and improve its AI technologies. After intense backlash and scrutiny, however, Zoom clarified that it does not use customer content to train AI systems without consent, and updated its terms of service again to codify this policy.

This controversy highlighted ongoing tensions around user privacy, transparency, and the ethics of training AI algorithms on human data. It also demonstrated the power of public pressure to compel companies to revise problematic policies. While Zoom now states it will not use customer data for unauthorized AI training, questions remain around informed consent and participants’ ability to opt out. The debate illustrates the continued need to scrutinize tech companies’ data practices as the use of AI proliferates.

Zoom’s Previous Terms of Service

In March 2023, Zoom updated its terms of service, including new sections that authorized broad usage of customer data for product development and improvements. Two clauses in particular, 10.2 and 10.4, alarmed privacy advocates and users.

Section 10.2 stated that by using Zoom, customers consent to the company accessing, collecting, distributing, sharing, and storing “Service Generated Data” for “any purpose.” This encompassed telemetry data, product usage statistics, and diagnostic data collected by Zoom. The terms explicitly allowed using this customer data for “machine learning or artificial intelligence” like “training and tuning of algorithms and models.”

Section 10.4 went even further to grant Zoom a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” to use, distribute, modify, and even create derivative works from “Customer Content.” This included video, audio, and chat data generated through use of the platform. According to the terms, Zoom needed these broad rights to customer content for “providing services,” “product and service development,” and again, “machine learning” and “artificial intelligence.”

Critically, these clauses did not provide any way for customers to opt out of having their data used in these ways. The terms allowed Zoom to leverage user content for training AI systems without requiring further consent.

Backlash Over AI Training Rights

In August 2023, tech publication Stack Diary published an analysis of Zoom’s updated terms highlighting the provisions allowing AI training on customer data. The report argued these terms enabled Zoom to “train its AI on customer content without providing an opt-out option.”

This report and the associated Hacker News thread sparked intense debate around ethical AI development and appropriate data usage policies. Many users expressed outrage that Zoom could exploit private customer data to improve its products without explicit opt-in consent. There were calls to boycott Zoom and encourage legal challenges to the terms of service.

In response to the uproar, Zoom published a blog post by its Chief Product Officer Smita Hashim defending its practices. Hashim wrote that Zoom does not actually use customer audio, video, or chat content to train AI models without consent. She stated that while Zoom analyzes usage telemetry and aggregated statistics, customer content like meeting recordings remains under owner control.

Following the continued backlash, Zoom ultimately updated its terms of service again. The company added language stating it will not use customer audio, video, or chat content for unauthorized AI training. However, questions remain around Zoom’s use of “service generated data” and transparency.

Zoom’s Current Policies

Under its revised terms, Zoom now codifies that it will not use customer content for AI training absent consent. For experimental AI features like meeting summaries, account owners and administrators must opt-in to enable these tools. According to Hashim’s blog post, there is a separate consent process shown before customer content can be used to develop the AI services.

Hashim wrote that if a customer does agree to provide data, it is “used solely to improve the performance and accuracy of these AI services.” Zoom insists that data customers willingly share will not be passed to any third-party AI systems. Participants also receive in-meeting notifications when AI services are active.

However, Zoom’s updated terms still grant it broad rights to utilize “service generated data” like usage statistics and diagnostic data. Customers do not have a way to opt out of Zoom leveraging this data for “any purpose,” including AI development. This remains concerning to privacy advocates.

Transparency and Consent Questions

While Zoom now claims it will not use customer content for unauthorized AI training, important questions remain. When AI services are enabled by an account owner, do individual participants have any meaningful way to consent or opt out before their data is used?

Zoom states account administrators represent participants and can provide consent on their behalf. However, participants may not have visibility into these decisions or understand how their data is being used.

There are also transparency concerns around Zoom’s intentions. The company only clarified its practices after public pressure, suggesting a reluctance to be open. And Zoom still reserves the right to use non-content data like usage statistics to improve its AI systems.

This situation illustrates the need for clear communication and proactive transparency from companies leveraging customer data for AI. Truly informed consent requires accessible policies and plain-language explanations of how data will be used.

Ongoing Debate Around Data Privacy

The reaction to Zoom’s terms of service demonstrated that users remain wary of companies exploiting personal data without explicit permission. While Zoom updated its policies to be more restrictive in response, questions persist about meaningful consent and participant rights.

As companies roll out more AI technologies that rely on customer data, they should prioritize transparency and provide granular controls around data usage. Users must be empowered to make informed decisions about whether and how their personal information is used for AI training.

The furor over Zoom’s policies was a reminder that the public is watching how tech companies treat sensitive user data. While AI may hold great promise, it should not come at the expense of personal privacy and autonomy.
