

Who is Zoom To Judge?

30.09.2020

Why Engaging in Content Governance is Not a Good Idea for the Successful Video Conferencing Platform.
It's everyone's favorite online video communication tool these days. We cannot count the times we have attended Zoom-based workshops in recent months, organized Zoom-based lectures or spoken at Zoom-based events. Apart from latent data protection concerns (taken seriously in Berlin, less seriously in Baden-Württemberg), which it usually tries to allay by pointing to its GDPR compliance, Zoom has managed to weather the current discussion on (and growing dissatisfaction with) private content governance rather well – the case in which it bowed to Chinese pressure and closed down the accounts of US-based organizers of Zoom meetings related to the Tiananmen massacre notwithstanding.
 
While Facebook and Twitter, and intermittently TikTok, have been heavily criticized for their approach to fighting online hate speech, Covid-19-related disinformation and untruths about elections, Zoom has escaped scrutiny. It is, after all, not a social network. That is true, but perhaps the honeymoon is over.
Promoting Terrorism as the No-Go of Content Governance
Last week, Zoom informed San Francisco State University that the company’s service may not be used for holding an event titled “Whose Narratives? Gender, Justice and Resistance: a conversation with Leila Khaled”. Khaled is a member of the “Popular Front for the Liberation of Palestine” (PFLP), a far-left Palestinian organization engaged in political as well as armed opposition against Israel that is listed as a terrorist organization in both the US and the EU. She became an icon for pro-Palestinian and far-left political factions across the globe by committing two plane hijackings (1969/1970), after both of which she was set free (in exchange for hostages taken in further hijackings by the PFLP). She has held public talks in numerous countries.

During these trips, she has sometimes been refused entry, leading her to hold talks via video call instead. This fallback strategy might just have become more difficult. On Wednesday, Zoom decided that “In light of the speaker’s reported affiliation or membership in a U.S. designated foreign terrorist organization, and SFSU’s inability to confirm otherwise, we determined the meeting is in violation of Zoom’s Terms of Service and told SFSU they may not use Zoom for this particular event”. And it was not just Zoom: according to participants, Facebook took down the event page, and YouTube, to which the University turned after Zoom’s refusal, stopped the livestream once Khaled started talking.

Zoom’s decision was preceded by protests at its headquarters in San Jose, California, and a letter addressed to Zoom by “The Lawfare Project”, a human rights organization with a mission to “protect the civil and human rights of the Jewish people worldwide”. This letter claims that Khaled’s “participation in this webinar and Zoom’s provision of its platform to communicate directly to U.S. college students (and others) may give rise to violations of 18 U.S.C. § 2339B”. Pointing to the U.S. Supreme Court decision in Holder v. Humanitarian Law Project, the Lawfare Project claimed that Zoom’s video conference constitutes a “service” under 18 U.S.C. § 2339B. The organization further argued that, due to the information contained in the letter, Zoom would now be providing this service “knowingly” for the purposes of § 2339B and would therefore risk criminal liability.
Was this accurate, and was Zoom’s possible (if implicit) fear of criminal liability under U.S. antiterrorism law justified?
§ 2339B prohibits “knowingly provid[ing] material support or resources to a foreign terrorist organization”. The term “material support or resources” is, albeit only broadly, defined in 18 U.S.C. § 2339A as “any property, tangible or intangible, or service, including (..) financial services, lodging, training, expert advice or assistance, safehouses, (..)”. As other researchers pointed out in 2019, the U.S. government has to date not taken the position that allowing a designated foreign terrorist organization to use a free and freely available online platform amounts to “providing material support”. Beyond that, the Supreme Court stated in Holder v. Humanitarian Law Project that “The statute reaches only material support coordinated with or under the direction of a designated foreign terrorist organization” (p. 26).

As Zoom would have acted neither in coordination with nor under the direction of the PFLP, it seems to us that Zoom did not have to fear criminal liability under U.S. antiterrorism law had it allowed the event to be carried out via its video call service. This case might therefore also serve as an example of the chilling effect that overly broad antiterrorism laws can have, and of the consequences that follow when such laws are combined with private rulemaking.
Did the event violate Zoom’s Terms of Service?
Zoom seems to have based its decision solely on Khaled’s “reported affiliation or membership in a U.S. designated foreign terrorist organization”. Did the Terms of Service give Zoom the right to cancel the event for this reason? The Terms mention terrorism in Section 3.d. “Prohibited Use” (ix), essentially stating that the user agrees not to use, or permit any other end user to use, the services in violation of any Zoom policy or in a manner that violates applicable law, including anti-terrorism laws and regulations. However, as we have seen above, a violation of the law was not imminent (neither by Zoom itself nor by San Francisco State University). With its decision to cancel the event, Zoom therefore likely went beyond its own Terms of Service.
How would this be solved in Germany?
In Germany, support for terrorism is, obviously, also prohibited. However, a much narrower corridor of interactions is penalized than in the U.S.: there is no general clause that criminalizes virtually all kinds of interaction with a terrorist organization. Next to the preparation of a “serious act of violent subversion” (§ 89a StGB), the establishment of relations with the intent to commit such an act (§ 89b StGB) and the financing of terrorism (§ 89c StGB), it is prohibited to provide access to instructions on how to commit such an act (§ 91 StGB). Interestingly, this clause covering “terror manuals” is generally not applicable if the instruction is provided only in a live feed such as a Zoom call, because § 91 StGB refers to an out-of-date definition of “writings” (Schriften) in § 11 (3) StGB. This definition requires an “embodiment of a certain duration” (see BeckOK/Valerius § 11 StGB at 68; MüKo/Radtke § 11 StGB at 172; and the current government draft for a modernization, replacing the list of carrier mediums with the generic term “content”, all in German). Due to the comparably limited scope of these offences, criminal liability of a content provider such as Zoom under antiterrorism law will remain very unlikely in Germany, even after the criminal code has been adapted to the supposed novelty of livestreams.
(How) Has Zoom dealt with other content moderation issues?
This recent development is not the first step Zoom has taken towards content moderation: in June, as mentioned above, Zoom closed the account of a group of prominent U.S.-based Chinese activists who had held a Zoom event to commemorate the anniversary of the Tiananmen Square Massacre. Zoom based its decision on an alleged violation of “local” (Chinese) law, even though the organizers of the event were in the US. After numerous complaints, Zoom acknowledged this and released a statement saying the accounts had been reinstated because the users were not based in China. Furthermore, Zoom said it had discovered a need for the capability to block meeting participants by country in order to comply with local law, promising to limit the impact of Chinese government requests to users within mainland China. This is not very different from the geolocation-based enforcement by major platforms of, for example, Germany’s laws on Holocaust denial. Zoom further promised to improve its “global policy to respond to these types of requests”, which is supposed to be outlined as part of its transparency report later this year.
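To make concrete what such geolocation-based enforcement involves, here is a minimal, purely illustrative sketch in Python. It is not Zoom’s actual implementation: the function names, the policy table and the toy IP-to-country lookup are all hypothetical stand-ins (a real system would query a GeoIP database at the point of admission).

```python
# Purely illustrative sketch of geolocation-based participant blocking.
# NOT Zoom's implementation: ip_to_country() is a toy stand-in for a real
# GeoIP lookup, and the policy table below is a hypothetical example.

# Hypothetical per-meeting policy: country codes whose legal demands the
# provider has decided to enforce against participants located there.
BLOCKED_COUNTRIES_BY_MEETING = {
    "commemoration-event": {"CN"},  # e.g. a mainland-China takedown demand
}

def ip_to_country(ip_address: str) -> str:
    """Toy stand-in for a GeoIP lookup; returns an ISO country code."""
    toy_geoip_table = {"203.0.113.5": "CN"}  # 203.0.113.0/24 is a reserved example range
    return toy_geoip_table.get(ip_address, "US")

def may_join(meeting_id: str, ip_address: str) -> bool:
    """Admit a participant unless their country is blocked for this meeting."""
    blocked = BLOCKED_COUNTRIES_BY_MEETING.get(meeting_id, set())
    return ip_to_country(ip_address) not in blocked

print(may_join("commemoration-event", "203.0.113.5"))   # False: located in CN
print(may_join("commemoration-event", "198.51.100.7"))  # True: outside CN
```

As the sketch suggests, the filtering itself is technically trivial; the hard governance question is which demands get entered into such a policy table in the first place, and under which standards.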
Should Zoom engage in content governance?
It seems that, with its growing popularity, a certain level of content “moderation” by Zoom is unavoidable. The live character of communication on Zoom might keep the scale of this endeavor relatively manageable compared to the challenges faced by Facebook, Twitter or Google. On the other hand, the new, pandemic-induced importance of the digital face-to-face communication provided by companies like Zoom underscores the need for fair, transparent and rights-sensitive content moderation standards for these services.

Although there seems to be a learning curve at Zoom regarding these questions, that curve remains significantly flatter than Zoom’s growth rates. As of now, the company is lagging behind the discourse on human rights and internet governance. The challenge lies not in the technicalities of geo-blocking a service into compliance with different national legal regimes. The actual challenge is to firmly anchor corporate policies in international standards, to regularly assess human rights impacts, to open up to external monitoring and, possibly, to join a multistakeholder platform such as the Global Network Initiative. All of these measures would also help insulate the company against contradictory or human rights-abusing national demands.
Conclusion
Facebook and Twitter, TikTok and YouTube have been dealing with questions of content moderation for a long time. Zoom has so far gotten a free pass. But perhaps this case in San Francisco is the harbinger of a time in which Zoom events will be more closely monitored. If so, one can only hope that Zoom’s moderation standards catch up with its human rights responsibilities.
 
On the website of the San Francisco State University professor co-organizing the debate, a linked post said that the event would go on: “The Zoom registration page continues to function”. So it seemed that Zoom’s ban was not total. The post went on to state: “There are a wide range of legal and political issues involved, especially given the Zoomification of higher education.” This is undoubtedly true. But the question of how to govern online meetings, and which rules apply, does not concern higher education alone, but society as a whole.
