
Doe v. Meta and the Future of the Communications Decency Act

Originally published on the Transnational Litigation Blog, May 25, 2022.

By Alexander Preve

Two law firms recently filed a class action lawsuit on behalf of Rohingya refugees in the United States seeking at least $150 billion in compensatory damages from Meta (formerly Facebook). The plaintiffs in Doe v. Meta allege that Meta’s algorithms were designed to promote hate speech and misinformation about the Rohingya, a Muslim minority population in Myanmar that has long been subject to discrimination and scapegoated by the Buddhist majority as terrorist “foreigners.” In response, Meta argues that it is immune from suit under the Communications Decency Act (“CDA”).

Although the plaintiffs attempt to plead around the limits of the CDA and Ninth Circuit precedent, these efforts are likely to fail. In an effort to sidestep Section 230 immunity, the plaintiffs contend that the district court should conduct a choice-of-law analysis solely on the immunity issue. This post first surveys the doctrinal landscape relating to Section 230. It then explains why the plaintiffs’ choice-of-law arguments are flawed and why the plaintiffs err in ignoring the presumption against extraterritoriality. It concludes with a proposal for how Congress should amend the CDA to achieve greater accountability when technology companies transition from publishers of information to creators of content.

Section 230 Immunity

This is not the first time a suit has alleged that a technology company has incited violence through its machine-learning algorithms. The Ninth Circuit has held that technology companies like Facebook, Twitter, and Google are immunized from civil liability when they act as “publishers” of information posted by third parties. The immunity provision of the CDA at issue, 47 U.S.C. § 230(c)(1), provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” From this provision, the Ninth Circuit has developed a three-fold test: a technology company enjoys immunity under the CDA if it is “(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.” However, the Ninth Circuit has also held that this grant of immunity only applies if the interactive computer service provider (like Meta) is not itself an information content provider, which the CDA defines as someone who is responsible (in whole or in part) for the creation or development of the offending content.

The plaintiffs in Doe v. Meta seek to leverage this distinction by characterizing Meta as an information content provider that created and proliferated anti-Rohingya hate speech through its machine-learning algorithms. It will be difficult for the plaintiffs to make this distinction in light of Ninth Circuit precedent. In Gonzalez v. Google LLC, the plaintiffs sued Google, Facebook, and Twitter, alleging that these technology companies facilitated the terrorist activities of ISIS by recommending ISIS recruitment videos to users and enabling users to locate other videos and accounts related to ISIS. The Ninth Circuit found that the plaintiffs had not alleged (1) that Google’s algorithms had prompted ISIS to post unlawful content, or (2) that Google’s algorithms treated ISIS-created content differently than any other third-party created content. Thus, the Ninth Circuit held that these companies’ algorithms were “content-neutral” and entitled to Section 230 immunity. In Doe, the plaintiffs seek to distinguish Gonzalez by arguing that Meta’s algorithms in Myanmar were “far from neutral” because Meta developed a “dopamine-triggering reaction mechanism” that intentionally promoted the most hateful and divisive content regarding the Rohingya. The plaintiffs also argue that the theory of liability in Gonzalez (which relied on a “matchmaking” theory based on Google’s recommendations of terrorist content to other users) is different from the plaintiffs’ primary theory of liability, which alleges that Meta used reward-based algorithms that induced users to post harmful content.

These arguments are unlikely to succeed. The Ninth Circuit has interpreted Section 230 extremely broadly, particularly when it comes to machine-learning algorithms that recommend content and connections to users. The plaintiffs creatively argue that the CDA is inapplicable because they are suing Meta as the designer of a defective product rather than in its capacity as a publisher exercising editorial discretion. This is one of the plaintiffs’ stronger arguments. The Ninth Circuit recently considered the same argument in a lawsuit against Snapchat. In Lemmon v. Snap, Inc., the plaintiffs alleged that Snapchat negligently designed a “filter” that incentivized users to send photos while operating vehicles at high speeds. The Ninth Circuit held that Section 230 immunity did not apply because the plaintiffs sought to hold Snapchat liable for its distinct duty to design a reasonably safe product (rather than seeking to hold Snapchat liable as a publisher). The problem is that the legal theory in Doe rests on Meta’s acts and omissions as a moderator of third-party content, as opposed to Meta’s acts as the designer of an app.

The question of whether the district court will grant immunity to Meta is an interesting one. But there is also the question of when the court will rule on the immunity issue. It is unclear whether Section 230 immunity is like other forms of immunity (e.g., qualified immunity) that are frequently applied at the pleading stage to immunize defendants not just from liability, but from being subject to costly and potentially embarrassing discovery. Here, the plaintiffs contend that the immunity issue can be postponed to a later stage of the litigation. As the plaintiffs note, the Seventh Circuit has followed this approach and allowed discovery to proceed. If the district court denies Meta’s motion to dismiss and permits discovery, this could have broader implications for Meta (even if the court later grants Section 230 immunity).

Doe v. Meta is being brought at a time when there is a growing consensus that Section 230 of the CDA, which was enacted in 1996, sweeps far too broadly. In Gonzalez, Judge Berzon concurred in the majority opinion but urged the Court “to reconsider [its] precedent en banc to the extent that it holds that section 230 extends to the use of machine-learning algorithms to recommend content and connections to users.” And as Professor Rebecca J. Hamilton notes in a recent article on accountability for platform-enabled crimes, there are bipartisan efforts to repeal or reform the CDA.

This case is a prime example of how times have changed. The plaintiffs in Doe note that the vast majority of Burmese citizens obtained cell phones in 2011, and Facebook arranged for them to use the Facebook app without incurring data charges. For many Burmese, Facebook was the internet, which made the dissemination of hateful rhetoric that much easier. The business reality today, when technology companies play a vital role in the dissemination of information, differs markedly from the reality that existed in 1996.

Presumption Against Extraterritoriality

The plaintiffs in Doe v. Meta argue that Section 230 does not apply on the facts presented. They claim that this statute conflicts with Burmese law, which (according to the plaintiffs’ expert declaration) does not immunize technology companies from liability for third-party content published on their platforms. The plaintiffs do not assert that Burmese law conflicts with the causes of action for strict product liability and negligence, which are governed by California law. Instead, the plaintiffs contend that, if the court finds Section 230 immunity otherwise applicable, a choice-of-law analysis is warranted solely as to Meta’s asserted defense of immunity under the CDA.

The plaintiffs’ arguments are flawed. The CDA is a federal statute, the applicability of which does not depend on state choice-of-law rules. To determine whether the CDA applies to claims arising out of Meta’s activities in Myanmar, the court should look instead to the federal presumption against extraterritoriality. The presumption against extraterritoriality is a canon of statutory interpretation that provides that, absent clearly expressed congressional intent to the contrary, federal laws are construed to only have domestic application. Under a two-step framework, courts first ask whether the statute provides a clear, affirmative indication that it applies extraterritorially. If not, then courts move on to the second step and ask whether the case involves a domestic application of the statute. To make that determination, courts look to the statute’s “focus.” The Supreme Court has defined the focus of a statute as the “object of its solicitude,” which can include the conduct the statute seeks to regulate and the parties or interests it seeks to protect or vindicate. If the conduct relevant to the statute’s focus occurred in the United States, then the case involves a permissible domestic application of the statute. But if the conduct relevant to the focus of the statute occurred in a foreign country, then the case involves an impermissible extraterritorial application of the statute.

In Gonzalez, the Ninth Circuit applied the presumption against extraterritoriality to Section 230. Since the CDA does not contain a clear, affirmative statement that it applies extraterritorially, the Ninth Circuit moved on to the second step and analyzed the focus of Section 230. Because the object of the CDA’s solicitude is to encourage providers of interactive computer services to monitor their websites by limiting their liability, the court held that “the relevant conduct occurs where immunity is imposed, which is where Congress intended the limitation of liability to have an effect, rather than the place where the claims principally arose.” The Ninth Circuit therefore concluded that the Gonzalez plaintiffs’ claims involved a domestic application of Section 230 and that the defendants were entitled to immunity.

The district court is likely to follow this reasoning and find that the Doe plaintiffs’ claims against Meta similarly involve a domestic application of Section 230. The plaintiffs seek to circumvent this analysis, arguing that “Gonzalez addressed the CDA extraterritoriality argument … [which is] an argument that Plaintiff[s] [are not] making here.” Unfortunately for the plaintiffs, the Ninth Circuit’s prior interpretation of Section 230 binds the district court regardless of whether the plaintiffs raise the presumption against extraterritoriality in their complaint.

The notion that the court must engage in a choice-of-law analysis under California law to determine whether a federal statute—the CDA—applies to the claims fundamentally misunderstands the nature of the inquiry. Courts do not apply state choice-of-law rules to determine the applicability of federal statutes. In support of their choice-of-law argument, plaintiffs cite Bassidji v. Goe, which applied California choice-of-law rules to decide whether an Executive Order prohibiting U.S. citizens from doing business with Iran applied to a contract made in Hong Kong. In Bassidji, however, the court ultimately held that federal law did apply by utilizing California’s public policy exception. Regardless of whether the Ninth Circuit was correct in Bassidji to examine the question through the lens of state choice-of-law rules, the plaintiffs have not cited any case in which courts have used such rules to displace federal law in favor of foreign law.

Conclusion

Doe v. Meta is a stark reminder of the need for Section 230 reform. Technology companies such as Meta exercise enormous influence over the dissemination of information, particularly in developing countries with low rates of digital literacy. Machine-learning algorithms can facilitate violence, misinformation, and atrocity crimes, and there is bipartisan consensus that the current system of self-regulation is outdated.

To address these issues, Congress should amend Section 230 of the CDA to provide that an individual or entity acts as an information content provider when it employs algorithmic friend-suggestion features. The Doe v. Meta complaint notes that Meta’s algorithms provide “friend suggestions” based on an analysis of users’ existing social connections on Meta and other behavioral and demographic data. The plaintiffs allege that Facebook’s friend suggestion algorithms connected violent extremists and susceptible potential violent actors who were sympathetic to the anti-Rohingya cause. By tailoring friend suggestions to individual users, Meta was arguably creating its own content rather than acting as a publisher exercising editorial discretion, which is the function that the CDA immunizes from liability. Friend-suggestion features are a creature of online social networks, which were virtually nonexistent when the CDA was enacted in 1996. By excluding friend-suggestion features from Section 230 immunity, Congress can modernize the CDA to (1) ensure some level of accountability when social media companies facilitate atrocity crimes and (2) more accurately capture the conduct that makes an individual or entity the publisher of third-party content.