Opinion
21-16210
08-09-2024
Argued and Submitted May 17, 2022
Appeal from the United States District Court for the Northern District of California Susan Illston, District Judge, Presiding
Jed Rubenfeld (argued), Yale Law School, New Haven, Connecticut; Robert F. Kennedy Jr. (argued) and Mary S. Holland, Children's Health Defense, Peachtree City, Georgia; Roger I. Teich, Roger Teich, San Francisco, California; for Plaintiff-Appellant.
Sonal N. Mehta (argued), Wilmer Cutler Pickering Hale and Dorr LLP, Palo Alto, California; Ari Holtzblatt, Molly M. Jennings, Allison Schultz, and Spencer Todd, Wilmer Cutler Pickering Hale and Dorr LLP, Washington, D.C.; Mark R. Caramanica (argued), Daniela Abratt-Cohen, and Carol J. LoCicero, Thomas & LoCicero PL, Tampa, Florida; Elizabeth H. Baldridge, Jenner & Block LLP, Los Angeles, California; Kevin L. Vick, Jassy Vick Carolan LLP, Los Angeles, California; for Defendants-Appellees.
John W. Whitehead, The Rutherford Institute, Charlottesville, Virginia; for Amicus Curiae Rutherford Institute.
Before: Eric D. Miller and Daniel P. Collins, Circuit Judges, and Edward R. Korman, District Judge
SUMMARY
First Amendment/Social Media
The panel affirmed the district court's dismissal of a complaint brought by the nonprofit advocacy organization Children's Health Defense (CHD) against Meta Platforms, Mark Zuckerberg, and others challenging Meta's policy of censoring Facebook posts conveying what CHD describes as accurate information challenging current government orthodoxy on vaccine safety and efficacy.
The panel noted that although Meta is a private corporation, in certain exceptional circumstances, a private party will be treated as a state actor for constitutional purposes. To be treated as a state actor, the private party must meet two distinct requirements: (1) the "state policy" requirement, which is satisfied when a private institution enforces a state-imposed rule instead of the terms of its own rules; and (2) the "state actor" requirement, which can be met by showing, among other things, willful participation in joint activity with the government or government coercion.
The panel held that CHD failed to meet the first requirement for state action because the source of CHD's alleged harm was Meta's own policy of censoring, not any provision of federal law. The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing. Moreover, CHD failed to allege any facts that would suggest an agreement between the government and Meta that required Meta to take a particular action in response to misinformation about vaccines or that the government coerced Meta into implementing a specific policy.
The panel held that CHD's inability to establish state action was fatal to all of its First Amendment claims: for damages under Bivens, for declaratory relief, and for an injunction. To the extent that CHD argued on appeal that Meta's disabling of its donation button was a "taking" under the Fifth Amendment, that claim failed for the same reason. The panel further rejected CHD's claim that the warning label and fact-checks Meta placed on its posts violated the Lanham Act, as well as CHD's civil RICO claim.
Concurring in part, concurring in the judgment in part, and dissenting in part, Judge Collins stated that CHD could plausibly allege a First Amendment claim for injunctive relief against Meta, and he therefore dissented from the majority's contrary conclusion. However, he agreed that all of CHD's other claims were properly dismissed, and he therefore concurred in the judgment as to those remaining claims and in Parts III, IV, and V of the majority opinion. In Judge Collins's view, CHD could adequately plead state action under the test articulated in Skinner v. Railway Labor Executives' Ass'n, 489 U.S. 602 (1989). Judge Collins would hold that, given all the circumstances, Meta's interactions with the Government with respect to the suppression of specific categories of vaccine-related speech, and in particular the speech of CHD and its founder and chairman, Robert F. Kennedy, Jr., sufficed to implicate the First Amendment. Because CHD could amend its complaint in a manner that states a cause of action for injunctive and declaratory relief, he would reverse the district court's judgment in favor of Meta to the extent it held to the contrary.
OPINION
MILLER, Circuit Judge:
Children's Health Defense (CHD) is a nonprofit advocacy organization dedicated to educating the public about what it sees as the dangers of vaccines. The organization regularly shares articles and videos on its Facebook page, but since 2019, Meta Platforms, Inc., the operator of Facebook, has restricted CHD's ability to do so, including by adding warning labels to alert users that, in Meta's view, the information that CHD shares is not accurate.
Believing that Meta was censoring its speech at the direction of the federal government, CHD brought this action against Meta; Mark Zuckerberg, Meta's CEO; and the Poynter Institute and Science Feedback, both of which contract with Meta to evaluate the accuracy of some Facebook content. It asserted claims under the First and Fifth Amendments as well as the Lanham Act, 15 U.S.C. § 1125(a), and the Racketeer Influenced and Corrupt Organizations Act (RICO), 18 U.S.C. § 1962. The district court dismissed the complaint. We affirm.
I
Because this is an appeal from an order granting a motion to dismiss, we assume the truth of the facts alleged in the operative complaint (here, CHD's second amended complaint). Ellis v. Salt River Project Agric. Improvement & Power Dist., 24 F.4th 1262, 1266 (9th Cir. 2022). After filing that complaint, CHD moved to "supplement" it with additional allegations, filed a motion for judicial notice that contained further allegations, and then moved to "further supplement" the complaint. The district court denied CHD leave to amend the complaint but considered the allegations within CHD's motions "as a further proffer of how CHD would amend the complaint if given leave to do so." We have likewise considered those allegations, and they are reflected in the description of the facts set out below.
CHD describes itself as an organization that seeks "to provide the public with timely and accurate vaccine and 5G and wireless technology safety information." To that end, CHD publishes articles and opinion pieces on its eponymous website and on its Facebook page. Those writings often describe purported links between vaccinations and various illnesses. CHD has posted articles that it claims show that "[u]nvaccinated kids are healthier" than their vaccinated counterparts. Sometimes, CHD posts messages from its founder, Robert F. Kennedy, Jr., in which he criticizes Dr. Anthony Fauci and Bill Gates and their efforts to encourage vaccinations.
Although public discussion of vaccines has taken on a new dimension as a result of the COVID-19 pandemic, some lawmakers have expressed concern about the proliferation of "vaccine misinformation" on social media platforms for several years. In February 2019, Representative Adam Schiff of California sent a public letter to Zuckerberg, asking (1) whether "medically inaccurate information" violated Facebook's terms of service; (2) what steps Facebook took to address "misinformation related to vaccines" and whether it planned to take additional steps; (3) whether Facebook allowed anti-vaccine activists and organizations to advertise on its platform; and (4) what steps Facebook took to prevent its algorithm from recommending anti-vaccine content to users. After COVID-19 vaccines became widely available, some lawmakers expressed renewed concern that social media companies like Meta were not doing enough to slow the spread of false information about the virus and vaccines. For example, Senator Amy Klobuchar of Minnesota wrote to Zuckerberg, stating that Facebook's "policies must be strictly enforced to limit users' exposure to misinformation" and urging him to "take action against people that are spreading content that can harm the health of Americans."
For its part, Meta announced in early 2019 that it had begun to "tackle vaccine misinformation" on Facebook by making that content less prominent in search results, rejecting ads that included it, and "exploring ways to share educational information about vaccines when people come across misinformation on this topic." It promised to "take action" against posts that shared "verifiable vaccine hoaxes," as defined by the World Health Organization (WHO) and the U.S. Centers for Disease Control and Prevention (CDC).
After those policies were announced, CHD noticed changes to the functionality and appearance of its Facebook page. A banner was placed at the top of its page, with a message that read:
This Page posts about vaccines
When it comes to health, everyone wants reliable, up-to-date information. The Centers for Disease Control (CDC) has information that can help answer questions you may have about vaccines.
Go to CDC.gov
Around the same time, Meta began flagging CHD's posts as containing factual inaccuracies. To identify content posted on Facebook that it considers inaccurate, Meta contracts with the Poynter Institute (which operates a website known as "PolitiFact") and Science Feedback. Specifically, Meta directs those services to review and classify content that its algorithms have identified as potentially containing "misinformation." If the reviewers determine that the content contains false or misleading information, it may appear under a grey overlay that informs readers that the post has been labeled false and refers them to a link so that they can "See Why." The link leads users to a new window that contains a short explanation of the classification-for example, that independent fact-checkers have determined that the information shared in the post is "factually inaccurate." The contents of the flagged post remain accessible, but visitors must click a slightly less prominent link in order to view it. If Meta determines that the post violates Facebook's Community Standards, it may be removed entirely.
After identifying repeated factual inaccuracies in CHD's posts, Meta deactivated the "donate" button on CHD's page, telling the group that it had violated Facebook's "fundraising terms and conditions." Before this happened, CHD had received more than $40,000 in donations through its Facebook page in 2019. Meta also prohibited CHD, Kennedy, and an agency employed by the two from purchasing advertisements on Facebook because, it said, CHD had "repeatedly posted content that has been disputed by third-party fact-checkers [for] promoting false content."
As part of its response to the COVID-19 pandemic, Meta took further action against CHD. It updated Facebook's policies to prohibit users from sharing any "claims that COVID-19 vaccines are not effective in preventing COVID-19," and it created a "Coronavirus (COVID-19) Information Center," which links to the CDC's website and other "leading health organizations" for information on the pandemic. Meta then began displaying messages to CHD's followers encouraging them to unsubscribe from its posts and referring them to the WHO for facts about COVID-19.
CHD alleges that Meta has also limited the visibility of its content using processes known as "shadow-banning" and "sandboxing." With shadow-banning, Meta allows a post to remain visible to the poster, and in some cases the poster's Facebook "friends," while hiding the post from other users. With sandboxing, Meta shows CHD's posts about vaccines to like-minded users but not to those who do not already share its views. CHD says that, as a result, traffic to its website from its Facebook page has declined significantly. Although CHD once had the ability to dispute the actions Meta took with respect to its page, Meta disabled that functionality, and it has not been restored.
In August 2020, CHD brought this action in the Northern District of California. It alleged that Meta, Zuckerberg, the Poynter Institute, and Science Feedback were working in concert with or, alternatively, under compulsion from the federal government to censor CHD's speech, in violation of the First Amendment, and to deprive it of its property right to fundraise on Facebook, in violation of the Fifth Amendment. Based on those alleged constitutional violations, CHD sought damages under Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics, 403 U.S. 388 (1971). It also sought injunctive and declaratory relief. CHD further claimed that the defendants violated the Lanham Act by labeling its posts as false, and that the defendants imposed those labels as part of a fraudulent scheme to divert donations away from CHD for the benefit of Meta's fact-checkers, in violation of RICO. CHD sought money damages as well as injunctive and declaratory relief for those claims too.
Meta, Zuckerberg, and the Poynter Institute moved to dismiss, and the district court dismissed the complaint without leave to amend. The court held that CHD's constitutional claims failed because "CHD has not alleged that the challenged acts constitute federal action." Specifically, the court determined that "general statements by the CDC and Zuckerberg about 'working together' to reduce the spread of health or vaccine misinformation, or to promote universal vaccination do not show that the government was a 'joint participant in the challenged activity.'" It emphasized that CHD had not "alleged that the government was actually involved in the decisions to label CHD's posts as 'false' or 'misleading,' the decision to put the warning label on CHD's Facebook page, or the decisions to 'demonetize' or 'shadow-ban.'" The court further concluded that "CHD has not alleged facts showing government coercion sufficient to deem Facebook or Zuckerberg a federal actor."
The district court also rejected the Lanham Act claim. It explained that "the warning label and fact-checks are not disparaging CHD's 'goods or services,' nor are they promoting the 'goods or services' of Facebook, the CDC, or the fact-checking organizations such as Poynter." For those reasons, the court concluded that "CHD's alleged injuries are not within the Lanham Act's 'zone of interests' and that the warning label and fact-checks are not 'commercial advertising or promotion'" within the scope of the statute.
The district court rejected the RICO claim because CHD had not established a predicate act of wire fraud. The court stated that "CHD's allegations . . . do not constitute wire fraud because CHD has not alleged any facts showing that defendants engaged in a fraudulent scheme to obtain money or property from Facebook visitors to CHD's page."
Science Feedback is a French nonprofit organization, and CHD was apparently unable to serve it with process. As a result, the district court dismissed all claims against Science Feedback without prejudice.
CHD appeals. We review the district court's grant of a motion to dismiss de novo. Wells Fargo Bank, N.A. v. Mahogany Meadows Ave. Tr., 979 F.3d 1209, 1213 (9th Cir. 2020). "To survive a motion to dismiss, a complaint must contain sufficient factual matter, accepted as true, to 'state a claim to relief that is plausible on its face.'" Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009) (quoting Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007)).
II
The First Amendment provides that "Congress shall make no law . . . abridging the freedom of speech." U.S. Const. amend. I. Within its scope, the First Amendment provides robust protection for free speech. But it has an important limitation: It "prohibits only governmental abridgment of speech" and "does not prohibit private abridgment of speech." Manhattan Cmty. Access Corp. v. Halleck, 587 U.S. 802, 808 (2019); see Prager Univ. v. Google LLC, 951 F.3d 991, 996 (9th Cir. 2020).
That limitation is itself an important protection for liberty. If the First Amendment were applied to private actors, it would mean, for example, that a newspaper would be unable to choose to print the work of only those writers whose views were consistent with its editorial positions, and it could instead be forced by the federal courts to open itself to all writers on a nondiscriminatory basis. See Miami Herald Publ'g Co. v. Tornillo, 418 U.S. 241, 254-58 (1974). "By enforcing [the] constitutional boundary between the governmental and the private, the state-action doctrine" developed by the Supreme Court to distinguish government from private action "protects a robust sphere of individual liberty." Halleck, 587 U.S. at 808; accord Lugar v. Edmondson Oil Co., 457 U.S. 922, 936 (1982) ("Careful adherence to the 'state action' requirement preserves an area of individual freedom by limiting the reach of federal law and federal judicial power.").
To begin by stating the obvious, Meta, the owner of Facebook, is a private corporation, not a government agency. Although that fact is highly relevant here, it does not quite end our inquiry because, in certain "exceptional cases," a private party "will be treated as a state actor for constitutional purposes." O'Handley v. Weber, 62 F.4th 1145, 1155-56 (9th Cir. 2023). The private party must meet two distinct requirements: (1) the "state policy" requirement and (2) the "state actor" requirement. Wright v. Service Emps. Int'l Union Loc. 503, 48 F.4th 1112, 1121 (9th Cir. 2022); see Lugar, 457 U.S. at 937; O'Handley, 62 F.4th at 1156.
To satisfy the state policy requirement, the alleged constitutional deprivation must result from "the exercise of some right or privilege created by the State" or "a rule of conduct imposed by the State or by a person for whom the State is responsible." Lugar, 457 U.S. at 937. To satisfy the state actor requirement, the party must "fairly be said to be a state actor," id., which requires that it meet one of four tests: (1) the private actor performs a traditionally public function, Halleck, 587 U.S. at 804; (2) the private actor is a "willful participant in joint activity" with the government, Lugar, 457 U.S. at 941 (quoting Adickes v. S. H. Kress & Co., 398 U.S. 144, 152 (1970)); (3) the government compels or encourages the private actor to take a particular action, Blum v. Yaretsky, 457 U.S. 991, 1004 (1982); or (4) there is a "sufficiently close nexus" between the government and the challenged action, Jackson v. Metropolitan Edison Co., 419 U.S. 345, 351 (1974).
This test for state action "ensures that not all private parties 'face constitutional litigation whenever they seek to rely on some state rule governing their interactions with the community surrounding them.'" Collins v. Womancare, 878 F.2d 1145, 1151 (9th Cir. 1989) (quoting Lugar, 457 U.S. at 937). At bottom, both components of the test ask us to evaluate whether the nature of the relationship between the private party and the government is such that "the alleged infringement of federal rights is fairly attributable to the [government]." Pasadena Republican Club v. Western Just. Ctr., 985 F.3d 1161, 1167 (9th Cir. 2021) (alteration in original) (quoting Sutton v. Providence St. Joseph Med. Ctr., 192 F.3d 826, 835 (9th Cir. 1999)). In other words, a plaintiff must allege facts supporting an inference that the government "is responsible for the specific conduct of which the plaintiff complains." Ohno v. Yasuma, 723 F.3d 984, 994 (9th Cir. 2013) (quoting Blum, 457 U.S. at 1004).
A
We first look to whether the "'source of the alleged constitutional harm'" is "a state statute or policy." Belgau v. Inslee, 975 F.3d 940, 947 (9th Cir. 2020) (quoting Ohno, 723 F.3d at 994). This requirement is satisfied when a private institution "enforce[s] a state-imposed rule" instead of "the terms of its own rules." O'Handley, 62 F.4th at 1156.
CHD's state-action theory fails at this threshold step. We begin our analysis by identifying the "specific conduct of which the plaintiff complains." Wright, 48 F.4th at 1122 (quoting American Mfrs. Mut. Ins. Co. v. Sullivan, 526 U.S. 40, 51 (1999)). CHD challenges Meta's "policy of censoring" posts conveying what it describes as "accurate information . . . challenging current government orthodoxy on . . . vaccine safety and efficacy." But "the source of the alleged . . . harm," Ohno, 723 F.3d at 994, is Meta's own "policy of censoring," not any provision of federal law. The closest CHD comes to alleging a federal "rule of conduct" is the CDC's identification of "vaccine misinformation" and "vaccine hesitancy" as top priorities in 2019. But as we explain in more detail below, those statements fall far short of suggesting any actionable federal "rule" that Meta was required to follow. And CHD does not allege that any specific actions Meta took on its platforms were traceable to those generalized federal concerns about vaccine misinformation.
In O'Handley, we rejected a claim similar to CHD's asserted by a Twitter user who objected to Twitter's decision to limit access to his tweets and suspend his account. See 62 F.4th at 1156. The user alleged that Twitter's actions were prompted by a message from the California Secretary of State identifying one of the user's tweets as spreading election-related "disinformation." Id. at 1154. But because "the company acted under the terms of its own rules, not under any provision of California law," we rejected the argument that Twitter "ceded control over [its] content-moderation decisions to the State and thereby became the government's private enforcer[]." Id. The same is true here.
B
CHD's failure to satisfy the first part of the test is fatal to its state action claim. See Lindke v. Freed, 601 U.S. 187, 198, 201 (2024); but see O'Handley, 62 F.4th at 1157 (noting that our cases "have not been entirely consistent on this point"). Even so, CHD also fails under the second part. As we have explained, the Supreme Court has identified four tests for when a private party "may fairly be said to be a state actor": (1) the public function test, (2) the joint action test, (3) the state compulsion test, and (4) the nexus test. Lugar, 457 U.S. at 937, 939.
CHD invokes two of those theories of state action as well as a hybrid of the two. First, it argues that Meta and the federal government agreed to a joint course of action that deprived CHD of its constitutional rights. Second, it argues that Meta deprived it of its constitutional rights because government actors pressured Meta into doing so. Third, it argues that the "convergence" of "joint action" and "pressure," as well as the "immunity" Meta enjoys under 47 U.S.C. § 230, make its allegations that the government used Meta to censor disfavored speech all the more plausible. CHD cannot prevail on any of these theories.
1
The joint action test asks "whether the government has so far insinuated itself into a position of interdependence with a private entity that the private entity must be recognized as a joint participant in the challenged activity." Pasadena Republican Club, 985 F.3d at 1167 (quoting Brunette v. Humane Soc'y of Ventura Cnty., 294 F.3d 1205, 1210 (9th Cir. 2002)). Our cases require a plaintiff to plead facts that give rise to an inference that the private entity's "particular actions are 'inextricably intertwined' with those of the government." Id. (quoting Brunette, 294 F.3d at 1211).
Crucially, it is not enough to show an agreement to do something; the private party and the government must also have agreed on what the something is. "The generalized allegation of a wink and a nod understanding . . . does not amount to an agreement or a conspiracy to violate [the plaintiff's] rights in particular." Brunette, 294 F.3d at 1212. Thus, a plaintiff must show some specificity to the understanding between the private actor and the government. See Dennis v. Sparks, 449 U.S. 24, 25-28 (1980) (agreement between litigants and judge to issue an illegal injunction preventing production on oil leases); Adickes, 398 U.S. at 152 (agreement between store employee and police officer to arrest the plaintiff); Swift v. Lewis, 901 F.2d 730, 731-32 & n.2 (9th Cir. 1990) (agreement between prison and contractor to remove petitioner's religious classification), superseded by statute on other grounds, 42 U.S.C. § 2000cc-1(a); Howerton v. Gabica, 708 F.2d 380, 384-85 (9th Cir. 1983) (agreement between landlord and police officer to evict plaintiffs).
CHD has not done so. In an effort to show an agreement, CHD points to various statements from Meta and government officials, but they suffer from a critical lack of specificity. For example, CHD highlights the CDC's statement that it has "engaged" social media companies to "contain the spread of misinformation." That could mean many different things, thanks to ambiguity in both the verb ("Containing" misinformation by removing it entirely? By making it less prominent on the site? By leaving it as is but countering it with different information?) and its object (What counts as "misinformation"?). The "generic promotion of a public purpose" falls far short of establishing that Meta's "particular actions are inextricably intertwined with those of the government." Pasadena Republican Club, 985 F.3d at 1167, 1170 (internal quotation marks omitted). Without plausible allegations of an agreement to take specific action, we cannot say that Meta's conduct is fairly attributable to the government.
CHD asks us to infer a more specific agreement among Meta, the Biden Administration, the CDC, and the WHO, in which Meta took direction from those entities about what content to censor. But the facts that CHD alleges do not make that inference plausible in light of the obvious alternative: that the government hoped Meta would cooperate because it has a similar view about the safety and efficacy of vaccines. See Twombly, 550 U.S. at 556-57. Links between a social media company's communications with the government and its decisions about what content to permit "must be evaluated in light of the platform's independent incentives to moderate content." Murthy v. Missouri, 144 S.Ct. 1972, 1988 (2024) (rejecting similar claims that government officials and agencies pressured platforms to unconstitutionally suppress COVID-19 misinformation). Statements that government officials "engaged" with social media companies to ensure that those companies "understand the importance of misinformation and disinformation and how they can get rid of it quickly" are consistent with the explanation of parallel objectives and do not show the specific agreement that CHD suggests. As for the WHO, it is an intergovernmental agency, not part of the federal government, so its meeting with Meta in which it "discussed" Meta's "role in spreading 'lifesaving health information'" is irrelevant to the state-action inquiry.
In a belated attempt to bolster its theory, CHD asks us to take judicial notice of various documents showing that the government works with social media companies to educate them about what it considers to be misinformation on their platforms. "[W]e rarely take judicial notice of facts presented for the first time on appeal." Reina-Rodriguez v. United States, 655 F.3d 1182, 1193 (9th Cir. 2011). We do not think it is appropriate to do so here, especially because the facts CHD presents are not free from "reasonable dispute." Fed.R.Evid. 201(b). To be sure, it would be proper to take judicial notice of the fact that the documents exist. Cf. Lee v. City of Los Angeles, 250 F.3d 668, 690 (9th Cir. 2001). But CHD's allegations rely on the substance of the documents and what the statements in them establish. Because those statements are "subject to varying interpretations," they cannot qualify for judicial notice. Reina-Rodriguez, 655 F.3d at 1193.
In any event, even if we were to consider the documents, they do not make it any more plausible that Meta has taken any specific action on the government's say-so. To the contrary, they indicate that Meta and the government have regularly disagreed about what policies to implement and how to enforce them. See Murthy, 144 S.Ct. at 1987 (highlighting evidence "that White House officials had flagged content that did not violate company policy"). Even if Meta has removed or restricted some of the content of which the government disapproves, the evidence suggests that Meta "had independent incentives to moderate content and . . . exercised [its] own judgment" in so doing. Id.
That the government submitted requests for removal of specific content through a "portal" Meta created to facilitate such communication does not give rise to a plausible inference of joint action. Exactly the same was true in O'Handley, where Twitter had created a "Partner Support Portal" through which the government flagged posts to which it objected. 62 F.4th at 1160. Meta was entitled to encourage such input from the government as long as "the company's employees decided how to utilize this information based on their own reading of the flagged posts." Id. It does not become an agent of the government just because it decides that the CDC sometimes has a point.
The circumstantial evidence that CHD proffers does not nudge its claims into the realm of plausibility either. CHD alleges, for example, that when Meta deactivated the "donate" button on CHD's page in May 2019, it must have done so because of the letter Representative Schiff sent to Meta that February. In the letter, Schiff expressed concern that misleading or incorrect information about vaccines was leading to a decline in vaccine uptake. He asked Meta to explain how it dealt with such content on its platform, and he "encourage[d] [Meta] to consider . . . additional steps." It is simply not reasonable to infer from those two events that Meta "takes direction from the federal government about what COVID-related speech to censor," as CHD would have it.
Failing to allege a plausible agreement between Meta and the government, CHD seeks to fill the gap by arguing that the CDC supplied Meta with a "standard of decision" by which allegedly unconstitutional actions were taken. Pointing to statements from Zuckerberg announcing that Meta defers to the CDC for "authoritative information," CHD asserts that the algorithms Meta implemented to flag misinformation apply "agreed-to, government-provided standards of decision."
CHD invokes Mathis v. Pacific Gas & Electric Co. (Mathis I), in which we allowed a Bivens action to proceed against PG&E, a public utility company, because we concluded that a former employee of a PG&E contractor had plausibly alleged that PG&E denied him access to a nuclear power plant "on the basis of some rule of decision for which the State is responsible." 891 F.2d 1429, 1432 (9th Cir. 1989) (quoting Rendell-Baker v. Kohn, 457 U.S. 830, 843 (1982) (White, J., concurring in the judgments)). Specifically, Mathis had been denied access to the plant and then fired because he was suspected of selling or using illegal drugs. Id. at 1430, 1432-33. He alleged that PG&E denied him access because the Nuclear Regulatory Commission had pressured and encouraged PG&E to adopt a policy of excluding from nuclear power facilities anyone who sold or used drugs. See id. at 1433; see also Mathis v. Pacific Gas & Elec. Co. (Mathis II), 75 F.3d 498, 502-03 (9th Cir. 1996) (requiring a showing that a "standard compelled" a certain decision). If that allegation were true, we reasoned, it would establish that PG&E's decision to exclude Mathis was fairly attributable to the government. Mathis I, 891 F.2d at 1434.
The allegations here are far different from those in Mathis I. Mathis alleged the existence of an informal government policy that required the utility to take a specific action in response to certain events. CHD has alleged that Meta banned "vaccine misinformation" and that it defers to the CDC for "authoritative information" on that topic. It has failed, however, to allege any facts that would allow us to infer an agreement between the government and Meta that required Meta to take a particular action in response to misinformation about vaccines. Further, as we have already explained, "misinformation" is far too amorphous a concept to serve as the type of "standard" contemplated by Mathis I. See 891 F.2d at 1433-34. And without a standard that can plausibly be said to require a specific outcome, it is not fair to say that Meta's "choice must in law be deemed to be that of the State." Blum, 457 U.S. at 1004.
Finally, CHD argues that financial benefits flowing from Meta to the government support an inference that Meta's conduct constitutes state action. In so doing, it invokes the Supreme Court's observation that a plaintiff may "sometimes" be able to prove government responsibility for a nominally private action if the government "knowingly accepts the benefits derived from unconstitutional behavior." National Collegiate Athletic Ass'n v. Tarkanian, 488 U.S. 179, 192 (1988). The putative "benefits" here are $35 million Zuckerberg and Meta have donated to the CDC Foundation and the "millions of dollars in free advertising" and reputational benefits Meta has given the CDC. But those benefits are not directly tied to the specific action being challenged in this case: restricting CHD's Facebook posts. CHD therefore has not alleged the kind of "significant financial integration" that we have found probative in determining whether the joint-action test is satisfied. Pasadena Republican Club, 985 F.3d at 1168 (quoting Brunette, 294 F.3d at 1213). To the contrary, Zuckerberg and Meta's donations to the CDC Foundation make the innocent alternative (that Meta adopted the policy it did simply because Zuckerberg and Meta share the government's view that vaccines are safe and effective) all the more plausible.
We acknowledge that there is a degree of uncertainty in determining how specific the details of an agreement must be before a plaintiff can be said to have plausibly alleged joint action. The Supreme Court has remarked that the state-action inquiry is a "matter of normative judgment" whose "criteria lack rigid simplicity," so some uncertainty is inherent in the doctrine. Brentwood Acad. v. Tennessee Secondary Sch. Athletic Ass'n, 531 U.S. 288, 295 (2001). But wherever the line may be, CHD is far from it. Rather than pleading facts that allow us to infer that the government and Meta agreed to censor speech on Facebook, CHD has alleged that the government hoped Meta would cooperate in its efforts to promote the safety and efficacy of vaccines.
Meta has a First Amendment right to use its platform to promote views it finds congenial and to refrain from promoting views it finds distasteful: "Like . . . editors, cable operators, and parade organizers," social media companies make "choices about whether-and, if so, how-to convey posts having a certain content or viewpoint" that "rest on a set of beliefs about which messages are appropriate and which are not." Moody v. NetChoice, LLC, 144 S.Ct. 2383, 2405 (2024). Even though companies like Meta "happily convey the lion's share of posts submitted to them," it remains "as much an editorial choice to convey all speech except in select categories as to convey only speech within them." Id. at 2406. "When the platforms use their Standards and Guidelines to decide which third-party content [their] feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection." Id.
Meta evidently believes that vaccines are safe and effective and that their use should be encouraged. It does not lose the right to promote those views simply because they happen to be shared by the government.
2
A private party may also be considered a state actor if it has acted because the government coerced or compelled it to do so. Under the coercion test, the government must have "exercised coercive power or . . . provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State." Blum, 457 U.S. at 1004. The government's "[m]ere approval of" private initiatives, however, "is not sufficient to justify holding the State responsible for those initiatives." Id. at 1004-05. Instead, the government must "convey a threat of adverse government action," National Rifle Ass'n of Am. v. Vullo, 602 U.S. 175, 191 (2024), or otherwise impose incentives that "overwhelm" and "essentially compel" the party to comply with its requests, O'Handley, 62 F.4th at 1158.
At the outset, we note that there is reason to doubt that a purely private actor like Meta, which was the victim of the alleged coercion, would be the appropriate defendant, rather than the government officials responsible for the coercion. In Sutton v. Providence St. Joseph Medical Center, for example, we said that "only the state actor, and not the private party, should be held liable for the constitutional violation that resulted from the state compulsion." 192 F.3d at 838 (quoting Barbara Rook Snyder, Private Motivation, State Action and the Allocation of Responsibility for Fourteenth Amendment Violations, 75 Cornell L. Rev. 1053, 1067 (1990)). But see generally Carlin Commc'ns, Inc. v. Mountain States Tel. & Tel. Co., 827 F.2d 1291 (9th Cir. 1987). We need not resolve that question here because CHD has not adequately pleaded facts supporting a coercion theory of state action.
CHD's theory of coercion turns on statements made by lawmakers threatening to hold social media companies "accountable" for failing to police "misinformation" on their platforms. Those statements do not meet the standard we have articulated for finding state action. Here again, the key case arises from the Mathis litigation, this time Mathis II. In his second appeal, Mathis argued that he had proved his claim that PG&E excluded him from the power plant because it applied a "standard of decision" imposed on the utility by the government. Mathis II, 75 F.3d at 502 (quoting Mathis I, 891 F.2d at 1434). But rather than demonstrating that the government pressured the utility into adopting a specific policy requiring his exclusion, Mathis showed only that PG&E "was aware of a generalized federal concern with drug use at nuclear power plants" and that PG&E "was looking to score Brownie points" with the government by adopting a policy to address that concern. Id. Mathis argued that he came "close enough" to proving coercion because his evidence gave rise to the inference that PG&E implemented a drug-use policy to "allay [the government's] concerns." Id. at 503. But we rejected that argument. We explained that Mathis "asks us to hold that regulatory interest in a problem transforms any subsequent private efforts to address the problem (even those expressly designed to obviate the need for regulation) into state action." Id. We refused to do so, adding that "[i]f the government is considering regulation, affected private parties can try to convince it there's no need to regulate without thereby transforming themselves into the state's agents." Id.
CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy. Instead, it cites statements by Members of Congress criticizing social media companies for allowing "misinformation" to spread on their platforms and urging them to combat such content because the government would hold them "accountable" if they did not. Like the "generalized federal concern[s]" in Mathis II, those statements do not establish coercion because they do not support the inference that the government pressured Meta into taking any specific action with respect to speech about vaccines. Mathis II, 75 F.3d at 502. Indeed, some of the statements on which CHD relies relate to alleged misinformation more generally, such as a statement from then-candidate Biden objecting to a Facebook ad that falsely claimed that he blackmailed Ukrainian officials. All CHD has pleaded is that Meta was aware of a generalized federal concern with misinformation on social media platforms and that Meta took steps to address that concern. See id. If Meta implemented its policy at least in part to stave off lawmakers' efforts to regulate, it was allowed to do so without turning itself into an arm of the federal government. See id. at 503.
CHD argues that the letters sent to Meta by Senator Klobuchar and Representative Schiff demonstrate the necessary coercion. In one of Klobuchar's letters, she urged Meta to "take action" against prominent anti-vaccine influencers such as Kennedy. In another letter, she asked Meta a series of questions about its handling of "vaccine-related misinformation," told it that transparency was "imperative," and said that "policies must be strictly enforced to limit users' exposure to misinformation." Schiff's letter was along similar lines. But in contrast to cases where courts have found coercion, the letters did not require Meta to take any particular action and did not threaten penalties for noncompliance. See National Rifle Ass'n of Am., 602 U.S. at 193 (agency superintendent promising to "ignore" insurance-law violations if insurer "ceased underwriting NRA policies and disassociated from gun-promotion groups"); Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 62 n.5 (1963) (state commission notifying book distributor of the names of "obscene publications" that were "objectionable for sale" and implying that the Attorney General would prosecute if the bookseller did not cooperate); Carlin Commc'ns, 827 F.2d at 1295 (county attorney threatening to prosecute if a utility did not terminate plaintiff's service); Backpage.com, LLC v. Dart, 807 F.3d 229, 230-31 (7th Cir. 2015) (county sheriff demanding that credit card companies "immediately cease and desist" from allowing their cards to be used to purchase advertisements on an adult website); Okwedy v. Molinari, 333 F.3d 339, 341-42 (2d Cir. 2003) (per curiam) (city borough president objecting to a message on a billboard designed by a media company and directing the company to contact his counsel).
Moreover, "[t]he power that a government official wields . . . is relevant to the objective inquiry of whether a reasonable person would perceive the official's communication as coercive." National Rifle Ass'n of Am., 602 U.S. at 191. "[D]irect regulatory and enforcement authority," such as the ability to "initiate investigations and refer cases for prosecution," makes coercion more likely. Id. at 192. By contrast, "[a] letter from a single Senator backed by no statutory mandate is far afield from [a] system of 'effective state regulation'" that would suggest coercion. Kennedy v. Warren, 66 F.4th 1199, 1210 (9th Cir. 2023) (quoting Bantam Books, 372 U.S. at 69). Unlike "an executive official with unilateral power that could be wielded in an unfair way if the recipient did not acquiesce," a single legislator lacks "unilateral regulatory authority." Id. A letter from a legislator would therefore "more naturally be viewed as relying on her persuasive authority rather than on the coercive power of the government." Id.
The statements here are firmly on the constitutional side of the sometimes "fine lines between permissible expressions of personal opinion and implied threats to employ coercive state power to stifle protected speech." Hammerhead Enters., Inc. v. Brezenoff, 707 F.2d 33, 39 (2d Cir. 1983); see O'Handley, 62 F.4th at 1158 (holding that Twitter's compliance with a particularized government "request with no strings attached" was the product of the company's "own independent judgment").
3
CHD also advances a hybrid theory of joint action and coercion that focuses on section 230 of the Communications Decency Act, 47 U.S.C. § 230. That provision states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Id. § 230(c)(1). It also immunizes providers of interactive computer services from civil liability for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Id. § 230(c)(2)(A).
The immunity from liability conferred by section 230 is undoubtedly a significant benefit to companies like Meta that operate social media platforms. It might even be the case that such platforms could not operate at their present scale without section 230. But many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government. If that were enough for state action, every large government contractor would be a state actor. But that is not the law.
CHD seeks to analogize section 230 to the regulatory scheme that the Supreme Court considered in Skinner v. Railway Labor Executives' Ass'n, 489 U.S. 602 (1989), but the analogy is inapt. That case involved Federal Railroad Administration (FRA) regulations authorizing railroads to conduct drug tests on employees suspected of violating certain safety rules. Id. at 606, 611. The FRA argued that because the regulations merely permitted testing, but did not require it, the tests would not constitute state action and would not implicate the Fourth Amendment. Id. at 614. The Court rejected that argument, reasoning that the "specific features of the regulations" demonstrated that "the Government did more than adopt a passive position toward the underlying private conduct." Id. at 615. In particular, the regulations preempted state laws, superseded "any provision of a collective bargaining agreement," and prohibited a railroad from "divest[ing] itself of" or "otherwise compromis[ing] by contract" the ability to conduct the tests. Id. (citation omitted). In addition, the regulations gave the FRA "the right to receive certain biological samples and test results" (the "fruits" of the searches) and mandated that any employee who refused to undergo a test "be withdrawn from covered service." Id. Those features, considered together, made the Court "unwilling to accept" the argument that any searches would be "primarily the result of private initiative." Id.
CHD argues that because the immunity in section 230, like the regulatory regime in Skinner, "removed all legal barriers" to the censorship of vaccine-related speech, Meta's restriction of that content should be considered state action. See 489 U.S. at 615. But section 230 is fundamentally unlike the regulations in Skinner. The statute is entirely passive: a provider can leave content on its platform without worrying that the speech of the poster will be imputed to it, or it may choose to restrict content it considers "objectionable" without the threat of lawsuits. Significantly, in Skinner, the removal of "legal barriers" was just one among several facets of the regulatory scheme that the Court cited in finding state compulsion. Id. Under that scheme, the government sought to encourage railroads both to test their employees and to share the "fruits" of those tests with the government. Id. As evidence of such encouragement, the Court noted that the government imposed on railroads a "duty to promote the public safety" and "mandated" that they fulfill that duty by preserving their state-conferred "authority to perform tests." Id. (citation omitted).
Such "indices of the Government's encouragement, endorsement, and participation" to promote particular private conduct are absent here. Id. at 615-16. Section 230 is just as protective of a provider's right to maintain "objectionable" content on its platform as it is of a provider's right to delete such content. The "legislative grace" providers enjoy under Section 230 merely affords them the ability to choose whether to suppress certain third-party speech without risking costly litigation. By giving companies like Meta that freedom, the government has hardly expressed a "strong preference" for the removal of speech critical of vaccines. Id.
It would be exceptionally odd to say that the government, through section 230, has expressed any preference at all as to the removal of anti-vaccine speech, because the statute was enacted years before the government was concerned with speech related to vaccines, and the statute makes no reference to that kind of speech. Rather, as the text of section 230(c)(2)(A) makes clear, and as the title of the statute (i.e., the "Communications Decency Act") confirms, a major concern of Congress was the ability of providers to restrict sexually explicit content, including forms of such content that enjoy constitutional protection. It is not difficult to find examples of Members of Congress expressing concern about sexually explicit but constitutionally protected content, and many providers, including Facebook, do in fact restrict it. See, e.g., 141 Cong. Rec. 22,045 (1995) (statement of Rep. Wyden) ("We are all against smut and pornography . . . ."); id. at 22,047 (statement of Rep. Goodlatte) ("Congress has a responsibility to help encourage the private sector to protect our children from being exposed to obscene and indecent material on the Internet."); Shielding Children's Retinas from Egregious Exposure on the Net (SCREEN) Act, S. 5259, 117th Cong. (2022); Adult Nudity and Sexual Activity, Meta, https://transparency.fb.com/policies/community-standards/adult-nudity-sexual-activity [https://perma.cc/SJ63-LNEA] ("We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content."). While platforms may or may not share Congress's moral concerns, they have independent commercial reasons to suppress sexually explicit content. "Such alignment does not transform private conduct into state action." O'Handley, 62 F.4th at 1157.
CHD insists that it "is not arguing that Section 230 turns all content moderation by all websites into state action," but rather that "Section 230(c)(2), in combination with . . . sustained federal pressure" and "statements of strong preference" and "encouragement," turns Meta's handling of vaccine-related content into state action. As we have explained, those statements and requests do not establish either coercion or joint action. That section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation, whether because it shares the government's health concerns or for independent commercial reasons, does not transform Meta's choice into state action.
If we were to accept CHD's argument, it is difficult to see why would-be purveyors of pornography would not be able to assert a First Amendment challenge on the theory that, viewed in light of section 230, statements from lawmakers urging internet providers to restrict sexually explicit material have somehow made Meta a state actor when it excludes constitutionally protected pornography from Facebook. So far as we are aware, no court has ever accepted such a theory.
* * *
CHD's inability to establish state action is fatal to all of its First Amendment claims: for damages under Bivens, for declaratory relief, and for an injunction. And to the extent that CHD continues to argue on appeal that Meta's disabling of its donation button was a "taking" under the Fifth Amendment, that claim fails for the same reason.
Meta identifies several other hurdles to CHD's damages claims. For example, it argues that CHD cannot hold Meta, a private corporation, liable under Bivens, see Correctional Servs. Corp. v. Malesko, 534 U.S. 61, 71 (2001); that CHD has not adequately alleged that Zuckerberg was personally involved in any alleged constitutional violation, see Iqbal, 556 U.S. at 677; and that it would be inappropriate to extend Bivens to this context, see Egbert v. Boule, 596 U.S. 482, 491-93 (2022); Pettibone v. Russell, 59 F.4th 449, 454-55 (9th Cir. 2023). Because the state-action inquiry resolves all of the constitutional causes of action, we need not reach those issues.
Our decision should not be taken as an endorsement of Meta's policies about what content to restrict on Facebook. It is for the owners of social media platforms, not for us, to decide what, if any, limits should apply to speech on those platforms. That does not mean that such decisions are wholly unchecked, only that the necessary checks come from competition in the market, including, as we have seen, the market for corporate control. If competition is thought to be inadequate, it may be a subject for antitrust litigation, or perhaps for appropriate legislation or regulation. But it is not up to the courts to supervise social media platforms through the blunt instrument of taking First Amendment doctrines developed for the government and applying them to private companies. Whether the result is "good or bad policy," that limitation on the power of the courts is a "fundamental fact of our political order," and it dictates our decision today. Lugar, 457 U.S. at 937.
III
CHD claims that "the warning label and fact-checks" Meta placed on its posts violated the Lanham Act. The district court dismissed that claim because it held that (1) CHD's alleged injuries did not fall within the zone of interests that the Lanham Act protects and (2) the fact-checking labels were not statements made in "commercial advertising or promotion." We agree with the district court on the latter ground, so we need not reach the former.
As relevant here, the Lanham Act provides a cause of action against any person who, "in commercial advertising or promotion, misrepresents the nature, characteristics, qualities, or geographic origin of his or her or another person's goods, services, or commercial activities." 15 U.S.C. § 1125(a)(1)(B). We have defined "commercial advertising or promotion" to encompass "(1) commercial speech, (2) by a defendant who is in commercial competition with [the] plaintiff, (3) for the purpose of influencing consumers to buy [the] defendant's goods or services, . . . (4) that is sufficiently disseminated to the relevant purchasing public." Ariix, LLC v. NutriSearch Corp., 985 F.3d 1107, 1115 (9th Cir. 2021). "Commercial speech," we have explained, generally refers to speech that "'does no more than propose a commercial transaction.'" Id. (quoting United States v. United Foods, Inc., 533 U.S. 405, 409 (2001)).
By that definition, Meta did not engage in "commercial speech," and thus was not acting "in commercial advertising or promotion," when it labeled some of CHD's posts false or directed users to fact-checking websites. Meta's commentary on CHD's posts did not represent an effort to advertise or promote anything, and it did not propose any commercial transaction, even indirectly.
In arguing to the contrary, CHD emphasizes that we have looked to the "economic motivation" of the speaker in assessing whether speech is commercial in nature. Ariix, 985 F.3d at 1116 (quoting Hunt v. City of Los Angeles, 638 F.3d 703, 715 (9th Cir. 2011)). According to CHD, Meta placed labels on its posts in order to "promot[e]" Meta's fact-checkers, who compete with CHD "in the nonprofit health information market." It also says that Meta sought to fact-check CHD's posts to ensure that it continued to receive advertising revenue from vaccine manufacturers and to dissuade lawmakers from repealing section 230, "which is worth billions of dollars" to Meta. But economic motivation is a factor we consider "[w]here the facts present a close question," which the facts here do not. Hunt, 638 F.3d at 715; see Dex Media W., Inc. v. City of Seattle, 696 F.3d 952, 960 (9th Cir. 2012) (explaining that an "economic motive in itself is insufficient to characterize a publication as commercial"). More importantly, the economic-motivation test asks "whether the speaker acted primarily out of economic motivation, not simply whether the speaker had any economic motivation." Ariix, 985 F.3d at 1116. As we have explained, "[a] simple profit motive to sell copies of a publication or to obtain an incidental economic benefit, without more, does not make something commercial speech. Otherwise, virtually any newspaper, magazine, or book for sale could be considered a commercial publication." Id. at 1117.
Under any of CHD's theories, the allegations suggest at most that Meta acted with an economic motivation "to obtain an incidental economic benefit." Ariix, 985 F.3d at 1117. As described in the complaint, Meta's economic interests are far too remote from the challenged speech for it to be plausible that "the economic benefit was the primary purpose for speaking." Id. The district court therefore correctly concluded that the complaint did not state a claim under the Lanham Act.
IV
CHD also asserts a civil RICO claim. RICO makes it a crime for a person employed by or associated with an enterprise to conduct or participate in the conduct of the enterprise's affairs through a pattern of racketeering activity, and it allows "[a]ny person injured in his business or property by reason of a violation" to bring a civil damages action. 18 U.S.C. § 1964(c); see id. § 1962(c). As relevant here, the "racketeering activity" covered by RICO includes "any act which is indictable under" the federal wire fraud statute, 18 U.S.C. § 1343. Id. § 1961(1)(B). That statute, in turn, prohibits the use of electronic communications for the purpose of executing "any scheme or artifice to defraud, or for obtaining money or property by means of false or fraudulent pretenses, representations, or promises." Id. § 1343.
To survive a motion to dismiss, CHD needed to plead facts that would support a plausible inference that the defendants had engaged in a scheme or artifice to defraud and that it suffered injury by reason of that scheme. It did not do so. In the complaint, CHD described a scheme whereby Meta placed warning labels on CHD's posts with the intent to "clear the field" of CHD's alternative point of view, thus keeping vaccine manufacturers in business so that they would buy ads on Facebook and ensure that Zuckerberg obtained a return on his investments in vaccine technology. CHD has now abandoned that theory and instead focuses on a different theory that it advanced for the first time in response to the motion to dismiss. Under that theory, the object of the scheme was "to deceive visitors to CHD's Facebook page into giving their charitable dollars not to CHD, but to other, competing nonprofit organizations." The district court might have deemed that theory to be forfeited, but because it addressed the theory on the merits, we will do so as well.
CHD emphasizes that a RICO plaintiff alleging fraud need not show that it relied on any false statements by the defendant but can in some cases allege that the defendant harmed it by deceiving third parties. For example, in Bridge v. Phoenix Bond & Indemnity Co., the Supreme Court held that losing bidders in a tax-lien auction could bring a RICO action against rival bidders who engaged in a fraudulent scheme to win auctions by deceiving the seller. 553 U.S. 639, 649-50 (2008). The Court offered an example to illustrate the point: "[S]uppose an enterprise that wants to get rid of rival businesses mails misrepresentations about them to their customers and suppliers . . . . If the rival businesses lose money as a result of the misrepresentations, it would certainly seem that they were injured in their business 'by reason of' a pattern of mail fraud." Id. (quoting 18 U.S.C. § 1964(c)).
The rule in Bridge does not help CHD because the key deficiency in CHD's claims of wire fraud is not that the alleged deception targeted third parties; it is the disconnect between the alleged deception and the asserted injury to CHD. The statutory phrase "by reason of" requires proximate cause. Holmes v. Securities Inv. Prot. Corp., 503 U.S. 258, 268 (1992). For RICO purposes, that means the plaintiff must allege "some direct relation between the injury asserted and the injurious conduct alleged." Hemi Grp., LLC v. City of New York, 559 U.S. 1, 9 (2010) (quoting Holmes, 503 U.S. at 268). For example, in Bridge, the bidders adequately pleaded proximate cause because their auction losses were the "direct result" of the alleged fraud. 553 U.S. at 658. The rival bidders deceived the seller about the share of tax liens for which they were eligible, thereby reducing the losing bidders' share. Id. at 643-44, 658. "[N]o independent factors" accounted for the plaintiffs' loss. Id. at 658.
More recently, in Hemi Group, LLC v. City of New York, the Supreme Court held that a RICO plaintiff failed to plead proximate cause where a direct link was lacking. 559 U.S. at 10. The City claimed that a cigarette vendor committed fraud by neglecting to file required reports listing its purchasers with the State, obstructing the City's efforts to collect taxes from the unidentified purchasers. Id. at 5-6. The Court rejected the claim because the conduct "directly responsible" for the City's injury-the purchasers' failure to pay taxes-was "distinct from the conduct giving rise to the fraud"-the vendor's failure to file the purchaser reports. Id. at 11; see Anza v. Ideal Steel Supply Corp., 547 U.S. 451, 458 (2006). Unlike in Bridge, the losses to the City flowed from the "independent actions" of purchasers to withhold the taxes they owed. Hemi Grp., 559 U.S. at 15.
The causal chain that CHD proposes is, to put it mildly, indirect. CHD contends that Meta deceived Facebook users who visited CHD's page by mislabeling its posts as false. The labels that Meta placed on CHD's posts included links to fact-checkers' websites. If a user followed a link, the fact-checker's website would display an explanation of the alleged falsity in CHD's post. On the side of the page, the fact-checker had a donation button for the organization. Meanwhile, Meta had disabled the donation button on CHD's Facebook page. If a user decided to donate to the fact-checking organization, CHD maintains, that money would come out of CHD's pocket, because CHD and fact-checkers allegedly compete for donations in the field of health information.
The alleged fraud-Meta's mislabeling of CHD's posts-is several steps removed from the conduct directly responsible for CHD's asserted injury: users' depriving CHD of their donation dollars. At a minimum, the sequence depends on a series of independent user decisions: a user must intend to donate to CHD, click the link to a fact-checker's site, and then be moved to redirect that donation to the fact-checking organization instead. This causal chain is far too attenuated to establish the direct relationship that RICO requires. Proximate cause "is meant to prevent these types of intricate, uncertain inquiries from overrunning RICO litigation." Anza, 547 U.S. at 460.
CHD's theory also strains credulity. It is not plausible that someone contemplating donating to CHD would look at CHD's Facebook page, see the warning label placed there, and decide instead to donate to . . . a fact-checking organization. See Twombly, 550 U.S. at 555. The district court noted that CHD did not allege that any visitors to its page had in fact donated to other organizations because of Meta's fraudulent scheme. CHD is correct that an actual transfer of money or property is not an element of wire fraud, as "[t]he wire fraud statute punishes the scheme, not its success." Pasquantino v. United States, 544 U.S. 349, 371 (2005) (alteration in original) (quoting United States v. Pierce, 224 F.3d 158, 166 (2d Cir. 2000)). But the fact that no donations were diverted provides at least some reason to think that no one would have expected or intended the diversion of donations.
If that were not enough, the Supreme Court has cautioned us to ensure that fraud offenses be defined "with sufficient definiteness that ordinary people can understand what conduct is prohibited and . . . in a manner that does not encourage arbitrary and discriminatory enforcement." Skilling v. United States, 561 U.S. 358, 402-03 (2010) (quoting Kolender v. Lawson, 461 U.S. 352, 357 (1983)); see McDonnell v. United States, 579 U.S. 550, 576 (2016) (noting a due-process concern with the prospect of "prosecution, without fair notice, for the most prosaic interactions"). In seeking to hold the defendants liable for statements on matters of public concern, CHD ignores that caution. For example, under CHD's view, it would seem that a political party could bring a RICO claim against a rival political party on the theory that its allegedly false statements were part of a fraudulent scheme to divert political contributions from the plaintiff party to its rival. Such an application of the fraud statutes would raise serious First Amendment concerns. We reject CHD's invitation to construe fraud so broadly.
V
We affirm the district court's dismissal of CHD's claims against Science Feedback for insufficient service of process. Although the dismissal was without prejudice, we have jurisdiction over CHD's appeal. Unlike a dismissal with leave to amend, which permits further proceedings and therefore is not final, see WMX Techs., Inc. v. Miller, 104 F.3d 1133, 1136 (9th Cir. 1997) (en banc), the dismissal here means that the case "is over as far as the district court is concerned," so it is final and appealable, De Tie v. Orange County, 152 F.3d 1109, 1111 (9th Cir. 1998); see Constien v. United States, 628 F.3d 1207, 1210 (10th Cir. 2010) ("[D]ismissal without prejudice for failure of service is a dismissal of the action and not just the complaint because no amendment of the complaint could cure the defect.").
We review the district court's assessment of the adequacy of service for abuse of discretion, and we find none. See Rio Props., Inc. v. Rio Int'l Interlink, 284 F.3d 1007, 1014 (9th Cir. 2002). CHD made two efforts to serve Science Feedback, but both were unsuccessful. It then asked the district court to let it serve Meta's counsel instead, arguing that the contractual relationship between Meta and Science Feedback made such service appropriate. The district court denied that motion but stated that CHD could renew it if further efforts to serve proved ineffective. CHD never did so. Although it made another unsuccessful attempt at service, it did nothing else until after the district court entered judgment.
CHD argues that Science Feedback has had actual notice of this litigation, but that is not a substitute for service of process under Federal Rule of Civil Procedure 4. See Jackson v. Hayakawa, 682 F.2d 1344, 1347 (9th Cir. 1982). It also contends that the district court should have applied Rule 4(m), which requires that a court provide notice to the plaintiff before dismissing the action on its own motion when service has not been timely made. But Science Feedback is domiciled in France, and by its terms, Rule 4(m) "does not apply to service in a foreign country."
The motions for judicial notice (Dkt. Nos. 64, 70, 78, 86, 92) are DENIED.
AFFIRMED.
COLLINS, Circuit Judge, concurring in part, concurring in the judgment in part, and dissenting in part:
I believe that Children's Health Defense ("CHD") has shown that it could plausibly allege a First Amendment claim for injunctive relief against Defendant Meta Platforms Inc. ("Meta"), and I therefore respectfully dissent from the majority's contrary conclusion. However, I agree that all of CHD's other claims were properly dismissed, and I therefore concur in the judgment as to those remaining claims and in Parts III, IV, and V of the majority opinion.
Meta was known as "Facebook, Inc." until October 2021. For convenience, I will refer to it consistently as "Meta," even with respect to events before that date.
I
A
Before sketching the facts that I take as true for purposes of this appeal, I first address an important threshold question concerning what factual allegations we may properly consider.
Because this appeal arises from a district court order granting a motion to dismiss for failure to state a claim, we must take the well-pleaded allegations of the operative complaint as true and draw all reasonable inferences in favor of CHD. Shields v. Credit One Bank, N.A., 32 F.4th 1218, 1220 (9th Cir. 2022). We must "likewise take as true for purposes of this appeal the additional well-pleaded contentions" contained in any materials that were submitted to the district court as reflecting the substance of a proposed amendment to the complaint. Broidy Cap. Mgmt., LLC v. State of Qatar, 982 F.3d 582, 586 (9th Cir. 2020).
However, CHD has also submitted certain additional materials for the first time in this court, and the parties sharply disagree as to whether, and to what extent, we may consider these materials in assessing the legal sufficiency of CHD's claims. Under the unique circumstances of this case, I agree with CHD that we should take judicial notice of the existence of certain new, highly relevant documents that have only recently become available and that, like the materials submitted by CHD to the district court, effectively reflect specific additional factual allegations that CHD proposes to plead if it is given leave to amend on a remand.
I recognize that, on appeal from the dismissal of a complaint, a plaintiff generally cannot suggest new grounds for amending the complaint for the first time in this court. See, e.g., Riggs v. Prober & Raphael, 681 F.3d 1097, 1104 (9th Cir. 2012). However, CHD does not purport to add wholly new legal theories or claims, but rather only additional factual allegations in support of its existing claims. Moreover, its newly suggested amendments are limited to factual allegations based on documents that were concededly unavailable to CHD at the time of the district court proceedings and that have only become subsequently available through compulsory processes employed in other litigation or in legislative investigations or through Freedom of Information Act requests. We can take judicial notice of the limited fact that these documents exist and have become available to CHD during the course of this appeal. See Lee v. City of Los Angeles, 250 F.3d 668, 690 (9th Cir. 2001). While their contents cannot be judicially noticed for their truth, see id., CHD may properly draw on them in sketching the additional factual allegations that it could now make if it is given leave to amend.
I disagree with Meta's suggestion that this court's only procedurally proper option is to order a limited remand to the district court so that that court can first consider the wholly legal question of the viability of CHD's claims in light of these new potential allegations, after which we would then review the matter de novo. Cf. FED. R. CIV. P. 62.1; FED. R. APP. P. 12.1. While we could certainly insist that CHD proceed in that fashion, it is within our discretion, under these unique circumstances, to simply consider the legal sufficiency of such additional allegations ourselves in evaluating whether CHD can state a claim on which relief may be granted. The limited remand suggested by Meta here would be a pointless waste of time and judicial resources and would needlessly further postpone our obligation to decide the novel, difficult, and important legal questions raised by this appeal. In my view, given the weighty First Amendment interests at stake in this case and the considerable difficulties inherent in attempting to uncover facts concerning alleged behind-the-scenes interactions between Meta and Government personnel, we should exercise our discretion in favor of considering the significance of the additional allegations CHD could make in light of these newly available documents.
B
With these principles in mind, I take the following factual contentions as true for purposes of this appeal from the district court's order dismissing CHD's claims at the pleadings stage.
1
CHD is a Georgia-based non-profit membership organization founded in 2015 by Robert F. Kennedy, Jr. ("Kennedy"), who remains its chairman. Its professed mission is "to educate the public about the risks and harmful effects of chemical exposures upon prenatal and children's health, including from particular vaccines and environmental health hazards, such as 5G and wireless networks and products, and to advocate for social change both legislatively and through judicial action." "CHD's primary sources of revenue derive from membership dues and donations that CHD solicits on its website and, formerly, on its Facebook page."
With respect to vaccines, "CHD advocates for open and honest public debate on the efficacy and safety of the . . . entire Child and Adolescent Immunization Schedule" of the Centers for Disease Control and Prevention ("CDC"). CHD is sharply critical of the CDC's vaccine policies, stating on its website that "the CDC has become a mouthpiece for [the pharmaceutical] industry and has protected the 'all vaccines for all children' policy despite peer-reviewed science to the contrary." Indeed, CHD argues that the CDC has become so plagued by conflicts of interest that the subject of "vaccine safety should be taken from the CDC" altogether. CHD's website contains links to numerous articles concerning vaccines and other topics, including both advocacy pieces and scientific studies from "peer-reviewed, published journals."
Meta is a California-based corporation that operates, among other things, two large social media platforms, namely, Facebook and Instagram. According to the operative complaint, Facebook has "214 million users in the United States and 2.2 billion worldwide." Mark Zuckerberg is Meta's co-founder and its chairman, CEO, and controlling shareholder. As relevant here, Facebook enables users to create webpages on which they can share information, engage in advocacy, and solicit donations. Facebook users can also choose to "follow" other users' Facebook pages that are of interest to them. In November 2017, CHD agreed to Facebook's terms of service and created its own Facebook page. By late 2020, CHD's Facebook page, which it used to promote its views on vaccines and other matters, had more than 122,000 followers.
2
CHD alleges that, even before the Covid pandemic and the development of Covid vaccines, CHD's general advocacy concerning vaccine safety drew the attention of Government officials, who sought to pressure Meta to delete, or at least to reduce the visibility of, what those officials contended was "vaccine misinformation." In particular, CHD points to a February 14, 2019 letter from Congressman Adam Schiff to Meta asking it to identify what measures it currently took to address "misinformation related to vaccines on [its] platforms" and "encourag[ing] [it] to consider what additional steps [it] can take to address this growing problem." CHD alleges that, while ostensibly a strictly informational inquiry, Congressman Schiff's letter must be understood against a larger backdrop in which various legislators, through hearings and public statements, sought to pressure social media companies to delete or restrict a variety of different categories of disfavored content. CHD points, in particular, to April 2019 public remarks by House Speaker Nancy Pelosi in which she raised the possibility of removing the immunity for hosting third-party content that is granted to social media platforms by § 230 of the Communications Act of 1934. In those remarks, the Speaker noted that, when the subject of § 230 is raised with social media companies, "you really get their attention," and she stated that it was "not out of the question" that § 230's immunity "could be removed" by Congress. As she explained, "for the privilege of 230, there has to be a bigger sense of responsibility" on the part of social media companies.
I discuss in detail below the nature of this immunity and the critical practical role it plays in making possible the sorts of gigantic platforms operated by Meta. See infra section III.
Three weeks after Congressman Schiff's letter, Meta announced it was taking a variety of steps to reduce the visibility of "misinformation about vaccinations." On April 26, 2019, Meta also announced that it would remove "fundraising tools" from "Pages that spread misinformation about vaccinations on Facebook." In accordance with that policy, Meta deactivated the fundraising function on CHD's Facebook page six days later. Around the same time, Meta "permanently disabled the 'dispute' function on CHD's account so that neither CHD" nor Kennedy "could challenge," "through direct submission," Meta's actions against CHD. Although CHD and Kennedy, of course, could still send "written requests" to Meta objecting to its actions, Meta consistently "ignored" these requests. Meta also took steps to block the posting of some vaccine-related content, including on CHD's Facebook page. For example, on June 9, 2019, Meta blocked CHD from displaying, on its Facebook page, a videotape of an interview in which Kennedy "discuss[ed] a pending lawsuit against Merck & Co." concerning its Gardasil vaccine. On September 4, 2019, Meta also added warnings directly onto CHD's Facebook page, stating that "[t]his Page posts about vaccines" and that those who want "reliable, up-to-date information" about vaccines should "[g]o to CDC.gov," the webpage of the CDC.
3
After the growing Covid pandemic resulted in widespread lockdowns and societal disruption beginning in March 2020, many public officials began to express a focused concern over Covid-related "misinformation" on social media. For example, in early June 2020, the House Speaker sharply criticized social media platforms for failing to halt the spread of "COVID-19 disinformation." Later that month, two subcommittees of the House Committee on Energy and Commerce held a joint hearing on "Disinformation Online." In his opening remarks, the chairman of one subcommittee stated that social media platforms had "become awash in disinformation," including "lies about COVID-19." He stated that the "status quo is unacceptable," and that, "[w]hile Section 230 has long provided online companies the flexibility and liability protections they need to innovate and to connect people from around the world, it has become clear that reform is necessary if we want to stem the tide of disinformation rolling over our country." The chair of the other subcommittee stated, in her opening remarks, that "Section 230" had come to effectively "protect[] business models that generate profits off scams [and] fake news"; that this was "never the intent" of Congress; that, "since both courts and industry refuse to change, Congress must act"; and that she "look[ed] forward to working with [her] colleagues to modernize Section 230."
In September 2020, Zuckerberg stated in an interview that Meta was actively working with the CDC and the World Health Organization ("WHO") "to remove clear misinformation about health-related issues that could cause an imminent risk of harm." In October 2020, Zuckerberg and the then-CEOs of Twitter, Inc. and Alphabet Inc. (which operates the various Google products) were subpoenaed to testify at an October 28 hearing of the Senate Commerce Committee entitled, "Does Section 230's Sweeping Immunity Enable Big Tech Bad Behavior?"
In early 2021, as Covid vaccines were first becoming widely available, various Executive Branch officials took steps to address a specific concern about what they considered to be "misinformation" about these new Covid vaccines, as well as about Covid more generally. These officials included Robert Flaherty, Deputy Assistant to the President and White House Director of Digital Strategy, and Andrew Slavitt, who served as a White House Senior Advisor for the COVID-19 Response. Shortly after joining the new Administration, Flaherty reached out to Meta to inquire about its policies concerning Covid-related information on its platforms. On February 9, 2021, Meta responded by email to Flaherty with its "responses to [his] initial questions." In response to Flaherty's specific inquiry as to how Meta handled Covid-related claims "that are dubious, but not provably false," Meta stated that, while its practice was to "remove claims public health authorities tell us have been debunked or are unsupported by evidence," it also took measures to limit the distribution of content that "contributes to unfounded hesitancy towards the COVID-19 vaccine," even if such content "does not qualify for removal" (emphasis added). Meta also stated that, where information did warrant removal under its policies, "multiple violations" would lead to restrictions on the relevant Facebook account, including "suspend[ing] the entire Page, Group, or account." The email assured Flaherty that Meta "will begin enforcing this policy immediately." The next day, February 10, Meta took down Kennedy's Instagram account.
In a March 21, 2021 email to Slavitt, Meta confirmed that it would make a specific named employee "available on a regular basis" to interface with the White House, noting that the employee had already "been coordinating the product work that matters most to your teams." In that same email, Meta confirmed that, in response to Slavitt's prior inquiry about the available "levers for reducing virality of vaccine hesitancy content," Meta would make "additional changes that were approved late last week and that [it would] be implementing over the coming weeks." These included "reducing the virality of content discouraging vaccines that does not contain actionable misinformation" and removing "Groups, Pages, and Accounts" that posted vaccine-related content that, while truthful, was "sensationalized."
Four days later, two subcommittees of the House Energy and Commerce Committee again held a joint hearing on "disinformation" on social media platforms, and Zuckerberg and the CEOs of Alphabet and Twitter all testified at this hearing. In his opening remarks, one of the subcommittee chairs stated that he was concerned about, among other things, "antivaxxers, COVID deniers, QAnon supporters, and Flat earthers." Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation, Virtual Joint Hearing Before the Subcomm. on Commc'ns & Tech. & the Subcomm. on Consumer Prot. & Com. of the H. Comm. on Energy & Com., 117th Cong., Serial No. 117-19, at 2 (Mar. 25, 2021). The chairman of the full committee, in his opening statement, stated that "it is now painfully clear that neither the market nor public pressure will force these social media companies to take the aggressive action they need to take to eliminate disinformation and extremism from their platforms" and that "therefore, it is time for Congress and this committee to legislate and realign these companies' incentives." Id. at 12.
On April 13, 2021, Meta emailed Flaherty and Courtney Rowe, another White House official, to follow up concerning various questions they had raised about Meta's treatment of posts that might promote vaccine hesitancy. Attached to this email were "Vaccine Hesitancy Examples," including one specifically from CHD's Facebook page. The email defined "vaccine hesitancy" content as including, inter alia, truthful content that "discuss[es] the choice to vaccinate in terms of personal and civil liberties or concerns related to mistrust in institutions or individuals." Meta explained that it "utilize[s] a spectrum of levers for this kind of content," including "reducing the posts' distribution, not suggesting the posts to users, [and] limiting their discoverability in Search."
On May 6, 2021, Flaherty emailed Meta to complain about the inadequacy of Meta's efforts to demote truthful vaccine-hesitant content. He specifically complained that Meta's vaccine hesitancy policy was not "stopping the disinfo dozen"-a group of 12 individuals, including Kennedy, who were identified as spreading Covid "misinformation" online. On May 12, Flaherty followed up and complained that, as compared with other social media platforms, he thought that Meta was doing an inadequate job in downgrading "anti-vaccine" content. He elaborated:
But "removing bad information from search" is one of the easy, low-bar things you guys do to make people like me think you're taking action. If you're not getting that right, it raises even more questions about the higher bar stuff....
Youtube, for their warts, has done pretty well at promoting authoritative info in search results while keeping the bad stuff off of those surfaces. Pinterest doesn't even show you any results other than official information when you search for "vaccines." I don't know why you guys can't figure this out.
Meta's interactions with the Government extended beyond the White House. In particular, on June 1, 2021, Meta emailed the CDC, explaining that it had established a "misinfo claims portal" in which selected CDC personnel who had been "whitelisted" for access to the portal could submit requests to have particular posts taken down from Facebook. The cover email explained that Meta wanted to ensure that "everyone who had been whitelisted" had "all the info they need to start submitting claims." The email included an attached file explaining, in a set of slides, how the "Facebook Content Request System: Government Reporting System" worked. An authorized CDC employee would use the designated URL for accessing the system-www.facebook.com/xtakedowns/login-and then enter his or her credentials. Once the user was logged into the system, he or she could select from a menu of pre-programmed reasons for requesting removal of the offending content, such as "Covid Misinformation," "Vaccine Discouragement," and "Covid Vaccine Misinformation." After selecting from among these options, the user would be directed to submit "the relevant violating URLs" that the user wanted taken down, up to a maximum of 20 in a single report. After submitting the request, the user would receive a confirmation email with a reference number to allow for tracking and follow-up.
On July 15, 2021, Surgeon General Vivek Murthy and White House Press Secretary Jen Psaki appeared at a joint press briefing concerning the Government's response to Covid. Among other things, Surgeon General Murthy asked social media companies "to consistently take action against misinformation super-spreaders on their platforms." In an apparent specific reference to the so-called disinformation dozen-which specifically includes Kennedy-Psaki referred to "12 people who are producing 65 percent of antivaccine misinformation on social media platforms." In an apparent reference to Kennedy-who had been banned from Meta's Instagram platform-Psaki noted that these 12 "remain[ed] active on Facebook, despite some even being banned on other platforms" that "Facebook owns."
The next day, July 16, Meta employees had a call with the Surgeon General's office to discuss Meta's actions against "health misinformation." During the call, Meta specifically touted its earlier enforcement against Kennedy, claiming that, after he was banned from Instagram, "[h]e then stopped posting on [Facebook] about vaccines at all."
One week later, on July 23, Meta sent an email to various persons in the Department of Health and Human Services ("HHS"), following up on a meeting with them earlier that day to "take stock after the past week." In particular, the email summarized "steps taken to further address the 'disinfo dozen,'" including removing 17 additional "Pages, Groups, and [Instagram] accounts tied to the disinfo dozen," with the result that "every member of the disinfo dozen . . . had at least one such entity removed." Meta reiterated that it had heard HHS's "call for us to do more," and it said that it would reach out "to schedule the deeper dive on how to best measure Covid related content." Meta underscored that it and HHS had "a strong shared interest to work together" and that it would "strive to do all [it] can to meet our shared goals."
On August 6, 2021, Meta employees stated, in an internal email, that Meta was moving forward with a number of recommendations for dealing with posts with Covid-related "misinfo" or posts that were "misinfo adjacent." The first, listed as "Option 1a", was to remove "assets linked to Groups / Pages / Profiles / Accounts that have been removed for COVID misinfo violations" from users' recommendations. As an example, Meta stated "RFK Jr.'s [Instagram] Account is removed, so his [Facebook] Page will be non-recommendable." This option was recommended as a "stop-gap measure specifically targeting Disinfo Dozen assets." On August 17, 2022, Meta ultimately removed CHD from its Facebook and Instagram platforms entirely.
C
CHD filed this action in August 2020. On December 4, 2020, CHD filed its second amended complaint, alleging violations of the First and Fifth Amendments, along with violations of the Racketeer Influenced and Corrupt Organizations Act ("RICO") and the Lanham Act. The complaint named as Defendants Meta, Zuckerberg, and two of Meta's so-called "fact-checking" organizations, Science Feedback and the Poynter Institute for Media Studies, Inc. ("Poynter Institute"). Seeking declaratory, injunctive, and monetary relief, CHD alleged, inter alia, that Defendants were either working in concert with, or under compulsion from, the Federal Government to suppress CHD's speech on Meta's platforms and to prevent CHD from fundraising on those platforms.
On June 29, 2021, the district court dismissed the operative complaint, and the court entered final judgment the next day. The district court dismissed Science Feedback without prejudice for lack of service, and it dismissed the remaining Defendants with prejudice for failure to state a claim upon which relief may be granted. See FED. R. CIV. P. 12(b)(6). With respect to CHD's constitutional claims, the district court held that CHD had failed to allege sufficient facts to raise a plausible inference that Meta or Zuckerberg had "worked in concert with the CDC to censor CHD's speech, retaliate against CHD, or otherwise violate CHD's constitutional rights," nor had CHD "alleged facts showing government coercion sufficient to deem [Meta] or Zuckerberg a federal actor." Absent state action, the district court held, any constitutional claims necessarily failed. The district court dismissed the remaining claims in the case on a variety of other grounds.
CHD timely appealed the district court's dismissal of its suit. We have jurisdiction under 28 U.S.C. § 1291.
II
I agree that CHD cannot assert a Bivens claim against Defendants for monetary damages based on alleged violations of its First Amendment rights. See Egbert v. Boule, 596 U.S. 482, 498-99 (2022) (generally declining to extend a Bivens remedy "to alleged First Amendment violations"). But as we have recognized, claims for injunctive and declaratory relief, unlike claims for damages, do not rely on the Bivens cause of action. See Ministerio Roca Solida v. McKelvey, 820 F.3d 1090, 1094 (9th Cir. 2016) ("The only remedy available in a Bivens action is an award for monetary damages from defendants in their individual capacities." (citation omitted)). Rather, "[t]he ability to sue to enjoin unconstitutional actions by state and federal officers is the creation of courts of equity, and reflects a long history of judicial review of illegal executive action, tracing back to England." Armstrong v. Exceptional Child Ctr., Inc., 575 U.S. 320, 327 (2015). Indeed, in Correctional Services Corp. v. Malesko, 534 U.S. 61 (2001), the Supreme Court declined to recognize a Bivens remedy against "a private corporation operating a halfway house under contract with the Bureau of Prisons," but it then went on to note that "suits in federal court for injunctive relief" remained available, because "injunctive relief has long been recognized as the proper means for preventing entities"- presumably including private parties that qualify as state actors-"from acting unconstitutionally." Id. at 63, 74; see also Lima v. U.S. Dep't of Educ., 947 F.3d 1122, 1127-28 & n.6 (9th Cir. 2020) (rejecting, based on Malesko, a Bivens damages claim against the private corporate defendant, but then rejecting on the merits the plaintiff's claim for equitable relief against that defendant as an alleged state actor acting unconstitutionally).
Of the various grounds for dismissal given by the district court, the only one that would support rejecting CHD's claim for injunctive and declaratory relief concerning alleged First Amendment violations is the district court's conclusion that Meta and Zuckerberg were not state actors for purposes of the First Amendment. As the Supreme Court has squarely held, the First Amendment's "Free Speech Clause prohibits only governmental abridgment of speech" and "does not prohibit private abridgment of speech." Manhattan Cmty. Access Corp. v. Halleck, 587 U.S. 802, 808 (2019). To qualify as a constitutional violation, therefore, a particular deprivation of a constitutional right must be "fairly attributable to the State." Lugar v. Edmondson Oil Co., 457 U.S. 922, 937 (1982); see also Lindke v. Freed, 601 U.S. 187, 198 (2024). The Court has articulated a number of alternative formulas under which conduct may be fairly attributable to the state, with the details of those various tests reflecting the relevant features of some of the distinct specific contexts in which the state-action question has often arisen. See Lugar, 457 U.S. at 938-39 (noting, for example, the "public function test," the "state compulsion test," the "nexus test," and the "joint action test" (citations and internal quotation marks omitted)). The ultimate inquiry, however, remains whether the challenged acts are "attributable to the State" in the sense that they are "traceable to the State's power or authority." Lindke, 601 U.S. at 198.
The district court dismissed the claims against Poynter solely on the ground that it (like Meta) was a private corporation that, under Malesko, could not be sued in a Bivens action. The district court also recognized, however, that CHD's allegations against Poynter were very limited and that most of the challenged conduct was allegedly committed by Meta. Against this backdrop, I think it is fair to say that the logic of the district court's no-state-action ruling as to Meta and Zuckerberg would necessarily extend to Poynter as well, even if the district court does not itself appear explicitly to have made this point.
Because-as the variety of alternative formulas underscores-the state-action inquiry often depends upon specific features of the context at issue, I think it is important to begin by first setting out in some detail the unique legal context that provides the backdrop for this case. Although the majority deems the entirety of a massive platform such as Facebook as being in all respects the equivalent to a big newspaper, see Opin. at 22, the analogy is not entirely apt. As I shall explain in Section III, Meta's truly gargantuan platforms simply could not exist in anything resembling their current form without the legal immunity that the Federal Government has afforded to internet platforms under § 230 of the Communications Act of 1934, as added by the Communications Decency Act of 1996. Thereafter, in Section IV, I will explain how that context-specific feature factors into the overall question of whether, on the specific facts at issue here, CHD can adequately plead the requisite state action.
III
Meta is in the business of transmitting, on a vast scale, the publicly available speech of others, primarily through its Facebook and Instagram platforms. As quickly became apparent in the early days of the internet, operating any such open platform for the speech of third parties presents very substantial liability risks that, if the platform became large enough, would be practically impossible to manage or to effectively mitigate. Congress's solution was § 230, which we have construed to provide broad immunity to internet platforms hosting third-party speech.
A
In assigning liability for transmitting defamatory communications, the common law generally distinguishes among different classes of persons based on their role in creating, or their knowledge of, the contents of those communications. The greatest level of responsibility applies to the "composer or original publisher of a defamatory statement, such as the author, printer or publishing house," because that person "usually knows or can find out whether a statement in a work produced by him is defamatory or capable of a defamatory import." RESTATEMENT (SECOND) OF TORTS § 581 cmt. c. A lesser degree of responsibility is applied to those, such as newsstands, bookstores, and libraries, who are in the business of distributing large volumes of expressive content prepared exclusively by third parties. Because such persons cannot be expected to screen in advance the content of every book, periodical, and article they distribute, the common law assigns them liability for defamation only if "there are special circumstances that should warn the dealer that a particular publication is defamatory." Id. cmt. d. But no liability for defamation is assigned to a person or entity, such as a "telephone company," that "merely makes available to another equipment or facilities that he may use himself for general communication purposes." Id. cmt. b. These three categories of persons have sometimes been respectively referred to as "publishers," "distributors," and "conduits," see, e.g., Austin v. CrystalTech Web Hosting, 125 P.3d 389, 392 (Ariz. Ct. App. 2005); Eugene Volokh, Treating Social Media Like Common Carriers?, 1 J. OF FREE SPEECH L. 377, 455 (2021), and I will use that same shorthand here.
Meta and others operating social media platforms do not fit neatly into this taxonomy. Although in one sense they merely provide "equipment or facilities" that third parties may use "for general communication purposes," the facilities at issue here voluntarily disseminate those communications in many cases to the world at large. Because Meta knows, or can readily know, the content of those communications, and is under no legal obligation to transmit them, it cannot be classified as a mere conduit. Cf. RESTATEMENT (SECOND) OF TORTS § 581 cmt. f (stating that a person or entity (such as a telegraph company) that provides communication-specific assistance in transmitting a particular statement whose contents are known or accessible to it may be liable for defamation if "he knows or has reason to know that the message is libelous"); see also id. § 612 (providing, however, a limited privilege that further limits liability for such transmitters).
In many senses, Meta most resembles a distributor, because it transmits a truly enormous volume of third-party content that could not feasibly be reviewed in advance and that it plays no role in creating. But, as this case illustrates, Meta (like many other platform operators) also manages the content on its websites in ways that arguably go beyond that of a traditional distributor, such as a bookstore or newsstand, and that begin to resemble the actions of a publisher. It is perhaps therefore unsurprising that, even in the early days of the internet, at least one court concluded that a company operating a "computer bulletin board" could be classified as a "publisher," rather than a "distributor," if it "h[o]ld[s] itself out to the public and its members as controlling the content" of that platform and "implement[s] this control through [an] automatic software screening program" or through "Guidelines" that it uses to remove content. Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *1, 4 (N.Y. Sup. Ct. May 24, 1995). By opting for the "benefits" of this heightened degree of "editorial control," Stratton Oakmont held, the platform at issue there "ha[d] opened it up to a greater liability" than a mere distributor. Id. at *5. Such a rule, of course, would likely spell the end of the internet as we know it: for a variety of reasons, virtually no one operating (or using) a platform would want there to be no controls over what content may be posted, but under Stratton Oakmont, the use of such controls could result in crushing publisher-level liability for all third-party content on the platform.
B
Congress promptly acted to resolve this problem by adding a new § 230 to the Communications Act of 1934, which has been classified as § 230 of the unenacted title 47 of the United States Code. Section 230 accomplishes that goal by first establishing certain rules limiting internet platforms' liability for posting or removing third-party content and then expressly preempting any contrary state or local law. See 47 U.S.C. § 230(e)(3) ("No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.").
Although we have frequently referred to the statute as "Section 230 of the Communications Decency Act," Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 69 F.4th 665, 670 (9th Cir. 2023); see also Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1161 (9th Cir. 2008) (en banc), that is a misnomer. Title V of the Telecommunications Act of 1996 is captioned as the "Communications Decency Act of 1996," see Pub. L. No. 104-104, § 501, 110 Stat. 56, 133 (1996), and § 509 of that title added § 230 to the Communications Act of 1934, which has been classified to the unenacted 47 U.S.C. § 230. See id. § 509, 110 Stat. at 137. The statute can thus properly be referred to either as § 230 of the Communications Act of 1934 or as § 230 of Title 47, but not as § 230 of the Communications Decency Act.
Section 230's general rules for limiting liability are contained in subsection (c), which provides as follows:
(c) Protection for "Good Samaritan" Blocking and Screening of Offensive Material
(1) Treatment of Publisher or Speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil Liability
No provider or user of an interactive computer service shall be held liable on account of-
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph [(A)].
47 U.S.C. § 230(c).
The statute actually says "paragraph (1)," but that is obviously a scrivener's error. See U.S. Nat. Bank of Oregon v. Independent Ins. Agents of Am., Inc., 508 U.S. 439, 462 (1993).
In its current form, the statute carves out certain specified exceptions from its limitation on civil liability, such as for conduct that violates "any law pertaining to intellectual property" or that violates certain sex-trafficking laws. See 47 U.S.C. § 230(e)(2), (5)(A).
Section 230(c)(1) squarely rejects Stratton Oakmont by flatly providing that no "interactive computer service shall be treated as the publisher or speaker" of third-party content that it hosts or transmits. 47 U.S.C. § 230(c)(1); see also id. § 230(f)(2) (broadly defining an "interactive computer service" as "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server"). By its terms, this categorical rule applies regardless of whether the platform uses the sort of controls to screen and remove content that were at issue in Stratton Oakmont. But to be sure that platforms would have the ability, inter alia, to use "blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material," id. § 230(b)(4) (declaring the statute's "policy"), § 230 goes further and prohibits platforms from being held liable "on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Id. § 230(c)(2)(A).
This court has construed the resulting immunity conferred by § 230 very broadly. We have held that subsection (c)(1)'s rule that a platform operator shall not be "treated as the publisher or speaker of any information provided" by a third party does not merely prohibit the sort of publisher-liability that was at issue in Stratton Oakmont. See Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1104 (9th Cir. 2009) (expressly rejecting the argument that, "because Congress enacted section 230 to overrule Stratton Oakmont, which held an internet service provider liable as a primary publisher, not a distributor, the statute does no more than overrule that decision's application of publisher liability" and that § 230 therefore leaves distributor liability intact). Rather, we have held, § 230(c)(1) broadly "precludes courts from treating internet service providers as publishers not just for the purposes of defamation law, with its particular distinction between primary and secondary publishers, but in general." Id. As we explained:
[W]hat matters is not the name of the cause of action-defamation versus negligence versus intentional infliction of emotional distress-what matters is whether the cause of action inherently requires the court to treat the defendant as the "publisher or speaker" of content provided by another. To put it another way, courts must ask whether the duty that the plaintiff alleges the defendant violated derives from the defendant's status or conduct as a "publisher or speaker." If it does, section 230(c)(1) precludes liability.
Id. at 1101-02.
Under this reading of § 230, we held that a plaintiff's cause of action impermissibly treats an internet service provider as the "publisher or speaker" of third-party content if it necessarily rests on a duty concerning "reviewing, editing, [or] deciding whether to publish or to withdraw from publication third-party content." Barnes, 570 F.3d at 1102. As noted above, the paradigmatic example of "a cause of action that treats a website proprietor as a publisher" within the meaning of § 230 "is a defamation action founded on the [proprietor's] hosting of defamatory third-party content," Doe v. Internet Brands, Inc., 824 F.3d 846, 851 (9th Cir. 2016), including hosting undertaken as a traditional "publisher" or as a "distributor," Barnes, 570 F.3d at 1104. But under the analysis we adopted in Barnes, § 230(c)(1)'s immunity also extends to any duty that "would necessarily require an internet company to monitor third-party content," HomeAway.com, Inc. v. City of Santa Monica, 918 F.3d 676, 682 (9th Cir. 2019) (emphasis added), or to remove such content, Barnes, 570 F.3d at 1103. As we stated in Barnes, "removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove." Id. "Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties." Id. at 1105 (emphasis added).
Barnes rejected the argument that, by construing subsection (c)(1)'s immunity as extending to actions concerning the monitoring and removal of content, we had rendered superfluous the distinct immunity set forth in subsection (c)(2), which directly concerns potential liability for "restrict[ing] access to or availability of material." 47 U.S.C. § 230(c)(2); see Barnes, 570 F.3d at 1105. We explained that, unlike the immunity granted by § 230(c)(1), the further immunity conferred by § 230(c)(2) would apply to content that was partly created by the internet provider itself and to access restrictions that went beyond what could fairly be characterized as "publishing or speaking." Barnes, 570 F.3d at 1105.
Our broad construction of § 230 has been subject to substantial criticism in a number of specific respects, but I am bound by our settled precedent, and I do not question it here. Moreover, regardless of any criticisms about the precise scope of that immunity, the central point, for present purposes, remains indisputable: § 230 confers a statutory immunity without which Meta could not practicably operate gigantic platforms such as Facebook and Instagram. The potential liability for defamatory content alone-not to mention other theories of platform host liability-would be so crushing as to preclude the operation of these platforms in anything resembling their current form. And, importantly, the immunity granted by § 230 is purely an act of congressional grace, because Meta has no plausible claim to a constitutional entitlement to full immunity for publishing or distributing constitutionally unprotected defamatory content. Cf. Woodhull Freedom Found. v. United States, 72 F.4th 1286, 1299 (D.C. Cir. 2023) (holding that Congress's denial of § 230 immunity to internet-provider conduct that amounts to aiding and abetting sex trafficking is not overbroad or facially unconstitutional).
In this respect, Meta's position stands in sharp contrast to that of a traditional publisher, such as a newspaper. A newspaper publisher's editorial decisions over the third-party content published in that paper are not broadly immunized by statute from constitutionally permissible liability. And because newspaper editors must, as a consequence, consider the potential liability associated with each third-party piece they publish, they necessarily limit and individually select the third-party speech that they are willing to include. In the absence of § 230's immunity, Meta would have to take comparable steps to manage and limit the enormous potential liability that could arise from its platforms' hosting of third-party speech by behaving more like a traditional newspaper. (At the very least, it would have to behave more like a traditional newsstand or bookstore, if one assumes, contrary to Stratton Oakmont, that the use of algorithmic tools and of other content-management measures is consistent with being a mere distributor rather than a publisher.) But in all events, in a world without § 230, Meta would almost certainly have to substantially reduce the massive scale of its third-party content hosting; it would presumably be much more pro-active than it already is about screening out content; and it would be much more selective about whom it lets use its platforms and under what conditions. But with § 230's singular and broad immunity in place, Meta is freed up to exercise practical-and potentially arbitrary-control over the hosted content of the speech of more than 100 million people in the United States alone. In effect, by virtue of the special treatment afforded under § 230 to its massive platforms, Meta has been given the immunity of a conduit for the billions of postings that (in conduit-like fashion) it hosts, but that conduit-type immunity is coupled with what, in many respects, is functionally the editorial power of a publisher over everything on the platform.
The truly gigantic scale of Meta's platforms, and the enormous power that Meta thereby exercises over the speech of others, are thus direct consequences of, and critically dependent upon, the distinctive immunity reflected in § 230. That is, because such massive third-party-speech platforms could not operate on such a scale in the absence of something like § 230, the very ability of Meta to exercise such unrestrained power to censor the speech of so many tens of millions of other people exists only by virtue of the legislative grace reflected in § 230's broad immunity. Moreover, as the above discussion makes clear, it was Congress's declared purpose, in conferring such immunity, to allow platform operators to exercise this sort of wide discretion about what speech to allow and what to remove. In this respect, the immunity granted by § 230 differs critically from other government-enabled benefits, such as the limited liability associated with the corporate form. The generic benefits of incorporation are available to all for nearly every kind of substantive endeavor, and the limitation of liability associated with incorporation thus constitutes a form of generally applicable non-speech regulation. In sharp contrast, both in its purpose and in its effect, § 230's immunity is entirely a speech-related benefit-it is, by its very design, an immunity created precisely to give its beneficiaries the practical ability to censor the speech of large numbers of other persons. Against this backdrop, whenever Meta selectively censors the speech of third parties on its massive platforms, it is quite literally exercising a government-conferred special power over the speech of millions of others. The same simply cannot be said of newspapers making decisions about what stories to run or bookstores choosing what books to carry.
The majority suggests that if § 230 were "enough for state action, every large government contractor would be a state actor." Opin. at 27. However, as I shall explain below, I do not contend that § 230 alone suffices to establish state action here. See infra at 85. But the majority is also wrong in suggesting that the Government-conferred benefit here is comparable to the others that it cites. Companies that are dependent on a "favorable regulatory environment" or on significant Government contracts do not rely on a speech-related benefit that was purposely created to facilitate the suppression of third parties' speech.
I do not suggest that there is anything inappropriate in Meta's having taken advantage of § 230's immunity in building its mega-platforms. On the contrary, the fact that it and other companies have built such widely accessible platforms has created unprecedented practical opportunities for ordinary individuals to share their ideas with the world at large. That is, in a sense, exactly what § 230 aimed to accomplish, and in that particular respect the statute has been a success. But it is important to keep in mind that the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled.
IV
In my view, this key fact (viz., that Meta is effectively exercising a distinctive government-conferred power over others' speech when it decides whether and how to censor third-party speech on its vast platforms) makes a crucial difference in the state-action analysis. As I shall explain, the particular state-action test that is most relevant here is the one applied in Skinner v. Railway Labor Executives Ass'n, 489 U.S. 602 (1989). As relevant here, Skinner establishes that, where a private party exercises a distinctive government-conferred immunized power that is specifically targeted at particular rights of third parties, and those particular rights are ones that are protected from governmental infringement by the Constitution, then that private party's interactions with the Government as to how to exercise that power over those third parties' constitutional rights implicate constitutional standards and must comply with those standards. And under that analysis, CHD can adequately plead state action here.
A
Skinner involved two sets of regulations that were adopted to address the serious safety concerns presented by intoxicated railroad workers. 489 U.S. at 608-09. The first set, contained in "Subpart C" of the applicable regulations, imposed mandatory drug testing on employees involved in specified types of train accidents. Id. at 609. The second set, in "Subpart D," created a "permissive" regime of drug testing that was available against persons who were not covered by the mandatory provisions of Subpart C. Id. at 611 (emphasis added). Specifically, Subpart D authorized railroads to require drug testing of (1) an employee as to whom one or two supervisors had a "reasonable suspicion" that the employee was under the influence; or (2) an employee as to whom a supervisor had a reasonable suspicion that the employee contributed to an accident's "occurrence or severity." Id. Various organizations representing railroad workers challenged these regulations as a violation of the Fourth Amendment. Id. at 612. The district court rejected these challenges, but we reversed. Id. We concluded that, with the exception of the portion of Subpart D that authorized drug tests upon reasonable suspicion of impairment, the regulations did not require the "showing of individualized suspicion" that we held was "essential" to conducting such a search under the Fourth Amendment. Id. at 613. The Supreme Court then reversed our judgment to the extent that it had invalidated Subpart D.
At the outset, the Court had to address the threshold question whether drug tests conducted under these regulations implicated the protections of the Fourth Amendment. As the Court explained, "the Fourth Amendment does not apply to a search or seizure, even an arbitrary one, effected by a private party on his own initiative," but "the Amendment protects against such intrusions if the private party acted as an instrument or agent of the Government." Skinner, 489 U.S. at 614. In applying this overall standard, the Court first noted that the answer was easy as to the mandatory testing requirements in Subpart C: "A railroad that complies with the provisions of Subpart C of the regulations does so by compulsion of sovereign authority, and the lawfulness of its acts is controlled by the Fourth Amendment." Id. By contrast, the state-action issue with respect to Subpart D required a more extensive analysis.
As an initial matter, the Court explicitly rejected the defendants' argument that "the Fourth Amendment is not implicated by Subpart D of the regulations, as nothing in Subpart D compels any testing by private railroads." 489 U.S. at 614 (emphasis added). As the Court explained, "[t]he fact that the Government has not compelled a private party to perform a search does not, by itself, establish that the search is a private one." Id. at 615. Even in the absence of compulsion, a private party might be "deemed an agent or instrument of the Government for Fourth Amendment purposes" if the "degree of the Government's participation in the private party's activities" was sufficient, "in light of all the circumstances," to trigger the Constitution's protections. Id. at 614-15.
In concluding that "tests conducted by private railroads in reliance on Subpart D" should not be viewed as being "primarily the result of private initiative," the Court emphasized several considerations. First, the regulations in Subpart D broadly preempted "state laws, rules, or regulations covering the same subject matter," including rights recognized in collective bargaining agreements. 489 U.S. at 615. Indeed, the regulations specifically stated that railroads could "not bargain away the authority to perform tests granted by Subpart D." Id. By these measures, "[t]he Government ha[d] removed all legal barriers to the testing authorized by Subpart D." Id. Second, the regulations gave the Government "the right to receive certain biological samples and test results procured by railroads pursuant to Subpart D." Id. The Government had thereby "made plain not only its strong preference for testing, but also its desire to share the fruits of such intrusions." Id. Third, the regime created by Subpart D was compulsive vis-à-vis the employee: "a covered employee" was not "free to decline his employer's request to submit to breath or urine tests under the conditions set forth in Subpart D." Id. These three considerations (the Government's conferral of a special private power against others that was broadly immunized; the Government's interest in, and benefit from, the exercise of that power; and the compulsive nature of that power when wielded against other private parties) led the Court to conclude that a railroad's invocation of that power against its employees was sufficiently done with "the Government's encouragement, endorsement, and participation" to "implicate the Fourth Amendment." Id. at 615-16. Having found that the Fourth Amendment applied to searches conducted under Subpart D, the Court then concluded that the regulations did not violate the Fourth Amendment. Id. at 633-34.
B
Consideration of the same three key factors discussed in Skinner strongly supports the view that Meta's alleged interactions with the Government here are sufficient to implicate the First Amendment rights of CHD and those it represents, including Kennedy.
As I have explained, Meta here was not simply exercising the normal editorial control that goes with being an ordinary publisher who sifts through pre-publication submissions and affirmatively decides what to include in its publication. There are many such websites, and their exercise of such conventional editorial judgment and responsibility means that, from a practical point of view, their very ability to exist and to operate does not depend upon § 230's grace (even if they are a beneficiary of it). By contrast, Meta's construction of massive and widely available platforms for the hosting of the speech of enormous numbers of third parties necessarily means that those platforms exist and operate only by virtue of the immunity conferred by § 230. Thus, the authority to manage content on such mega-platforms is, in a very real sense, a government-conferred power, and the Government, through its broad preemption of "state laws, rules, or regulations covering the same subject matter," has intentionally "removed all legal barriers" to Meta's exercise of that power over the speech of others. Skinner, 489 U.S. at 615. And, as in Skinner, that government-conferred power is one that, by its very design, is specifically directed at third-party rights that are protected under the Constitution from encroachment by the Government. In that sense, the immunized power conferred is not akin, for example, to the generic benefits of the liability limitations of the corporate form. Section 230, by its structure and design, grants an immunized power specifically directed at censoring the speech of others.
Meta is therefore wrong in suggesting that this case does not involve the "exercise of some right or privilege created by the State." O'Handley v. Weber, 62 F.4th 1145, 1156 (9th Cir. 2023) (stating that a threshold question in the state-action inquiry is "whether the alleged constitutional violation was caused by the 'exercise of some right or privilege created by the State or by a rule of conduct imposed by the State or by a person for whom the State is responsible'" (citing Lugar, 457 U.S. at 937)).
Moreover, Meta's exercise of that power is clearly coercive from the point of view of the third parties whose speech is targeted. Like the railway employees in Skinner, they are not "free to decline" to have their speech removed from the platform if Meta chooses to do so. Skinner, 489 U.S. at 615.
The central question, then, is whether Skinner's last remaining factor (namely, governmental interest in, and direct benefit from, specific exercises of that power) is satisfied here. In addressing this factor, I think it is important to note a critical difference between this case and Skinner. In Skinner, the Government's interest in, and benefit from, the testing power conferred in Subpart D was built into the regulations themselves, because those regulations expressly gave the Government the right to obtain certain results of those tests. Id. The same cannot be said of the regime created by § 230, which provides for no formal governmental role in the exercise of the powers that it makes possible. Accordingly, unlike in Skinner, this important state-action factor is not automatically satisfied simply by virtue of the structure of the legal regime that the Government has created. On that basis, the district court below distinguished Skinner and held that it did not support a finding of state action here. The majority relies on similar reasoning, noting that § 230 neutrally protects whatever editorial decisions Meta makes with respect to third-party speech on its platforms. See Opin. at 29. But this reasoning overlooks the possibility that, even though a governmental benefit is not directly built into § 230's legal regime, the same relevant sort of governmental interest and benefit may be supplied with respect to particular communications and speakers by virtue of specific interactions between Meta and the Government concerning such communications or speakers. In view of the factual contentions summarized earlier, that line has plainly been crossed here. In particular, three distinct types of specific alleged interactions between Meta and the Government, taken together, strongly confirm the Government's interest in, and benefit from, many of the particular challenged exercises of that power. Skinner, 489 U.S. at 615.
First, the above-described allegations confirm that high-level Government officials made targeted requests, both publicly and privately, for Meta to take action specifically against the speech of CHD and Kennedy. In a private email, Flaherty pointedly complained that Meta was not doing enough to "stop[] the disinfo dozen," which was a clear reference to CHD and Kennedy. Psaki and Murthy likewise publicly called for Meta and other platforms to target the same "12 people who are producing 65 percent of antivaccine misinformation." In its private reassurances to White House and other Executive Branch officials, Meta repeatedly and specifically touted the targeted actions it had taken against CHD and Kennedy. For example, in an email to Flaherty, Meta attached a CHD Facebook post as an example of the sort of truthful "vaccine hesitan[t]" speech that it was targeting, just as it knew Flaherty wanted it to do. The day after Murthy's and Psaki's press conference, Meta followed up with Murthy's office and, during that conversation, specifically noted its targeted actions against Kennedy's vaccine-related speech. The following week, it again emphasized, in discussions with HHS, the additional steps it was taking against "the disinfo dozen." On this record, the Government expressed its specific interest in suppressing particular speech of particular speakers, including CHD and Kennedy, and Meta responded by underscoring the steps it had taken, and planned to take, to accomplish just that.
Second, under the allegations here, Meta worked extensively with Executive Branch officials to adjust and refine its criteria and practices with respect to limiting or suppressing vaccine-related speech. These were not simply informational exchanges in which Meta passed along its internal criteria for addressing such speech. Rather, Meta engaged in a dialogue with Executive Branch officials to develop and "begin enforcing" new policies with respect to Covid-vaccine-related speech. In particular, there was extensive discussion with Government officials about what "levers" to exercise against truthful "vaccine hesitancy content." And the Government was hardly a passive participant in these discussions. On the contrary, Flaherty and others repeatedly chastised Meta for not doing enough to suppress anti-vaccine content, unfavorably comparing Meta to other social media companies and underscoring the importance of Meta "mak[ing] people like me think you're taking action." The allegations here raise a plausible inference that Meta responded to such jawboning with appeasing efforts at modifying its policies and practices with respect to such speech.
Third, Meta went so far as to create an actual portal through which pre-selected Government officials could log in and then submit targeted requests for specific Covid-vaccine-related posts to be taken down. On its face, this system extended to truthful speech that the "whitelisted" Government officials nonetheless deemed to promote "Vaccine Discouragement."
It is also important to note that all of these actions took place against a backdrop of continuous legislative threats, at multiple levels, to limit or abolish the § 230 immunity upon which Meta's very ability to operate its mega-platforms critically depends. These included congressional hearings in both houses, at which Zuckerberg and other social media CEOs were called to testify, as well as statements from high-ranking officials including the House Speaker and relevant committee chairs in both houses. Although, by constitutional design, the legislative process is cumbersome, and legislative threats are therefore harder to carry out than others, the Speaker trenchantly observed that, when legislators raise the subject of § 230 reform with social media companies, "you really get their attention." While I agree that these various legislative comments and actions, taken in isolation, do not themselves constitute governmental compulsion of action under the traditional "'state compulsion' test," Lugar, 457 U.S. at 939; cf. Kennedy v. Warren, 66 F.4th 1199, 1207-12 (9th Cir. 2023), that is not dispositive. The Supreme Court held that state action was present in Skinner even though compulsion was concededly absent in that case. See Skinner, 489 U.S. at 615 (finding state action even while agreeing that the governmental regulations in Subpart D did not "compel[] a private party to perform a search"). And these frequent and high-level threats are certainly relevant in considering whether, "in light of all the circumstances," Meta's challenged actions here "are attributable to the Government or its agents" under Skinner's standards. Id. at 614 (citation omitted).
Taking these considerations together, the Government "made plain" its "strong preference" for particular exercises of Meta's § 230-immunized power over third-party speech on its mega-platforms. Skinner, 489 U.S. at 615. The Government directly communicated to Meta its specific interest in Meta acting to limit or remove (1) content that expressed particular Government-disfavored viewpoints on a specific subject (viz., vaccines in general and the Covid vaccines in particular), and (2) the speech of CHD and Kennedy on that subject in particular. With awareness of that focused interest, and of the benefits that the Government hoped to obtain if such speech were suppressed, Meta then affirmatively worked with the Government to refine Meta's policies and practices concerning such speech in a way that would be satisfactory to the Government, and it repeatedly touted to the Government its specific actions directly targeted against CHD and Kennedy. On these situation-specific facts, I think that Skinner's last remaining factor (a governmental interest in, and benefit from, particular exercises of the immunized power) is satisfied here.
A different and much more difficult state-action question would be presented if Meta had refrained from such affirmative interactions with the Government and had instead been merely the passive recipient of criticism or haranguing from Government officials.
Accordingly, I would hold that, considered "in light of all the circumstances," Meta's interactions with the Government with respect to the suppression of specific categories of vaccine-related speech, and in particular the speech of CHD and Kennedy, "suffice to implicate the [First] Amendment." Skinner, 489 U.S. at 614, 616 (citation omitted). Moreover, that conclusion makes perfect sense when viewed from the converse perspective of what the Government must not do when it interacts with such mega-platforms. Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like. The Skinner-based analysis set forth above properly recognizes that, when the Government does so, and the platform operator responds accommodatingly, the First Amendment is implicated. Whether First Amendment standards have been violated was not reached by the district court and therefore is not squarely before us.
As I note below, however, CHD's allegations raise a plausible inference that the Government sought to restrict CHD's protected speech for the illegitimate purpose of suppressing disfavored speech that interfered with its policy objectives. See infra at 87.
Because I think that CHD could amend its complaint in a manner that states a cause of action for injunctive and declaratory relief based on the theory that Meta's above-described interactions with the Government implicate the First Amendment rights of CHD, Kennedy, and CHD's other members, I would reverse the district court's judgment in favor of Meta to the extent it held to the contrary. Because, however, CHD's showing on this score extends only to Meta and not to Zuckerberg personally or to the Poynter Institute, I would affirm the dismissal of the direct injunctive claims against those two defendants. Of course, to the extent that CHD were to establish an ultimate entitlement to injunctive relief, Zuckerberg, the Poynter Institute, and others working in concert with Meta might nonetheless be incidentally covered by an injunction against Meta. And I would affirm, under Egbert, the dismissal of CHD's First-Amendment-based Bivens claim for monetary damages.
For many of the same reasons discussed above, CHD is clearly able to plead sufficient facts to assert Article III standing to seek injunctive and declaratory relief against Meta. See Lujan v. Defenders of Wildlife, 504 U.S. 555, 560-61 (1992) (noting that, at the pleading stage, the plaintiff may rely on "mere allegations" to establish the core elements of standing, which are (1) an injury in fact (2) that is fairly traceable to the defendant's challenged conduct and (3) that would be redressed by the requested relief); cf. Murthy v. Missouri, 144 S.Ct. 1972, 1986 (2024) (holding that, at the preliminary injunction stage, where "the parties have taken discovery," the plaintiff "must instead point to factual evidence"). CHD has properly rested its standing both on injuries to itself and injuries to members that it represents (such as Kennedy). See Hunt v. Washington State Apple Advert. Comm'n, 432 U.S. 333, 343 (1977). In contrast to Murthy, CHD has identified specific alleged instances in which Government interaction with Meta led to "discrete instance[s]" of censorship of CHD's and Kennedy's content. See Murthy, 144 S.Ct. at 1987. In February 2021, Meta responded to Slavitt's inquiry about restrictions on vaccine-hesitant content by stating that it would "begin enforcing" a new "policy" on that score, and it then proceeded to take down Kennedy's Instagram account the very next day. Two months later, Meta expressly reassured White House officials about the steps it was taking against vaccine-hesitant content, and it specifically attached, as an example, a post from CHD's Facebook page. A month later, a White House official complained that Meta was still not doing enough to stop the vaccine-hesitant speech of the "disinfo dozen," which included Kennedy. Murthy and Psaki then singled out the same dozen speakers, and Kennedy in particular, in their July 2021 press conference. That was followed by Meta informing HHS officials, a week later, that it had taken specific action against each one of the "disinfo dozen," including Kennedy, and thereafter Meta continued evaluating additional restrictions on Kennedy. And because, in contrast to Murthy, CHD seeks to enjoin the platform operator directly, it has "satisf[ied] traceability" by alleging that Meta continues to exclude CHD's and Kennedy's posts "under a policy that it adopted at the White House's behest," and an injunction directed at Meta will redress that injury. See Murthy, 144 S.Ct. at 1996-97 & n.11.
I would likewise affirm the dismissal of CHD's claim under the Takings Clause. CHD has made no comparable showing of state action with respect to its assertion that the disabling of its donate button on its Facebook page was somehow a violation of the Takings Clause.
C
None of the additional contentions raised by the majority or by Defendants supports a contrary view with respect to the state-action issue.
Defendants rely heavily on our decision in O'Handley v. Weber, 62 F.4th 1145 (9th Cir. 2023), in which we held that state action was not present when a state election official submitted a request to Twitter to remove a particular post questioning the integrity of California's elections. Id. at 1154, 1160-61. But O'Handley was not presented with, and did not consider, the points addressed here about the significance of § 230 immunity under Skinner. Indeed, O'Handley never even cited either § 230 or Skinner. As such, O'Handley is distinguishable and not controlling here. See Cooper Indus., Inc. v. Aviall Servs., Inc., 543 U.S. 157, 170 (2004) ("Questions which merely lurk in the record, neither brought to the attention of the court nor ruled upon, are not to be considered as having been so decided as to constitute precedents." (citation omitted)); Guerrero v. RJM Acquisitions LLC, 499 F.3d 926, 938 (9th Cir. 2007) (holding that "unstated assumptions on non-litigated issues are not precedential holdings binding future decisions" (citation omitted)).
The majority concludes that the overall facts alleged here do not plausibly reflect the sort of compulsion that the caselaw typically requires to establish state action under the state compulsion test. See Opin. at 22-27; cf. Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 68 (1963) (holding that actions of Rhode Island state commission, which "exhort[ed] booksellers" not to carry disfavored non-obscene titles, violated the First Amendment where the commission's communications were "phrased virtually as orders," were "invariably followed up by police visitations," and led distributor to acquiesce in a manner that the lower courts found "was not voluntary"). I am not entirely sure whether the majority is correct on this point, but I need not decide the issue, because it is ultimately irrelevant. As noted earlier, Skinner squarely held that state action was present there even in the absence of state compulsion. 489 U.S. at 615. And for the reasons that I have explained, the same is true here.
Although I thus do not reach the question of whether compulsion has been shown here, I note parenthetically that I am also not sure that the majority is correct in suggesting that, if compulsion had been established, Meta would not be a proper defendant for such a claim. See Opin. at 23. It may perhaps be true that the government-compelled private party is not the proper defendant in a suit for damages or in a suit challenging "governmental compulsion in the form of a generally applicable law." Sutton v. Providence St. Joseph Med. Ctr., 192 F.3d 826, 841 (9th Cir. 1999). And I recognize that the distributor was not named as a defendant in the suit for injunctive relief in Bantam Books. See Bantam Books, 372 U.S. at 60-61 (describing procedural history); see also Bantam Books, Inc. v. Sullivan, 176 A.2d 393, 395 (R.I. 1961) (noting that "[t]he distributor did not object to the commission's action and is not a party to the instant proceedings"). But I am not sure that the same conclusion follows if the compulsion test is applied in the unique context presented here, i.e., a suit for injunctive relief against a private party who, while exercising a government-granted ability to engage in mass censorship, is allegedly the subject of particularized coercive tactics from the Government. An injunction aimed at keeping the Government's coercive efforts away from such dangerous levers might conceivably be addressed either to the target of those efforts (thus counteracting them) or to the Government (or both). But, like the majority, I need not ultimately decide this issue.
Likewise, it does not matter whether the majority is correct in contending that the allegations here would not suffice to establish state action under the traditional "joint action test," Lugar, 457 U.S. at 939. That test, according to the majority, requires a showing that Meta agreed to take a "specific action on the government's say-so." See Opin. at 18 (emphasis added). Once again, the point is ultimately irrelevant. In Skinner, there was no such alleged agreement to violate any specific person's "rights in particular," see Opin. at 16 (citation omitted), and yet the Court found that state action was present. At most, the majority has established that particular alternative formulations of the state-action test, which were developed with different contexts in mind, are ill-suited to the unique circumstances presented here. That calls, as in Skinner, for a more tailored inquiry into whether, in light of those unique circumstances, state action is nonetheless present. As in Skinner, it is present here.
The majority also raises a broader concern that a finding of state action here would interfere with Meta's exercise of its own independent judgment over its platforms. Opin. at 18-19. Given that Meta may happen to share the Government's views that anti-vaccine speech and speakers should be limited or blocked on its platforms, the majority argues that Meta should not be disabled from implementing "those views simply because they happen to be shared by the government." See Opin. at 22. According to the majority, "Meta has a First Amendment right" to censor any speech on its platform with which it disagrees, see Opin. at 22 (emphasis added), and it is solely up to Meta "to decide what, if any, limits should apply to speech on those platforms," see Opin. at 31. Indeed, Meta contends, and the majority appears to agree, that, under the Supreme Court's recent decision in Moody v. NetChoice, LLC, 144 S.Ct. 2383 (2024), "all of the actions challenged here are protected under the First Amendment" (emphasis added). In light of these considerations, the majority suggests that the various state-action tests should be narrowly construed so as to preserve Meta's asserted First-Amendment-based right to freely censor speech on its platform. These arguments rest, in my view, on an overstated conception of Meta's relevant First Amendment rights, which do not give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.
It may well be true that an ordinary publisher or distributor would have a First Amendment right to acquiesce, if persuaded, in governmental requests not to publish or distribute particular works or speakers. The Court in Bantam Books put load-bearing weight on the fact that the Rhode Island courts had specifically found that the distributor's acquiescence in that case "was not voluntary." 372 U.S. at 68. It is therefore plausible to suppose that Bantam Books might have come out differently if the distributor had instead stated that it was persuaded by the state commission's views concerning what materials were worth distributing and that, agreeing with those views, the distributor affirmatively did not wish to promote the particular works at issue. Likewise, if a newspaper affirmatively chooses to be, in effect, a mouthpiece for a particular government or a particular official it supports, it may well have an absolute First Amendment right to do so. And to that extent, the majority would perhaps be correct in suggesting that a newspaper's constitutional right to opt to kill whatever article it wants cannot be overcome by relabeling, as "joint action," the newspaper's discussions with government officials over whether to bury a story.
But it does not follow from any of this that Meta has the exact same scope of constitutional freedom with respect to the speech of others on its mega-platforms. As I have repeatedly explained, when it comes to the operation of the sort of platforms at issue here, Meta simply does not occupy the same position as a traditional newspaper publisher or a book distributor. Rather, because its ability to operate its massive platform rests dispositively on the immunity granted as a matter of legislative grace in § 230, Meta is something of a novel legal chimera: it has the immunity of a conduit with respect to third-party speech, based precisely on the overriding legal premise that it is not a publisher; its platforms' massive scale and general availability to the public further make Meta resemble a conduit more than any sort of publisher; but Meta has, as a practical matter, a statutory freedom to suppress or delete any third-party speech while remaining liable only for its own affirmative speech. And, as the Supreme Court recently recognized, Meta is engaged in expressive activity protected by the First Amendment when it "curat[es]" Facebook's "News Feed" in a way that "create[s] a distinctive expressive offering." Moody, 144 S.Ct. at 2405. But I am aware of no historical precedent that meaningfully corresponds to such a hybrid entity, and I do not think we should simply assume that it has exactly the same constitutional rights with respect to third-party speech on its platforms as a newspaper publisher, a book distributor, or a parade organizer. Moody did not address the precise scope of Meta's First Amendment rights over its platform, see id. at 2407 (finding it unnecessary to resolve what level of scrutiny applied to the restrictions at issue there), and Moody did not confront or decide any question as to whether Meta has an absolute constitutional right to coordinate with the Government to suppress third-party speech on its platforms.
We likewise need not, and should not, decide in this case exactly what degree of First Amendment protection, if any, Meta has with respect to working with the Government to censor particular viewpoints or speakers on its platforms. It suffices for purposes of this case to note that the mega-platforms at issue here differ from traditional publishers or distributors in a critical respect that is directly relevant to the state-action question and that, in my view, warrants a different result, and that does so regardless of Meta's own invocation of the First Amendment. As I have explained, Meta would be better positioned to argue for the full constitutional freedom of a traditional publisher (including the freedom to agree to, and implement, the Government's censorship preferences) if it operated its website in all respects like a traditional publisher by individually reviewing, selecting, and limiting exactly what third-party speech it will publish. In such a circumstance, it would happen to have § 230 immunity, but (as with a newspaper) that immunity would not be essential to its very existence or ability to operate its platforms.
But in critical reliance on the Government's creation of an immunized censorship power, Meta instead chose to scale up its operations in a way that has produced gigantic platforms that comprise a unique assemblage of features that make it part conduit, part distributor, and part publisher. This central fact makes a difference. That is, I do not think that Meta's critical reliance on the government-created ability to engage in mass censorship is a factor that can properly be ignored either in the state-action inquiry or in assessing whether, like the above-described acquiescing newspaper publisher, Meta has, so to speak, a constitutional right to suppress third-party speech that the Government "persuades" it to censor. Although Meta's operational reliance on § 230's immunity is not alone enough to render Meta a state actor, that factor weighs in favor of a finding of state action when combined with other considerations. In particular, with this critical factor in place, if Meta then affirmatively engages with the Government as to how to exercise its government-granted authority in order to widely suppress particular subjects or speakers on its mega-platforms, that additional element suffices to cross over the state-action line and to implicate the First Amendment's protections with respect to the targeted speakers. And for that reason, I perceive no basis for concluding that Meta, in operating such unprecedented legally-hybrid platforms, has any sort of supervening constitutional right to team up with the Government to suppress the speech of particular speakers, or on particular topics, on such immunized mega-platforms.
The majority worries that treating Meta as a state actor here would contravene the underlying purpose of the state-action doctrine, which is to "protect[] a robust sphere of individual liberty" within which private actors may operate. See Opin. at 12 (quoting Halleck, 587 U.S. at 808). But in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government's hands away from the tempting levers of censorship on these vast platforms. To be sure, it means that Meta does not have the "liberty" to work together with the Government in deciding how to suppress the speech of millions of people, but Meta otherwise retains its full authority to operate its platform within the bounds of the law. A contrary rule would mean that the Government can create a special immunized power for private entities to suppress speech on a mass scale and then request and receive, from those private entities, an ability to influence the exercise of those levers of censorship. That would thwart the First Amendment's core purpose to "prevent[] the government from tilting public debate in a preferred direction." Moody, 144 S.Ct. at 2407 (simplified).
The majority suggests that finding state action here would produce a parade of horribles, because it would supposedly hamper the Government's ability to work with platform operators to restrict minors' access to pornographic speech or to address other types of speech as to which the Government has legitimate concerns. See Opin. at 30-31. But saying that the First Amendment is implicated is not the same as saying that it is violated. Where the category of speech at issue is either unprotected (e.g., child pornography, fraudulent advertising) or is otherwise subject to legitimate direct regulation by the Government, see Reno v. ACLU, 521 U.S. 844, 869 (1997) (reaffirming that the Government has "'a compelling interest in protecting the physical and psychological well-being of minors' which extend[s] to shielding them from indecent messages that are not obscene by adult standards" (citation omitted)), or where the Government's interest involves, for example, malign foreign actors operating outside the United States, see Agency for Int'l Dev. v. Alliance for Open Soc'y Int'l, Inc., 591 U.S. 430, 439 (2020) (holding that "foreign organizations operating abroad do not possess constitutional rights"), the Government may also properly seek to achieve its legitimate ends indirectly, through consultation with operators of mega-platforms. What allegedly occurred here, however, is quite different, because Meta and the Government worked cooperatively together to suppress the concededly truthful speech of Americans concerning vaccines, and the Government sought to do so for the illegitimate purpose of dampening opposition to the Government's preferred vaccine policies. See Rosenberger v. Rector & Visitors of Univ. of Va., 515 U.S. 819, 829 (1995) ("The government must abstain from regulating speech when the specific motivating ideology or the opinion or perspective of the speaker is the rationale for the restriction."). Here, it is alleged, the Government worked with "private persons to accomplish what it is constitutionally forbidden to accomplish." Norwood v. Harrison, 413 U.S. 455, 465 (1973) (citation omitted); see also National Rifle Ass'n v. Vullo, 602 U.S. 175, 190 (2024) (stating that "a government official cannot do indirectly what she is barred from doing directly").
V
I concur in the majority's opinion to the extent that it upholds the district court's dismissal of CHD's Lanham Act claim, its RICO claim, and its claims against additional Defendant Science Feedback. For the reasons I have explained, I would affirm the dismissal of the Takings Clause claim; the dismissal of the First Amendment claims against Zuckerberg and the Poynter Institute; and the Bivens First Amendment claim for monetary damages against Meta. But I would reverse as to CHD's First Amendment claim for injunctive and declaratory relief against Meta, and to that extent, I respectfully dissent.
[*]The Honorable Edward R. Korman, United States District Judge for the Eastern District of New York, sitting by designation.
[**] This summary constitutes no part of the opinion of the court. It has been prepared by court staff for the convenience of the reader.